375 results for SYSTEMATIC-ERROR CORRECTION
at Queensland University of Technology - ePrints Archive
Abstract:
As order dependencies between process tasks can become complex, it is easy to make mistakes in process model design, especially behavioral ones such as deadlocks. Notions such as soundness formalize behavioral errors, and tools exist that can identify such errors. However, these tools do not assist with correcting the process models. Error correction can be very challenging, as the intentions of the process modeler are not known and there may be many ways in which an error can be corrected. We present a novel technique for automatic error correction in process models based on simulated annealing. The technique identifies a number of alternative process models that resolve one or more errors in the original model. It has been implemented and validated on a sample of industrial process models. The tests show that at least one sound solution can be found for each input model and that response times are short.
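The abstract leaves the model-specific details open, but the simulated-annealing loop it builds on is standard. Below is a minimal sketch in Python; `count_errors` (scoring a candidate by its number of behavioral errors) and `random_edit` (a random local modification of the model) are hypothetical placeholders, not the paper's operators:

```python
import math
import random

def simulated_annealing(model, count_errors, random_edit,
                        t_start=1.0, t_end=0.01, alpha=0.95, steps_per_t=50):
    """Generic simulated-annealing loop: 'model' is any candidate solution,
    'count_errors' scores it (lower is better), 'random_edit' returns a
    perturbed copy. Both callbacks stand in for the paper's model-specific
    operators, which the abstract does not detail."""
    current, current_cost = model, count_errors(model)
    best, best_cost = current, current_cost
    t = t_start
    while t > t_end and best_cost > 0:
        for _ in range(steps_per_t):
            candidate = random_edit(current)
            cost = count_errors(candidate)
            # Always accept improvements; accept regressions with a
            # probability that shrinks as the temperature cools.
            if cost < current_cost or random.random() < math.exp((current_cost - cost) / t):
                current, current_cost = candidate, cost
                if cost < best_cost:
                    best, best_cost = candidate, cost
        t *= alpha  # geometric cooling schedule
    return best, best_cost
```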
Abstract:
Error correction is perhaps the most widely used method for responding to student writing. While various studies have investigated the effectiveness of providing error correction, there has been relatively little research incorporating teachers' beliefs and practices and students' preferences in written error correction. The current study adopted features of an ethnographic research design to explore the beliefs and practices of ESL teachers and to investigate the preferences of L2 students regarding written error correction in the context of a language institute in the Brisbane metropolitan district. In this study, two ESL teachers and two groups of adult intermediate L2 students were interviewed and observed. The beliefs and practices of the teachers were elicited through interviews and classroom observations; the preferences of L2 students were elicited through focus group interviews. Responses of the participants were coded and analysed. Results of the teacher interviews showed that teachers believe providing written error correction has both advantages and disadvantages. Teachers believe that written error correction helps students improve their proof-reading skills so that they can revise their writing more efficiently, but they also find that providing it is very time-consuming. Furthermore, teachers prefer to use explicit written feedback strategies during the early stages of a language course and to move to more implicit written error correction later, in order to facilitate language learning. Results of the focus group interviews, on the other hand, suggest that students regard their teachers' practice of written error correction as important in helping them locate their errors and revise their writing, although they too feel that the process is time-consuming. Nevertheless, students want and expect their teachers to provide written feedback because they believe the benefits of receiving feedback on their writing outweigh the apparent disadvantages of their teachers' written error correction strategies.
Abstract:
Black et al. (2004) identified a systematic difference between LA–ICP–MS and TIMS measurements of 206Pb/238U in zircons, which they correlated with the incompatible trace element content of the zircon. We show that the offset between the LA–ICP–MS and TIMS measured 206Pb/238U correlates more strongly with the total radiogenic Pb than with any incompatible trace element. This suggests that the cause of the 206Pb/238U offset is related to differences in the radiation damage (alpha dose) between the reference and the unknowns. We test this hypothesis in two ways. First, we show that there is a strong correlation between the difference in the LA–ICP–MS and TIMS measured 206Pb/238U and the difference in the alpha dose received by the unknown and reference zircons. The LA–ICP–MS ages of the zircons we have dated range from 5.1% younger than their TIMS age to 2.1% older, depending on whether the unknown or the reference received the higher alpha dose. Second, we show that by annealing both reference and unknown zircons at 850 °C for 48 h in air we can eliminate the alpha-dose-induced differences in measured 206Pb/238U. This was demonstrated by analyzing six reference zircons a minimum of 16 times in two round robin experiments: the first consisting of unannealed zircons and the second of annealed grains. The maximum offset between the LA–ICP–MS and TIMS measured 206Pb/238U for the unannealed zircons was 2.3%, which reduced to 0.5% for the annealed grains, in line with the within-session precision predicted by counting statistics. Annealing unknown and reference zircons to the same state prior to analysis holds the promise of reducing the 3% external error for the measurement of 206Pb/238U of zircon by LA–ICP–MS, indicated by Klötzli et al. (2009), to better than 1%, but more analyses of annealed zircons by other laboratories are required to evaluate the true potential of the annealing method.
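To make the alpha-dose comparison concrete, here is a back-of-the-envelope sketch of the standard cumulative alpha-dose calculation (8, 7 and 6 alpha particles per completed decay chain of 238U, 235U and 232Th respectively). The function and its argument conventions are illustrative, not the paper's code:

```python
import math

# Standard decay constants in 1/yr.
L238, L235, L232 = 1.55125e-10, 9.8485e-10, 4.9475e-11
AVOGADRO = 6.02214076e23

def alpha_dose(u_ppm, th_ppm, age_ma):
    """Cumulative alpha dose (alpha events per gram) accumulated over
    'age_ma' million years, from present-day U and Th concentrations in
    ppm: each 238U chain emits 8 alphas, 235U 7, and 232Th 6."""
    t = age_ma * 1e6  # years
    n238 = u_ppm * 1e-6 * AVOGADRO * 0.9928 / 238.029   # parent atoms/g today
    n235 = u_ppm * 1e-6 * AVOGADRO * 0.0072 / 238.029
    n232 = th_ppm * 1e-6 * AVOGADRO / 232.038
    return (8 * n238 * (math.exp(L238 * t) - 1)
            + 7 * n235 * (math.exp(L235 * t) - 1)
            + 6 * n232 * (math.exp(L232 * t) - 1))

# e.g. a 500 Ma zircon with 300 ppm U and 150 ppm Th:
print(f"{alpha_dose(300, 150, 500):.3e} alpha events per gram")
```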
Abstract:
Phylogenetic inference from sequences can be misled by both sampling (stochastic) error and systematic error (nonhistorical signals where reality differs from our simplified models). A recent study of eight yeast species using 106 concatenated genes from complete genomes showed that even small internal edges of a tree received 100% bootstrap support. This effective negation of stochastic error from large data sets is important, but longer sequences exacerbate the potential for biases (systematic error) to be positively misleading. Indeed, when we analyzed the same data set using minimum evolution optimality criteria, an alternative tree received 100% bootstrap support. We identified a compositional bias as responsible for this inconsistency and showed that it is reduced effectively by coding the nucleotides as purines and pyrimidines (RY-coding), reinforcing the original tree. Thus, a comprehensive exploration of potential systematic biases is still required, even though genome-scale data sets greatly reduce sampling error.
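RY-coding itself is a one-line transformation; a minimal Python illustration:

```python
# Minimal RY-recoding: purines (A, G) -> R, pyrimidines (C, T) -> Y.
RY = str.maketrans("AGCTagct", "RRYYrryy")

def ry_code(seq: str) -> str:
    """Recode a nucleotide sequence to purine/pyrimidine symbols, discarding
    the within-class composition (e.g. GC-richness) that can mislead trees."""
    return seq.translate(RY)

print(ry_code("ATGGCATTAGC"))  # -> RYRRYRYYRRY
```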
Abstract:
Integration of biometrics is considered an attractive solution to the issues associated with password-based human authentication, as well as to the secure storage and release of cryptographic keys, one of the critical issues in modern cryptography. However, the widespread adoption of bio-cryptographic solutions is somewhat restricted by the fuzziness associated with biometric measurements. Error control mechanisms must therefore be adopted to ensure that the fuzziness of biometric inputs can be sufficiently countered. In this paper, we outline the existing error correction techniques used in bio-cryptography and explain how they are deployed in different types of solutions. Finally, we elaborate on the important factors to consider when choosing an appropriate error correction mechanism for a particular biometric-based solution.
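As a concrete example of the kind of error-control mechanism surveyed, the following is a toy sketch of a fuzzy-commitment construction: helper data binds a key to a biometric, and an error-correcting code absorbs measurement noise. A simple repetition code stands in for the stronger BCH or Reed-Solomon codes used in practice, and all sizes and names here are illustrative:

```python
import hashlib
import secrets

REP = 5  # repetition factor: corrects up to 2 flipped bits per key bit

def encode(bits):          # repetition-code encoder
    return [b for bit in bits for b in [bit] * REP]

def decode(bits):          # majority-vote decoder
    return [int(sum(bits[i:i + REP]) > REP // 2)
            for i in range(0, len(bits), REP)]

def commit(key_bits, biometric_bits):
    """Bind a secret key to a biometric: store only the hash of the key and
    the XOR offset (helper data), neither of which reveals the biometric."""
    codeword = encode(key_bits)
    helper = [c ^ b for c, b in zip(codeword, biometric_bits)]
    digest = hashlib.sha256(bytes(key_bits)).hexdigest()
    return helper, digest

def release(helper, digest, biometric_bits):
    """Recover the key from a *noisy* re-measurement; the code absorbs the
    fuzziness as long as only a few bits per block have flipped."""
    noisy_codeword = [h ^ b for h, b in zip(helper, biometric_bits)]
    key_bits = decode(noisy_codeword)
    return key_bits if hashlib.sha256(bytes(key_bits)).hexdigest() == digest else None

key = [secrets.randbelow(2) for _ in range(16)]
bio = [secrets.randbelow(2) for _ in range(16 * REP)]
helper, digest = commit(key, bio)
noisy = list(bio); noisy[3] ^= 1; noisy[40] ^= 1   # two measurement errors
assert release(helper, digest, noisy) == key
```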
Abstract:
An unstructured mesh finite volume discretisation method for simulating diffusion in anisotropic media in two-dimensional space is discussed. This technique is considered an extension of the fully implicit hybrid control-volume finite-element method, and it retains the local continuity of the flux at the control volume faces. A least squares function reconstruction technique together with a new flux decomposition strategy is used to obtain an accurate flux approximation at the control volume face, ensuring that the spatial discretisation maintains second-order accuracy overall. This paper highlights that the new technique coincides with the traditional shape function technique when the correction term is neglected, and that it significantly increases the accuracy of the previous linear scheme on coarse meshes when applied to media that exhibit very strong to extreme anisotropy ratios. It is concluded that the method can be used on both regular and irregular meshes, and appears independent of the mesh quality.
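The paper's hybrid scheme is not reproduced here, but the least-squares reconstruction ingredient it builds on is easy to illustrate. A sketch in Python with illustrative helper names: a gradient is fitted from neighbouring cell-centre values and combined with an anisotropic diffusion tensor to form a face flux:

```python
import numpy as np

def ls_gradient(xc, uc, xn, un):
    """Least-squares reconstruction of the solution gradient at cell centre
    'xc': solve min ||A g - b||, where each row of A is a neighbour offset
    and each entry of b the corresponding value difference."""
    A = np.asarray(xn) - np.asarray(xc)        # (n_neighbours, 2) offsets
    b = np.asarray(un) - uc                    # value differences
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g

def face_flux(grad_u, normal, K):
    """Diffusive flux through a face with unit normal 'normal' for an
    anisotropic diffusion tensor K: q = -(K grad u) . n"""
    return -normal @ (K @ grad_u)

# A linear field u = 2x + 3y is recovered exactly, as second-order accuracy demands:
xc, uc = (0.0, 0.0), 0.0
xn = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 0.5)]
un = [2.0, 3.0, 5.0, -0.5]
g = ls_gradient(xc, uc, xn, un)                # -> [2. 3.]
K = np.array([[10.0, 1.0], [1.0, 0.1]])        # strongly anisotropic tensor
print(g, face_flux(g, np.array([1.0, 0.0]), K))
```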
Abstract:
This paper considers the implications of the permanent/transitory decomposition of shocks for identification of structural models in the general case where the model might contain more than one permanent structural shock. It provides a simple and intuitive generalization of the influential work of Blanchard and Quah [1989. The dynamic effects of aggregate demand and supply disturbances. The American Economic Review 79, 655–673], and shows that structural equations with known permanent shocks cannot contain error correction terms, thereby freeing up the latter to be used as instruments in estimating their parameters. The approach is illustrated by a re-examination of the identification schemes used by Wickens and Motto [2001. Estimating shocks and impulse response functions. Journal of Applied Econometrics 16, 371–387], Shapiro and Watson [1988. Sources of business cycle fluctuations. NBER Macroeconomics Annual 3, 111–148], King et al. [1991. Stochastic trends and economic fluctuations. American Economic Review 81, 819–840], Gali [1992. How well does the ISLM model fit postwar US data? Quarterly Journal of Economics 107, 709–735; 1999. Technology, employment, and the business cycle: Do technology shocks explain aggregate fluctuations? American Economic Review 89, 249–271] and Fisher [2006. The dynamic effects of neutral and investment-specific technology shocks. Journal of Political Economy 114, 413–451].
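The result can be read against the standard error-correction (VECM) representation; the notation below is assumed for illustration, not taken from the paper:

```latex
% Error-correction representation of a cointegrated system:
\Delta y_t = \alpha \beta' y_{t-1}
           + \sum_{i=1}^{p-1} \Gamma_i \, \Delta y_{t-i}
           + B \varepsilon_t
% Here \beta' y_{t-1} collects the error-correction terms and
% \varepsilon_t stacks the structural shocks. The paper's point: an
% equation driven by a known permanent shock has a zero row in \alpha,
% so it contains no error-correction terms, and \beta' y_{t-1} can
% instead serve as instruments for estimating that equation's parameters.
```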
Abstract:
This study investigates the short-run dynamics and long-run equilibrium relationship between residential electricity demand and the factors influencing demand (per capita income, the price of electricity, the price of kerosene oil and the price of liquefied petroleum gas) using annual data for Sri Lanka for the period 1960-2007. The study uses unit root, cointegration and error-correction models. The long-run demand elasticities with respect to income, own price and the price of kerosene oil (a substitute) were estimated to be 0.78, -0.62 and 0.14 respectively. The short-run elasticities for the same variables were estimated to be 0.32, -0.16 and 0.10 respectively. Liquefied petroleum (LP) gas is a substitute for electricity only in the short run, with an elasticity of 0.09. The main findings of the paper support the following: (1) increasing the price of electricity is not the most effective tool to reduce electricity consumption; (2) existing subsidies on electricity consumption can be removed without reducing government revenue; (3) the long-run income elasticity of demand shows that any future increase in household incomes is likely to significantly increase the demand for electricity; and (4) any power generation plans that consider only current per capita consumption and population growth should be revised to take into account potential future income increases, in order to avoid power shortages in the country.
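An illustrative Engle-Granger-style specification consistent with the reported elasticities (variable names are assumed, not taken from the paper):

```latex
% Long-run equilibrium relationship:
\ln E_t = \beta_0 + 0.78 \ln Y_t - 0.62 \ln P^{e}_t + 0.14 \ln P^{k}_t + u_t
% Short-run dynamics, with the lagged residual as the error-correction term:
\Delta \ln E_t = \gamma_0 + 0.32 \, \Delta \ln Y_t - 0.16 \, \Delta \ln P^{e}_t
               + 0.10 \, \Delta \ln P^{k}_t + 0.09 \, \Delta \ln P^{g}_t
               + \lambda \, \hat{u}_{t-1} + \epsilon_t
% E: residential electricity demand; Y: per capita income; P^e, P^k, P^g:
% prices of electricity, kerosene and LP gas; \lambda < 0 measures the
% speed of adjustment back to the long-run equilibrium.
```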
Abstract:
Secret-sharing schemes describe methods to securely share a secret among a group of participants. A properly constructed secret-sharing scheme guarantees that the share belonging to one participant reveals nothing about the shares of others, or about the secret itself. Besides their obvious use for distributing a secret, secret-sharing schemes have also been used in secure multi-party computation and in redundant residue number systems for error correction codes. In this paper, we propose that a secret-sharing scheme be used as a primitive in a Network-based Intrusion Detection System (NIDS) to detect attacks in encrypted networks. Encrypted networks such as Virtual Private Networks (VPNs) fully encrypt network traffic, which can include both malicious and non-malicious traffic, and traditional NIDS cannot monitor such encrypted traffic. Our work uses a combination of Shamir's secret-sharing scheme and randomised network proxies to enable a traditional NIDS to function normally in a VPN environment. We introduce a novel protocol that utilises the secret-sharing scheme to detect attacks in encrypted networks.
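The detection protocol itself is not reproduced here, but the Shamir primitive it builds on is compact. A minimal (k, n) threshold sketch in Python over a prime field, with an illustrative field size:

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for the demo secret

def make_shares(secret, k, n):
    """Split 'secret' into n shares, any k of which reconstruct it: evaluate
    a random degree-(k-1) polynomial with constant term 'secret' at x=1..n."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):        # Horner's rule mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret).
    Any k distinct shares suffice; fewer reveal nothing."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```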