28 results for Error Correction Coding, Error Resilience, MPEG-4, Video Coding
at University of Queensland eSpace - Australia
Abstract:
We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].
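As a reminder of the stabilizer bookkeeping behind this claim (standard formalism, not the paper's derivation): a stabilizer code on n physical qubits with r independent generators encodes k = n - r logical qubits, so protecting an (n-1)-qubit logical state requires exactly one generator. A minimal illustrative single-generator code (an assumption for illustration, not necessarily the code used in the paper) is

    S = \langle X_1 X_2 \cdots X_n \rangle, \qquad k = n - r = n - 1,

whose +1 eigenspace is a codespace of dimension 2^{n-1}.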
Abstract:
This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors derived from a prior on the cointegrating space. This prior arises naturally from treating the cointegrating space as the parameter of interest in inference, and it overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and the Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments. (C) 2003 Elsevier B.V. All rights reserved.
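For context, the generic machinery involved (standard Bayesian model comparison, not the paper's specific derivation) combines a prior over the rank r with a Laplace-approximated marginal likelihood:

    p(r \mid y) \propto p(r)\, p(y \mid r),
    p(y \mid r) = \int p(y \mid \theta_r, r)\, p(\theta_r \mid r)\, d\theta_r
                \approx (2\pi)^{d_r/2}\, |H(\hat{\theta}_r)|^{-1/2}\, p(y \mid \hat{\theta}_r, r)\, p(\hat{\theta}_r \mid r),

where \hat{\theta}_r is the posterior mode of the rank-r model, d_r its parameter dimension, and H the negative Hessian of the log integrand at the mode.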
Abstract:
A quantum circuit implementing 5-qubit quantum-error correction on a linear-nearest-neighbor architecture is described. The canonical decomposition is used to construct fast and simple gates that incorporate the necessary swap operations allowing the circuit to achieve the same depth as the current least depth circuit. Simulations of the circuit's performance when subjected to discrete and continuous errors are presented. The relationship between the error rate of a physical qubit and that of a logical qubit is investigated with emphasis on determining the concatenated error correction threshold.
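For reference, the [[5,1,3]] code such circuits implement is, in its textbook presentation (the abstract itself does not list the generators), stabilized by the cyclic generators

    M_1 = X Z Z X I
    M_2 = I X Z Z X
    M_3 = X I X Z Z
    M_4 = Z X I X Z

with logical operators \bar{X} = XXXXX and \bar{Z} = ZZZZZ; the depth and swap-gate details are specific to the paper's linear-nearest-neighbor construction.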
Abstract:
We describe an implementation of quantum error correction that operates continuously in time and requires no active interventions such as measurements or gates. The mechanism for carrying away the entropy introduced by errors is a cooling procedure. We evaluate the effectiveness of the scheme by simulation, and remark on its connections to some recently proposed error prevention procedures.
Abstract:
Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECM models assume non-zero entries in all their coefficient matrices. However, applications of VECM models to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECM models may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECM models may weaken the power of statistical inference. In this paper, it is argued that the zero–non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series integrated of order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies are used to demonstrate the usefulness of the algorithm in tests of purchasing power parity and a three-variable system involving the stock market.
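For context, the standard VECM form referred to here (common notation, not specific to the paper's algorithm) for an m-dimensional series y_t is

    \Delta y_t = \alpha \beta' y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \varepsilon_t,

where \beta holds the cointegrating vectors and \alpha the loading vectors; a ZNZ-patterned VECM constrains individual entries of \alpha, \beta, and the \Gamma_i to zero, which is what makes indirect causality and Granger non-causality directly testable.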
Abstract:
A framework for developing marketing category management decision support systems (DSS) based upon the Bayesian Vector Autoregressive (BVAR) model is extended. Since the BVAR model is vulnerable to permanent and temporary shifts in purchasing patterns over time, a form that can correct for the shifts and still provide the other advantages of the BVAR is a Bayesian Vector Error-Correction Model (BVECM). We present the mechanics of extending the DSS to move from a BVAR model to the BVECM model for the category management problem. Several additional iterative steps are required in the DSS to allow the decision maker to arrive at the best forecast possible. The revised marketing DSS framework and model fitting procedures are described. Validation is conducted on a sample problem.
Abstract:
Operator quantum error correction is a recently developed theory that provides a generalized and unified framework for active error correction and passive error-avoiding schemes. In this Letter, we describe these codes using the stabilizer formalism. This is achieved by adding a gauge group to stabilizer codes that defines an equivalence class between encoded states. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 4 of its 8 stabilizer generators, leading to a simpler decoding procedure and a wider class of logical operations without affecting its essential properties. This opens the path to possible improvements of the error threshold of fault-tolerant quantum computing.
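A sketch of the gauge structure being described (using the common 3-by-3 grid labelling of the nine qubits, an assumed convention): X-type pair operators acting within a column, such as X_1 X_4, commute with the retained stabilizers but act only on the virtual gauge qubits, so they can be absorbed into the gauge group. The remaining stabilizer is then generated by the four Bacon-Shor operators

    S_X^{(1)} = X_1 X_2 X_3 X_4 X_5 X_6,   S_X^{(2)} = X_4 X_5 X_6 X_7 X_8 X_9,
    S_Z^{(1)} = Z_1 Z_2 Z_4 Z_5 Z_7 Z_8,   S_Z^{(2)} = Z_2 Z_3 Z_5 Z_6 Z_8 Z_9,

where the two Z-type generators are products of the pairwise ZZ stabilizers of the original Shor code.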
Abstract:
This paper is an expanded and more detailed version of the work [1] in which the Operator Quantum Error Correction formalism was introduced. This is a new scheme for the error correction of quantum operations that incorporates the known techniques - i.e. the standard error correction model, the method of decoherence-free subspaces, and the noiseless subsystem method - as special cases, and relies on a generalized mathematical framework for noiseless subsystems that applies to arbitrary quantum operations. We also discuss a number of examples and introduce the notion of unitarily noiseless subsystems.
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, there are many people who need to code models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, the comparison of simulation data with that of commercial models leads only to the detection, not the isolation, of errors. Identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
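A minimal numerical sketch of the final isolation step (the feature matrices and residual below are hypothetical, and the observer design itself is not reproduced): classify a residual by the error-class subspace that captures the largest share of its energy.

    import numpy as np

    def isolate_error_class(residual, feature_matrices):
        # Score each error class by the fraction of residual energy lying
        # in the subspace spanned by the columns of its feature matrix.
        scores = []
        for F in feature_matrices:
            P = F @ np.linalg.pinv(F)  # orthogonal projector onto col(F)
            scores.append(np.linalg.norm(P @ residual) / np.linalg.norm(residual))
        return int(np.argmax(scores))

    # Hypothetical example: two error classes with 1-D feature subspaces.
    F1 = np.array([[1.0], [0.0], [0.0]])
    F2 = np.array([[0.0], [1.0], [1.0]])
    r = np.array([0.1, 0.9, 1.1])            # residual from the observer
    print(isolate_error_class(r, [F1, F2]))  # -> 1 (second error class)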
Abstract:
Time motion analysis is extensively used to assess the demands of team sports. At present there is only limited information on the reliability of measurements using this analysis tool. The aim of this study was to establish the reliability of an individual observer's time motion analysis of rugby union. Ten elite-level rugby players were individually tracked in Southern Hemisphere Super 12 matches using a digital video camera. The video footage was subsequently analysed by a single researcher on two occasions one month apart. The test-retest reliability was quantified as the typical error of measurement (TEM) and rated as either good (<5% TEM), moderate (5-10% TEM), or poor (>10% TEM). The total time spent in the individual movements of walking, jogging, striding, sprinting, static exertion and being stationary had moderate to poor reliability (5.8-11.1% TEM). The frequency of individual movements had good to poor reliability (4.3-13.6% TEM), while the mean duration of individual movements had moderate reliability (7.1-9.3% TEM). For the individual observer in the present investigation, time motion analysis was shown to be moderately reliable as an evaluation tool for examining the movement patterns of players in competitive rugby. These reliability values should be considered when assessing the movement patterns of rugby players within competition.
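For reference, the typical error of measurement used as the reliability statistic here is conventionally computed as (a standard definition, not quoted from the abstract)

    TEM = s_{diff} / \sqrt{2},

where s_{diff} is the standard deviation of the differences between the two analysis occasions; expressing it as a percentage of the subject mean gives the %TEM figures reported.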
Abstract:
Dizziness and/or unsteadiness are common symptoms of chronic whiplash-associated disorders. This study aimed to report the characteristics of these symptoms and determine whether there was any relationship to cervical joint position error. Joint position error, the accuracy of returning to the natural head posture following extension and rotation, was measured in 102 subjects with persistent whiplash-associated disorder and 44 control subjects. Whiplash subjects completed a neck pain index and answered questions about the characteristics of dizziness. The results indicated that subjects with whiplash-associated disorders had significantly greater joint position errors than control subjects. Within the whiplash group, those with dizziness had greater joint position errors than those without dizziness following rotation (rotation (R) 4.5° (0.3) vs 2.9° (0.4); rotation (L) 3.9° (0.3) vs 2.8° (0.4), respectively) and a higher neck pain index (55.3% (1.4) vs 43.1% (1.8)). Characteristics of the dizziness were consistent with those reported for a cervical cause, but no characteristic could predict the magnitude of joint position error. Cervical mechanoreceptor dysfunction is a likely cause of dizziness in whiplash-associated disorder.
Abstract:
The use of presence/absence data in wildlife management and biological surveys is widespread. There is a growing interest in quantifying the sources of error associated with these data. We show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models using simulated data. Then we introduce an extension of logistic modelling, the zero-inflated binomial (ZIB) model, that permits the estimation of the rate of false-negative errors and the correction of estimates of the probability of occurrence for false-negative errors by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. The method with three repeated visits eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve precision of estimates to levels comparable to that achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are ≤50%, greater efficiency is gained by adding more sites, whereas when error rates are >50%, it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
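A sketch of the likelihood behind such a model (the standard zero-inflated binomial/occupancy form, with notation assumed rather than taken from the paper): for site i surveyed on T occasions with y_i detections,

    P(y_i) = \psi_i \binom{T}{y_i} p^{y_i} (1 - p)^{T - y_i} + (1 - \psi_i)\, \mathbf{1}[y_i = 0],
    \operatorname{logit}(\psi_i) = x_i' \beta,

where \psi_i is the occupancy probability (modelled on habitat covariates x_i) and 1 - p is the per-visit false-negative rate; the repeated visits are what make p identifiable separately from \psi.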