29 results for quantum error correction


Relevance: 80.00%

Abstract:

A specialised reconfigurable architecture is targeted at wireless base-band processing. It is built to cater for multiple wireless standards, has lower power consumption than processor-based solutions, and can be scaled to run in parallel for processing multiple channels. Test resources are embedded in the architecture and testing strategies are included. The architecture is functionally partitioned according to the common operations found in wireless standards, such as CRC error correction, convolution and interleaving. These modules are linked via Virtual Wire hardware modules and route-through switch matrices, so data can be processed in any order through this interconnect structure. Virtual Wire offers the same flexibility as conventional interconnects while reducing the area occupied and the number of switches needed. The testing algorithm exhaustively scans all possible paths within the interconnection network and searches for faults in the processing modules. It starts by scanning the externally addressable memory space and testing the master controller. The controller then tests every switch in the route-through switch matrix by making loops from the shared memory to each switch; the local switch matrix is tested in the same way. Next, the local memory is scanned. Finally, pre-defined test vectors are loaded into local memory to check the processing modules. The paper compares various base-band processing solutions, describes the proposed platform and its implementation, outlines the test resources and algorithm, and concludes with the mapping of Bluetooth and GSM base-band processing onto the platform.
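The loopback-based switch test described in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the switch-matrix model, `TEST_PATTERN`, and all function names are hypothetical.

```python
# Illustrative sketch of the exhaustive switch-matrix test: drive a known
# pattern from shared memory in a loop through each switch and compare the
# readback with the pattern sent. A faulty switch corrupts the pattern.
TEST_PATTERN = 0xA5A5

def loop_through(switch, faulty_switches):
    """Send TEST_PATTERN through one switch and return the readback.
    Faulty switches are modelled as flipping one bit."""
    if switch in faulty_switches:
        return TEST_PATTERN ^ 0x0001  # corrupted readback
    return TEST_PATTERN

def scan_switch_matrix(switches, faulty_switches=frozenset()):
    """Exhaustively test every switch; return the set of switches whose
    loopback readback does not match the pattern sent."""
    return {s for s in switches
            if loop_through(s, faulty_switches) != TEST_PATTERN}
```

For example, `scan_switch_matrix(range(8), {3})` flags switch 3 and nothing else; the same loop structure would then be repeated for the local switch matrix and memories.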

Relevance: 80.00%

Abstract:

Category-management models serve to assist in the development of plans for pricing and promotions of individual brands. Techniques to solve the models can have problems of accuracy and interpretability because they are susceptible to spurious regression problems due to nonstationary time-series data. Improperly stated nonstationary systems can reduce the accuracy of the forecasts and undermine the interpretation of the results. This is problematic because recent studies indicate that sales are often a nonstationary time-series. Newly developed correction techniques can account for nonstationarity by incorporating error-correction terms into the model when using a Bayesian Vector Error-Correction Model. The benefit of using such a technique is that shocks to control variates can be separated into permanent and temporary effects and allow cointegration of series for analysis purposes. Analysis of a brand data set indicates that this is important even at the brand level. Thus, additional information is generated that allows a decision maker to examine controllable variables in terms of whether they influence sales over a short or long duration. Only products that are nonstationary in sales volume can be manipulated for long-term profit gain, and promotions must be cointegrated with brand sales volume. The brand data set is used to explore the capabilities and interpretation of cointegration.
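The error-correction idea behind this abstract can be illustrated with the classical two-step Engle-Granger procedure, a much simpler relative of the Bayesian VECM used in the paper. The sketch below simulates a cointegrated pair, estimates the long-run relation, and recovers a negative error-correction coefficient, meaning deviations from equilibrium are temporary; the data and coefficients are simulated, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# x: nonstationary driver (random walk); y: cointegrated with x
x = np.cumsum(rng.normal(size=n))
y = 2.0 * x + rng.normal(size=n)          # long-run relation y ~ 2x

# Step 1: estimate the long-run relation, form the error-correction term
beta = np.linalg.lstsq(np.column_stack([x, np.ones(n)]), y, rcond=None)[0]
z = y - (beta[0] * x + beta[1])           # deviation from equilibrium

# Step 2: regress the short-run change on the lagged deviation
dy = np.diff(y)
alpha = np.linalg.lstsq(np.column_stack([z[:-1], np.ones(n - 1)]),
                        dy, rcond=None)[0][0]

# alpha < 0: deviations from the long-run relation are corrected,
# i.e. shocks to y are temporary rather than permanent
```

The sign and size of `alpha` is exactly the kind of quantity that separates temporary from permanent effects of a promotion shock in the models the abstract describes.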

Relevance: 80.00%

Abstract:

This paper investigates the hypotheses that the recently established Mexican stock index futures market effectively serves the price discovery function, and that the introduction of futures trading has provoked volatility in the underlying spot market. We test both hypotheses simultaneously with daily data from Mexico in the context of a modified EGARCH model that also incorporates possible cointegration between the futures and spot markets. The evidence supports both hypotheses, suggesting that the futures market in Mexico is a useful price discovery vehicle, although futures trading has also been a source of instability for the spot market. Several managerial implications are derived and discussed.

Relevance: 80.00%

Abstract:

The Euro has been used as the largest weighting element in a basket of currencies for forex arrangements adopted by several Central European countries outside the European Union (EU). The paper uses a new time-series approach to examine the relationship between the Euro exchange rate and the level of foreign reserves. It employs zero-non-zero (ZNZ) patterned vector error-correction (VECM) modelling to investigate Granger causal relations among foreign reserves, the European Monetary Union money supply and the Euro exchange rate. The findings confirm that foreign reserves may influence movements in the Euro's exchange rate. Further, ZNZ patterned VECM modelling with exogenous variables is used to estimate the amount of foreign reserves currently required in order to again achieve a targeted Euro exchange rate.

Relevance: 80.00%

Abstract:

This paper reinvestigates the energy consumption-GDP growth nexus in a panel error correction model using data on 20 net energy importers and exporters from 1971 to 2002. Among the energy exporters, there was bidirectional causality between economic growth and energy consumption in the developed countries in both the short and long run, while in the developing countries energy consumption stimulates growth only in the short run. The former result is also found for energy importers and the latter result exists only for the developed countries within this category. In addition, compared to the developing countries, the developed countries' elasticity response in terms of economic growth from an increase in energy consumption is larger, although its income elasticity is lower and less than unitary. Lastly, the implications for energy policy calling for a more holistic approach are discussed.

Relevance: 40.00%

Abstract:

One of the most significant challenges facing the development of linear optics quantum computing (LOQC) is mode mismatch, whereby photon distinguishability is introduced within circuits, undermining quantum interference effects. We examine the effects of mode mismatch on the parity (or fusion) gate, the fundamental building block in several recent LOQC schemes. We derive simple error models for the effects of mode mismatch on its operation, and relate these error models to current fault-tolerant-threshold estimates.

Relevance: 30.00%

Abstract:

We define several quantitative measures of the robustness of a quantum gate against noise. Exact analytic expressions for the robustness against depolarizing noise are obtained for all bipartite unitary quantum gates, and it is found that the controlled-NOT gate is the most robust two-qubit quantum gate, in the sense that it is the quantum gate which can tolerate the most depolarizing noise and still generate entanglement. Our results enable us to place several analytic upper bounds on the value of the threshold for quantum computation, with the best bound in the most pessimistic error model being p_th ≤ 0.5.
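The "tolerate depolarizing noise and still generate entanglement" criterion can be checked numerically in a simple special case: push |00⟩ through H⊗I and CNOT to get a Bell state, mix it with white noise of strength p, and test entanglement with the Peres-Horodecki partial-transpose criterion. This sketch is an illustration of that criterion, not the paper's derivation; it recovers the well-known p < 2/3 threshold for this particular state.

```python
import numpy as np

# Bell state |Phi+> = CNOT (H x I) |00>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], float)
psi = CNOT @ np.kron(H, np.eye(2)) @ np.array([1, 0, 0, 0], float)
bell = np.outer(psi, psi)

def is_entangled(p):
    """Mix the Bell state with depolarizing noise of strength p and apply
    the partial-transpose test (a negative eigenvalue implies entanglement)."""
    rho = (1 - p) * bell + p * np.eye(4) / 4
    # partial transpose on the second qubit: swap its row/column indices
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(rho_pt).min() < -1e-12
```

For this state the test flips at p = 2/3: `is_entangled(0.5)` is True while `is_entangled(0.7)` is False, consistent with the kind of noise-tolerance threshold the abstract quantifies.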

Relevance: 30.00%

Abstract:

The use of presence/absence data in wildlife management and biological surveys is widespread. There is a growing interest in quantifying the sources of error associated with these data. We show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models using simulated data. Then we introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, that permits the estimation of the rate of false-negative errors and the correction of estimates of the probability of occurrence for false-negative errors by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. The method with three repeated visits eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve precision of estimates to levels comparable to that achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are ≤ 50%, greater efficiency is gained by adding more sites, whereas when error rates are > 50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
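The intuition behind repeated visits can be seen with a short calculation. This is a generic illustration of why multiple visits help, not the paper's ZIB estimator: if a single visit misses a present species with false-negative rate q, the chance of missing it on all of k independent visits falls geometrically.

```python
def miss_probability(q, k):
    """Probability that k independent visits all fail to detect a species
    that is present, given a per-visit false-negative rate q."""
    return q ** k

def detect_probability(q, k):
    """Probability of at least one detection in k visits."""
    return 1 - q ** k
```

With q = 0.5, a single visit misses half of the occupied sites, but three visits miss only `miss_probability(0.5, 3)` = 12.5% of them, which is why even a few repeat visits sharply reduce the bias the abstract describes.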

Relevance: 30.00%

Abstract:

In this paper we investigate the effect of dephasing on proposed quantum gates for the solid-state Kane quantum computing architecture. Using a simple model of the decoherence, we find that the typical error in a controlled-NOT gate is 8.3×10⁻⁵. We also compute the fidelities of Z, X, swap, and controlled Z operations under a variety of dephasing rates. We show that these numerical results are comparable with the error threshold required for fault tolerant quantum computation.
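A toy version of this kind of fidelity calculation, far simpler than the Kane-architecture model in the paper, applies a phase-flip (dephasing) channel ρ → (1−p)ρ + p·ZρZ to the |+⟩ state, which is maximally sensitive to dephasing, and computes the resulting state fidelity.

```python
import numpy as np

Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> state
rho0 = np.outer(plus, plus)

def dephase(rho, p):
    """Phase-flip channel: apply Z with probability p."""
    return (1 - p) * rho + p * Z @ rho @ Z

def fidelity_plus(p):
    """State fidelity <+|rho|+> of the dephased |+> state."""
    return float(plus @ dephase(rho0, p) @ plus)
```

Since Z maps |+⟩ to the orthogonal |−⟩, the fidelity drops linearly as 1 − p (e.g. `fidelity_plus(0.1)` = 0.9); gate-level error estimates like the 8.3×10⁻⁵ quoted above come from analogous, but multi-qubit, computations.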

Relevance: 30.00%

Abstract:

Very few empirically validated interventions for improving metacognitive skills (i.e., self-awareness and self-regulation) and functional outcomes have been reported. This single-case experimental study presents JM, a 36-year-old man with a very severe traumatic brain injury (TBI) who demonstrated long-term awareness deficits. Treatment at four years post-injury involved a metacognitive contextual intervention based on a conceptualization of neuro-cognitive, psychological, and socio-environmental factors contributing to his awareness deficits. The 16-week intervention targeted error awareness and self-correction in two real-life settings: (a) cooking at home; and (b) volunteer work. Outcome measures included behavioral observation of error behavior and standardized awareness measures. Relative to baseline performance in the cooking setting, JM demonstrated a 44% reduction in error frequency and increased self-correction. Although no spontaneous generalization was evident in the volunteer work setting, specific training in this environment led to a 39% decrease in errors. JM later gained paid employment and received brief metacognitive training in his work environment. JM's global self-knowledge of deficits assessed by self-report was unchanged after the program. Overall, the study provides preliminary support for a metacognitive contextual approach to improve error awareness and functional outcome in real-life settings.

Relevance: 30.00%

Abstract:

First principles simulations of the quantum dynamics of interacting Bose gases using the stochastic gauge representation are analysed. In a companion paper, we showed how the positive-P representation can be applied to these problems using stochastic differential equations. That method, however, is limited by increased sampling error as time evolves. Here, we show how the sampling error can be greatly reduced and the simulation time significantly extended using stochastic gauges. In particular, local stochastic gauges (a subset) are investigated. Improvements are confirmed in numerical calculations of single-, double- and multi-mode systems in the weak-mode coupling regime. Convergence issues are investigated, including the recognition of two modes by which stochastic equations produced by phase-space methods in general can diverge: movable singularities and a noise-weight relationship. The example calculated here displays wave-like behaviour in spatial correlation functions propagating in a uniform 1D gas after a sudden change in the coupling constant. This could in principle be tested experimentally using Feshbach resonance methods.
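The sampling error this abstract refers to is the generic Monte Carlo statistical error, which shrinks as 1/√N in the number of stochastic trajectories; the gauge freedom is used to keep the per-trajectory variance from growing in time. The quick numerical check below demonstrates only the 1/√N law itself, not the stochastic-gauge machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

def sampling_error(n, trials=2000):
    """Standard deviation across repeated trials of an n-sample
    Monte Carlo estimate of a zero-mean quantity."""
    return rng.normal(size=(trials, n)).mean(axis=1).std()
```

Quadrupling the number of samples halves the error: `sampling_error(400)` is about half of `sampling_error(100)`, so any gauge choice that lowers the per-sample variance buys the same accuracy with far fewer trajectories.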

Relevance: 30.00%

Abstract:

We describe a generalization of the cluster-state model of quantum computation to continuous-variable systems, along with a proposal for an optical implementation using squeezed-light sources, linear optics, and homodyne detection. For universal quantum computation, a nonlinear element is required. This can be satisfied by adding to the toolbox any single-mode non-Gaussian measurement, while the initial cluster state itself remains Gaussian. Homodyne detection alone suffices to perform an arbitrary multimode Gaussian transformation via the cluster state. We also propose an experiment to demonstrate cluster-based error reduction when implementing Gaussian operations.

Relevance: 30.00%

Abstract:

A numerical method is introduced to determine the nuclear magnetic resonance frequency of a donor (P-31) doped inside a silicon substrate under the influence of an applied electric field. This phosphorus donor has been suggested for operation as a qubit for the realization of a solid-state scalable quantum computer. The operation of the qubit is achieved by a combination of the rotation of the phosphorus nuclear spin through a globally applied magnetic field and the selection of the phosphorus nucleus through a locally applied electric field. To realize the selection function, it is required to know the relationship between the applied electric field and the change of the nuclear magnetic resonance frequency of phosphorus. In this study, based on the wave functions obtained by the effective-mass theory, we introduce an empirical correction factor to the wave functions at the donor nucleus. Using the corrected wave functions, we formulate a first-order perturbation theory for the perturbed system under the influence of an electric field. In order to calculate the potential distributions inside the silicon and the silicon dioxide layers due to the applied electric field, we use the multilayered Green's functions and solve an integral equation by the moment method. This enables us to consider more realistic, arbitrarily shaped, three-dimensional qubit structures. With the calculation of the potential distributions, we have investigated the effects of the thicknesses of the silicon and silicon dioxide layers, the relative position of the donor, and the applied electric field on the nuclear magnetic resonance frequency of the donor.

Relevance: 30.00%

Abstract:

This article demonstrates that a commonly-made assumption in quantum yield calculations may produce errors of up to 25% in extreme cases and can be corrected by a simple modification to the analysis.