6 results for ERROR-CORRECTION MODEL

at Universidad Complutense de Madrid


Relevance:

100.00%

Publisher:

Abstract:

Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as the errors can be fully characterized. For multiqubit operations, though, this is no longer the case: in the most general setting, analyzing the effect of the operation on the system requires full state tomography, for which the resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which further increases the number of parameters that need to be controlled. To optimize the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model as well as efficient observables for estimating the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
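
As a minimal, generic sketch (not the protocol of the paper), the following code illustrates how an unknown but constant phase shift could be estimated from single-qubit expectation values: a qubit prepared in an equal superposition that acquires a phase phi gives <X> ≈ cos(phi) and <Y> ≈ sin(phi), from which phi can be recovered. All names and numbers below are hypothetical.

import numpy as np

# Hypothetical illustration: a qubit prepared in (|0> + |1>)/sqrt(2) acquires an
# unknown constant phase phi, so <X> ~ cos(phi) and <Y> ~ sin(phi).
rng = np.random.default_rng(0)
phi_true = 0.73            # unknown phase to be recovered (made-up value)
shots = 2000               # measurement repetitions per basis

# Simulate finite-shot estimates of <X> and <Y> (binomial sampling noise).
p_x = (1 + np.cos(phi_true)) / 2   # probability of outcome +1 in the X basis
p_y = (1 + np.sin(phi_true)) / 2   # probability of outcome +1 in the Y basis
exp_x = 2 * rng.binomial(shots, p_x) / shots - 1
exp_y = 2 * rng.binomial(shots, p_y) / shots - 1

# Point estimate of the constant phase shift from the two expectation values.
phi_est = np.arctan2(exp_y, exp_x)
print(f"true phase {phi_true:.3f}, estimated phase {phi_est:.3f}")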

Relevance:

90.00%

Publisher:

Abstract:

Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011).
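
For context, the best-known instance of this mapping (independent bit-flip noise on the toric code, not the independently varied error sources studied here) relates the qubit error probability p to the coupling and temperature of a random-bond Ising model along the Nishimori line,

\[
  e^{-2\beta J} \;=\; \frac{p}{1-p}
  \qquad\Longleftrightarrow\qquad
  \beta J \;=\; \tfrac{1}{2}\ln\frac{1-p}{p},
\]

and the error threshold p_c corresponds to the point where the ordered phase of the disordered spin model terminates on this line; including measurement errors promotes the classical model to three dimensions.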

Relevance:

40.00%

Publisher:

Abstract:

Trials in a temporal two-interval forced-choice discrimination experiment consist of two sequential intervals presenting stimuli that differ from one another in magnitude along some continuum. The observer must report in which interval the stimulus had the larger magnitude. The standard difference model from signal detection theory posits that order of presentation should not affect the results of the comparison, a property known as the balance condition (J.-C. Falmagne, 1985, in Elements of Psychophysical Theory). But empirical data prove otherwise and consistently reveal what Fechner (1860/1966, in Elements of Psychophysics) called time-order errors, whereby the magnitude of the stimulus presented in one of the intervals is systematically underestimated relative to the other. Here we discuss sensory factors (temporary desensitization) and procedural glitches (short interstimulus or intertrial intervals and response bias) that might explain the time-order error, and we derive a formal model indicating how these factors make observed performance vary with presentation order despite a single underlying mechanism. Experimental results are also presented illustrating the customary failure of the balance condition and testing the hypothesis that time-order errors result from contamination by the factors included in the model.
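
As a brief sketch in generic notation (assumed here, not taken from the paper): if the internal magnitudes X_1 and X_2 elicited in the two intervals are independent and normally distributed with means mu_1, mu_2 and common variance sigma^2, the standard difference model predicts

\[
  \Pr(\text{``second interval larger''})
    = \Phi\!\left(\frac{\mu_2-\mu_1}{\sigma\sqrt{2}}\right),
\]

which is unchanged when the presentation order of the two stimuli is swapped (the balance condition), whereas an additive order-dependent bias \delta,

\[
  \Pr(\text{``second interval larger''})
    = \Phi\!\left(\frac{\mu_2-\mu_1+\delta}{\sigma\sqrt{2}}\right),
\]

makes observed performance depend on which interval carries the larger stimulus, mimicking a time-order error.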

Relevance:

30.00%

Publisher:

Abstract:

In Monte Carlo simulations of both lattice field theories and statistical-mechanics models, identities verified by exact mean values, such as Schwinger-Dyson equations, Guerra relations, Callen identities, etc., provide well-known and sensitive tests of thermalization bias as well as checks of pseudo-random-number generators. We point out that they can be further exploited as control variates to reduce statistical errors. The strategy is general, very simple, and almost costless in CPU time. The method is demonstrated in the two-dimensional Ising model at criticality, where the CPU gain factor lies between 2 and 4.
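
As a minimal, generic sketch of the control-variate idea (not the specific Schwinger-Dyson or Callen identities of the paper), suppose a simulation produces correlated samples of an observable E and of a control quantity C whose exact mean is known; the variance-reduced estimator then subtracts the measured deviation of C from its exact value, scaled by the optimal coefficient. All data below are synthetic placeholders.

import numpy as np

# Generic control-variate estimator: e holds samples of the observable of
# interest, c holds samples of a quantity whose exact mean is known (here 0),
# and the two are correlated sample by sample.
rng = np.random.default_rng(1)
n = 10_000
c = rng.normal(0.0, 1.0, size=n)                    # control variate
e = 2.5 + 0.8 * c + rng.normal(0.0, 0.5, size=n)    # correlated observable
c_exact_mean = 0.0                                  # value supplied by the exact identity

# Optimal coefficient alpha = Cov(E, C) / Var(C), estimated from the same run.
alpha = np.cov(e, c, ddof=1)[0, 1] / np.var(c, ddof=1)

plain = e.mean()
improved = e.mean() - alpha * (c.mean() - c_exact_mean)
print(f"plain estimate           : {plain:.4f}")
print(f"control-variate estimate : {improved:.4f}")

# The variance of the improved estimator is smaller by the factor (1 - rho^2),
# where rho is the correlation between E and C; a strong correlation is what
# yields the CPU gain.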

Relevance:

30.00%

Publisher:

Abstract:

The last interglacial (Eemian, 125,000 years ago) has generally been considered the warmest period of the last 200,000 years and has thus sometimes been used as a reference for greenhouse projections. Here we report results from a coupled ocean-atmosphere climate model of the surface temperature response to changes in radiative forcing at the last interglacial. Although the model generates the expected summer warming in the northern hemisphere, winter cooling of a comparable magnitude occurs over North Africa and tropical Asia. The global annual mean temperature for the Eemian run is 0.3 degrees C cooler than in the control run. Validation of simulated sea surface temperatures (SSTs) against reconstructed SSTs supports this conclusion and also the assumption that the flux correction, fitted for the present state, operates satisfactorily for modest perturbations. Our results imply that, contrary to conventional expectations, Eemian global temperatures may already have been reached by the mid-20th century.

Relevance:

30.00%

Publisher:

Abstract:

By performing a high-statistics simulation of the D = 4 random-field Ising model at zero temperature for different shapes of the random-field distribution, we show that the model is ruled by a single universality class. We compute to a high accuracy the complete set of critical exponents for this class, including the correction-to-scaling exponent. Our results indicate that in four dimensions (i) dimensional reduction as predicted by the perturbative renormalization group does not hold and (ii) three independent critical exponents are needed to describe the transition.
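
For reference, the correction-to-scaling exponent omega enters such analyses through the standard finite-size-scaling ansatz, written here in generic notation rather than that of the paper: an observable O measured on systems of linear size L at distance t from the critical point behaves as

\[
  O(t, L) = L^{x_O/\nu}\,\tilde{O}\!\left(t\,L^{1/\nu}\right)
            \left(1 + a\,L^{-\omega} + \dots\right),
\]

where x_O is the scaling dimension of O, \nu is the correlation-length exponent, \tilde{O} is a universal scaling function, and a is a nonuniversal amplitude.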