49 results for ERROR-CORRECTION MODEL
Abstract:
A central problem in visual perception concerns how humans perceive stable and uniform object colors despite variable lighting conditions (i.e. color constancy). One solution is to 'discount' variations in lighting across object surfaces by encoding color contrasts, and utilize this information to 'fill in' properties of the entire object surface. Implicit in this solution is the caveat that the color contrasts defining object boundaries must be distinguished from the spurious color fringes that occur naturally along luminance-defined edges in the retinal image (i.e. optical chromatic aberration). In the present paper, we propose that the neural machinery underlying color constancy is complemented by an 'error-correction' procedure which compensates for chromatic aberration, and suggest that error-correction may be linked functionally to the experimentally induced illusory colored aftereffects known as McCollough effects (MEs). To test these proposals, we develop a neural network model which incorporates many of the receptive-field (RF) profiles of neurons in primate color vision. The model is composed of two parallel processing streams which encode complementary sets of stimulus features: one stream encodes color contrasts to facilitate filling-in and color constancy; the other stream selectively encodes (spurious) color fringes at luminance boundaries, and learns to inhibit the filling-in of these colors within the first stream. Computer simulations of the model illustrate how complementary color-spatial interactions between error-correction and filling-in operations (a) facilitate color constancy, (b) reveal functional links between color constancy and the ME, and (c) reconcile previously reported anomalies in the local (edge) and global (spreading) properties of the ME. We discuss the broader implications of these findings by considering the complementary functional roles performed by RFs mediating color-spatial interactions in the primate visual system. (C) 2002 Elsevier Science Ltd. All rights reserved.
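As a rough illustration of the kind of color-contrast and luminance-edge signals the two streams are built around, the NumPy/SciPy sketch below computes a difference-of-Gaussians contrast on a red-green opponent channel and gates it by a luminance-edge map. It is a toy example under assumed filter scales and an arbitrary threshold, not the authors' network or RF profiles.

import numpy as np
from scipy.ndimage import gaussian_filter

def opponent_contrast(rgb, sigma_center=1.0, sigma_surround=3.0):
    """Toy color-contrast signal: difference-of-Gaussians on a red-green
    opponent channel (illustrative scales, not the paper's RF profiles)."""
    r, g = rgb[..., 0].astype(float), rgb[..., 1].astype(float)
    rg = r - g                                   # red-green opponent channel
    return gaussian_filter(rg, sigma_center) - gaussian_filter(rg, sigma_surround)

def luminance_edges(rgb, sigma=1.0):
    """Toy luminance-edge map: gradient magnitude of a blurred luminance image."""
    lum = rgb.astype(float).mean(axis=-1)
    gy, gx = np.gradient(gaussian_filter(lum, sigma))
    return np.hypot(gx, gy)

def gated_contrast(rgb, edge_threshold=5.0):
    """Suppress chromatic contrast that coincides with strong luminance edges,
    where spurious aberration fringes would arise (threshold is arbitrary)."""
    return np.where(luminance_edges(rgb) > edge_threshold, 0.0,
                    opponent_contrast(rgb))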
Abstract:
This paper investigates the hypotheses that the recently established Mexican stock index futures market effectively serves the price discovery function, and that the introduction of futures trading has provoked volatility in the underlying spot market. We test both hypotheses simultaneously with daily data from Mexico in the context of a modified EGARCH model that also incorporates possible cointegration between the futures and spot markets. The evidence supports both hypotheses, suggesting that the futures market in Mexico is a useful price discovery vehicle, although futures trading has also been a source of instability for the spot market. Several managerial implications are derived and discussed. (C) 2004 Elsevier B.V. All rights reserved.
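As background for the modelling strategy described above, a generic futures-spot specification of this type can be sketched under textbook assumptions (not necessarily the authors' exact model): an error-correction term from the cointegrating relation enters the mean equation, and the conditional variance follows an EGARCH(1,1) recursion.

\begin{align*}
\Delta s_t &= \mu + \gamma\,(s_{t-1} - \lambda f_{t-1}) + \sum_{i=1}^{p} \phi_i\,\Delta s_{t-i} + \sum_{j=1}^{q} \theta_j\,\Delta f_{t-j} + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \\
\ln \sigma_t^2 &= \omega + \beta \ln \sigma_{t-1}^2 + \alpha\bigl(|z_{t-1}| - \mathrm{E}|z_{t-1}|\bigr) + \delta\, z_{t-1}.
\end{align*}

Here $s_t$ and $f_t$ are log spot and futures prices, $(s_{t-1} - \lambda f_{t-1})$ is the error-correction term whose significance speaks to price discovery, and the variance equation (possibly augmented with a futures-trading dummy) captures the effect of futures trading on spot volatility.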
Abstract:
We describe a scheme for quantum error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn, Phys. Rev. A 65, 042301 (2001)] is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.
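For a concrete reference point for the bit-flip simulations mentioned above, the NumPy sketch below implements the conventional three-qubit bit-flip code with projective syndrome extraction and unitary correction, i.e., the standard approach that the weak-measurement-plus-feedback scheme is designed to replace; the error model and parameters are illustrative only.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def encode(alpha, beta):
    """Three-qubit bit-flip code: a|0> + b|1>  ->  a|000> + b|111>."""
    psi = np.zeros(8, dtype=complex)
    psi[0b000] = alpha
    psi[0b111] = beta
    return psi

def flip(psi, qubit):
    """Apply a bit flip (Pauli X) to one qubit of the three-qubit register."""
    return kron(*[X if k == qubit else I2 for k in range(3)]) @ psi

def apply_random_flips(psi, p):
    """Flip each qubit independently with probability p (illustrative error model)."""
    for q in range(3):
        if np.random.rand() < p:
            psi = flip(psi, q)
    return psi

def correct(psi):
    """Extract the Z1Z2 and Z2Z3 syndromes and apply the indicated X correction.
    Because the errors are discrete X flips, the state remains an eigenstate of
    the stabilizers, so their expectation values are deterministically +1 or -1."""
    s1, s2 = kron(Z, Z, I2), kron(I2, Z, Z)
    m1 = bool(np.real(np.vdot(psi, s1 @ psi)) > 0)
    m2 = bool(np.real(np.vdot(psi, s2 @ psi)) > 0)
    faulty = {(True, True): None, (False, True): 0,
              (False, False): 1, (True, False): 2}[(m1, m2)]
    return psi if faulty is None else flip(psi, faulty)

alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)   # arbitrary test state
ideal = encode(alpha, beta)
fidelities = [abs(np.vdot(ideal, correct(apply_random_flips(ideal, p=0.05)))) ** 2
              for _ in range(2000)]
print("mean fidelity after correction:", np.mean(fidelities))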
Abstract:
We demonstrate a quantum error correction scheme that protects against accidental measurement, using a parity encoding where the logical state of a single qubit is encoded into two physical qubits using a nondeterministic photonic controlled-NOT gate. For the single-qubit input states $|0\rangle$, $|1\rangle$, $|0\rangle \pm |1\rangle$, and $|0\rangle \pm i|1\rangle$, our encoder produces the appropriate two-qubit encoded state with an average fidelity of 0.88 ± 0.03, and the single-qubit decoded states have an average fidelity of 0.93 ± 0.05 with the original state. We are able to decode the two-qubit state (up to a bit flip) by performing a measurement on one of the qubits in the logical basis; we find that the 64 one-qubit decoded states arising from 16 real and imaginary single-qubit superposition inputs have an average fidelity of 0.96 ± 0.03.
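The encoding step can be written out explicitly. The identity below uses the generic convention (input qubit as control, ancilla prepared in $|0\rangle$) and is not necessarily the exact logical-basis convention of the experiment:

\[
\mathrm{CNOT}\,\bigl[(\alpha|0\rangle + \beta|1\rangle) \otimes |0\rangle\bigr] \;=\; \alpha|00\rangle + \beta|11\rangle .
\]

In a two-qubit code of this kind, measuring one physical qubit in the appropriate conjugate basis leaves the partner qubit in the input state up to a known single-qubit Pauli correction determined by the measurement outcome, which is the sense of the "up to a bit flip" decoding described above.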
Abstract:
We use theoretical and numerical methods to investigate the general pore-fluid flow patterns near geological lenses in hydrodynamic and hydrothermal systems respectively. Analytical solutions have been rigorously derived for the pore-fluid velocity, stream function and excess pore-fluid pressure near a circular lens in a hydrodynamic system. These analytical solutions provide not only a better understanding of the physics behind the problem, but also a valuable benchmark solution for validating any numerical method. Since a geological lens is surrounded by a medium of large extent in nature and the finite element method is efficient at modelling only media of finite size, the determination of the size of the computational domain of a finite element model, which is often overlooked by numerical analysts, is very important in order to ensure both the efficiency of the method and the accuracy of the numerical solution obtained. To highlight this issue, we use the derived analytical solutions to deduce a rigorous mathematical formula for designing the computational domain size of a finite element model. The proposed mathematical formula has indicated that, no matter how fine the mesh or how high the order of elements, the desired accuracy of a finite element solution for pore-fluid flow near a geological lens cannot be achieved unless the size of the finite element model is determined appropriately. Once the finite element computational model has been appropriately designed and validated in a hydrodynamic system, it is used to examine general pore-fluid flow patterns near geological lenses in hydrothermal systems. Some interesting conclusions on the behaviour of geological lenses in hydrodynamic and hydrothermal systems have been reached through the analytical and numerical analyses carried out in this paper.
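To illustrate why an analytical far-field solution helps in sizing the computational domain, a heuristic estimate (not the formula derived in the paper) follows from noting that the velocity perturbation produced by a circular lens of radius $a$ in a uniform background Darcy flow decays with distance $r$ roughly like a two-dimensional dipole field,

\[
\frac{|\Delta v(r)|}{v_\infty} \;\sim\; C \left(\frac{a}{r}\right)^{2},
\]

so requiring the perturbation at the outer boundary $r = R$ of the finite element model to fall below a prescribed relative tolerance $\varepsilon$ gives a minimum domain size of order

\[
R \;\gtrsim\; a \sqrt{C/\varepsilon},
\]

where $C$ depends on the permeability contrast between the lens and the surrounding medium. The paper's derived formula serves this role rigorously; the estimate above only conveys why the domain size, and not just the mesh density, controls the attainable accuracy.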
Abstract:
We discuss quantum error correction for errors that occur at random times as described by a conditional Poisson process. We show how a class of such errors, detected spontaneous emission, can be corrected by continuous closed-loop feedback.
Abstract:
We propose two quantum error-correction schemes which increase the maximum storage time for qubits in a system of cold-trapped ions, using a minimal number of ancillary qubits. Both schemes consider only the errors introduced by the decoherence due to spontaneous emission from the upper levels of the ions. Continuous monitoring of the ion fluorescence is used in conjunction with selective coherent feedback to eliminate these errors immediately following spontaneous emission events.
Abstract:
We examine constraints on quantum operations imposed by relativistic causality. A bipartite superoperator is said to be localizable if it can be implemented by two parties (Alice and Bob) who share entanglement but do not communicate; it is causal if the superoperator does not convey information from Alice to Bob or from Bob to Alice. We characterize the general structure of causal complete-measurement superoperators, and exhibit examples that are causal but not localizable. We construct another class of causal bipartite superoperators that are not localizable by invoking bounds on the strength of correlations among the parts of a quantum system. A bipartite superoperator is said to be semilocalizable if it can be implemented with one-way quantum communication from Alice to Bob, and it is semicausal if it conveys no information from Bob to Alice. We show that all semicausal complete-measurement superoperators are semilocalizable, and we establish a general criterion for semicausality. In the multipartite case, we observe that a measurement superoperator that projects onto the eigenspaces of a stabilizer code is localizable.
Abstract:
Wootters [Phys. Rev. Lett. 80, 2245 (1998)] has given an explicit formula for the entanglement of formation of two qubits in terms of what he calls the concurrence of the joint density operator. Wootters's concurrence is defined with the help of the superoperator that flips the spin of a qubit. We generalize the spin-flip superoperator to a universal inverter, which acts on quantum systems of arbitrary dimension, and we introduce the corresponding generalized concurrence for joint pure states of $D_1 \times D_2$ bipartite quantum systems. We call this generalized concurrence the I-concurrence to emphasize its relation to the universal inverter. The universal inverter, which is a positive, but not completely positive, superoperator, is closely related to the completely positive universal-NOT superoperator, the quantum analogue of a classical NOT gate. We present a physical realization of the universal-NOT superoperator.
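For reference, the quantities discussed above have compact closed forms; the expressions below are the standard ones from the literature, stated in a common normalization that may differ from the authors' conventions. Wootters's two-qubit concurrence is

\[
C(\rho) \;=\; \max\{0,\ \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4\},
\qquad
\tilde{\rho} \;=\; (\sigma_y \otimes \sigma_y)\,\rho^{*}\,(\sigma_y \otimes \sigma_y),
\]

where the $\lambda_i$ are the square roots of the eigenvalues of $\rho\tilde{\rho}$ in decreasing order, and, for a joint pure state $|\psi\rangle$ of a $D_1 \times D_2$ system with reduced density operator $\rho_A$, the pure-state I-concurrence is

\[
C_I(\psi) \;=\; \sqrt{2\,\bigl(1 - \operatorname{Tr}\rho_A^{2}\bigr)},
\]

which reduces to Wootters's concurrence for two-qubit pure states.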
Abstract:
How useful is a quantum dynamical operation for quantum information processing? Motivated by this question, we investigate several strength measures quantifying the resources intrinsic to a quantum operation. We develop a general theory of such strength measures, based on axiomatic considerations independent of state-based resources. The power of this theory is demonstrated with applications to quantum communication complexity, quantum computational complexity, and entanglement generation by unitary operations.
Abstract:
What quantum states are possible energy eigenstates of a many-body Hamiltonian? Suppose the Hamiltonian is nontrivial, i.e., not a multiple of the identity, and $L$-local, in the sense of containing interaction terms involving at most $L$ bodies, for some fixed $L$. We construct quantum states $\psi$ which are far away from all the eigenstates $E$ of any nontrivial $L$-local Hamiltonian, in the sense that $\|\psi - E\|$ is greater than some constant lower bound, independent of the form of the Hamiltonian.
Abstract:
A specialised reconfigurable architecture is targeted at wireless base-band processing. It is built to cater for multiple wireless standards, has lower power consumption than processor-based solutions, and can be scaled to run in parallel for processing multiple channels. Test resources are embedded in the architecture and testing strategies are included. The architecture is functionally partitioned according to the common operations found in wireless standards, such as CRC error correction, convolution and interleaving. These modules are linked via Virtual Wire Hardware modules and route-through switch matrices, and data can be processed in any order through this interconnect structure. Virtual Wire ensures the same flexibility as normal interconnects, but reduces the area occupied and the number of switches needed. The testing algorithm scans all possible paths within the interconnection network exhaustively and searches for faults in the processing modules. It starts by scanning the externally addressable memory space and testing the master controller. The controller then tests every switch in the route-through switch matrix by making loops from the shared memory to each of the switches; the local switch matrix is tested in the same way. Next, the local memory is scanned. Finally, pre-defined test vectors are loaded into local memory to check the processing modules. This paper compares various base-band processing solutions, describes the proposed platform and its implementation, outlines the test resources and algorithm, and concludes with the mapping of Bluetooth and GSM base-band onto the platform.
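The test procedure described above can be summarized as an ordered sequence of stages. The Python sketch below is only a structural outline with entirely hypothetical names and stubbed checks, not an implementation of the actual platform or its test vectors.

from dataclasses import dataclass
from typing import List

@dataclass
class Platform:
    # All fields are hypothetical stand-ins for the platform's resources.
    external_memory: List[int]
    shared_memory: List[int]
    local_memory: List[int]
    route_through_switches: List[str]
    local_switches: List[str]
    processing_modules: List[str]   # e.g. ["CRC", "convolution", "interleaving"]

def scan_memory(mem: List[int]) -> bool:
    # Walk the address space with a simple write/read-back pattern (stub).
    for addr in range(len(mem)):
        mem[addr] = addr & 0xFF
    return all(mem[addr] == (addr & 0xFF) for addr in range(len(mem)))

def test_master_controller(p: Platform) -> bool:
    # Exercise the master controller via the externally addressable space (stub).
    return True

def test_loopback(shared: List[int], switch: str) -> bool:
    # Form a loop from shared memory through the switch and back (stub).
    return bool(shared) and bool(switch)

def check_module(module: str) -> bool:
    # Load pre-defined test vectors and compare against expected outputs (stub).
    return bool(module)

def run_self_test(p: Platform) -> bool:
    ok = scan_memory(p.external_memory)              # 1. scan external memory space
    ok = ok and test_master_controller(p)            # 2. test master controller
    for sw in p.route_through_switches:              # 3. route-through switch matrix
        ok = ok and test_loopback(p.shared_memory, sw)
    for sw in p.local_switches:                      # 4. local switch matrix
        ok = ok and test_loopback(p.shared_memory, sw)
    ok = ok and scan_memory(p.local_memory)          # 5. scan local memory
    for m in p.processing_modules:                   # 6. check processing modules
        ok = ok and check_module(m)
    return ok

print(run_self_test(Platform([0] * 16, [0] * 16, [0] * 16,
                             ["rt0", "rt1"], ["l0"],
                             ["CRC", "convolution", "interleaving"])))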
Abstract:
The Euro has been used as the largest weighting element in a basket of currencies for forex arrangements adopted by several Central European countries outside the European Union (EU). The paper uses a new time-series approach to examine the relationship between the Euro exchange rate and the level of foreign reserves. It employs zero-no-zero (ZNZ) patterned vector error-correction models (VECMs) to investigate Granger causal relations among foreign reserves, the European Monetary Union money supply and the Euro exchange rate. The findings confirm that foreign reserves may influence movements in the Euro's exchange rate. Further, a ZNZ patterned VECM with exogenous variables is used to estimate the amount of foreign reserves currently required to once again achieve a targeted Euro exchange rate.
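The vector error-correction framework referred to above has a standard form; the sketch below is the textbook VECM with exogenous regressors, not the authors' exact specification, and the zero-no-zero pattern is described only in words:

\[
\Delta \mathbf{y}_t \;=\; \boldsymbol{\alpha}\boldsymbol{\beta}'\,\mathbf{y}_{t-1} \;+\; \sum_{i=1}^{p-1} \boldsymbol{\Gamma}_i\,\Delta \mathbf{y}_{t-i} \;+\; \boldsymbol{\Phi}\,\mathbf{x}_t \;+\; \boldsymbol{\varepsilon}_t,
\]

where $\mathbf{y}_t$ collects the foreign reserves, the European Monetary Union money supply and the Euro exchange rate, $\boldsymbol{\beta}'\mathbf{y}_{t-1}$ are the cointegrating (error-correction) terms, $\boldsymbol{\alpha}$ contains the adjustment speeds, and $\mathbf{x}_t$ holds any exogenous variables. A zero-no-zero (ZNZ) pattern imposes exact zero restrictions on individual coefficients, so that Granger causal links can be read off directly from which entries of the fitted matrices are non-zero.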
Abstract:
Understanding the contribution of marketing to economic and social outcomes is fundamental to broadening the focus of marketing. The authors develop a comprehensive model that integrates the impact of service quality and service satisfaction on both economic and societal outcomes. The model is validated using two random samples involving intensive health services. The results indicate that service quality and service satisfaction significantly enhance quality of life and behavioral intentions, highlighting that customer service has social as well as economic outcomes. This is an important finding given the movement toward recognizing social and environmental outcomes, as emphasized in triple bottom-line reporting. The findings have important implications for managing service processes, for improving the quality of life of customers, and for enhancing customers' behavioral intentions toward the organization.
Abstract:
We show how to communicate Heisenberg-limited continuous (quantum) variables between Alice and Bob in the case where they occupy two inertial reference frames that differ by an unknown Lorentz boost. There are two effects that need to be overcome: the Doppler shift and the absence of synchronized clocks. Furthermore, we show how Alice and Bob can share Doppler-invariant entanglement, and we demonstrate that the protocol is robust under photon loss.