835 results for Error correction codes
Abstract:
A specialised reconfigurable architecture targeted at wireless base-band processing is presented. It is built to cater for multiple wireless standards, consumes less power than processor-based solutions, and can be scaled to run in parallel for processing multiple channels. Test resources and testing strategies are embedded in the architecture. The architecture is functionally partitioned according to operations common to wireless standards, such as CRC error correction, convolution and interleaving. These modules are linked via Virtual Wire hardware modules and route-through switch matrices, and data can be processed in any order through this interconnect structure. Virtual Wire offers the same flexibility as conventional interconnects while reducing both the area occupied and the number of switches required. The testing algorithm exhaustively scans all possible paths within the interconnection network and searches for faults in the processing modules. It starts by scanning the externally addressable memory space and testing the master controller. The controller then tests every switch in the route-through switch matrix by forming loops from the shared memory to each switch; the local switch matrix is tested in the same way. Next, the local memory is scanned. Finally, pre-defined test vectors are loaded into local memory to check the processing modules. The paper compares various base-band processing solutions, describes the proposed platform and its implementation, outlines the test resources and algorithm, and concludes with the mapping of Bluetooth and GSM base-band processing onto the platform.
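The hierarchical test order described above lends itself to a short sketch. In the Python below, the `Platform` class and all its method names are hypothetical stand-ins for the paper's embedded test resources, not a real API; only the ordering of the steps follows the abstract.

```python
# A minimal, hypothetical sketch of the self-test order described above.
class Platform:
    """Fault-free mock; a real implementation would drive the hardware."""
    def scan_shared_memory(self):      return True
    def test_master_controller(self):  return True
    def route_through_switches(self):  return ["S0", "S1", "S2"]
    def local_switches(self):          return ["L0", "L1"]
    def loopback_test(self, switch):   return True   # loop: shared memory -> switch
    def scan_local_memory(self):       return True
    def module_test_vectors(self):     return {"CRC": [0xA5], "interleaver": [0x5A]}
    def run_vectors(self, module, vectors): return True

def run_self_test(p):
    faults = []
    # 1. Scan the externally addressable memory space; test the master controller.
    if not p.scan_shared_memory():     faults.append("shared memory")
    if not p.test_master_controller(): faults.append("master controller")
    # 2. Loop from shared memory to every route-through switch.
    for s in p.route_through_switches():
        if not p.loopback_test(s):     faults.append(f"route-through switch {s}")
    # 3. Test the local switch matrix the same way, then scan local memory.
    for s in p.local_switches():
        if not p.loopback_test(s):     faults.append(f"local switch {s}")
    if not p.scan_local_memory():      faults.append("local memory")
    # 4. Load pre-defined test vectors and exercise each processing module.
    for mod, vec in p.module_test_vectors().items():
        if not p.run_vectors(mod, vec): faults.append(f"module {mod}")
    return faults

print(run_self_test(Platform()))   # [] when the fabric is fault-free
```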
Abstract:
Category-management models assist in developing pricing and promotion plans for individual brands. Techniques for solving these models can suffer in accuracy and interpretability because they are susceptible to spurious-regression problems arising from nonstationary time-series data. Improperly specified nonstationary systems can reduce forecast accuracy and undermine the interpretation of results, which is problematic because recent studies indicate that sales series are often nonstationary. Newly developed techniques account for nonstationarity by incorporating error-correction terms into the model, as in the Bayesian Vector Error-Correction Model used here. The benefit of such a technique is that shocks to control variates can be separated into permanent and temporary effects, and cointegrated series can be analysed jointly. Analysis of a brand data set indicates that this matters even at the brand level. Additional information is thus generated that allows a decision maker to examine whether controllable variables influence sales over a short or long duration. Only products that are nonstationary in sales volume can be manipulated for long-term profit gain, and promotions must be cointegrated with brand sales volume. The brand data set is used to explore the capabilities and interpretation of cointegration.
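The error-correction structure (though not the Bayesian estimation itself) can be sketched with statsmodels, whose VECM is classical. The sketch below is a rough stand-in: the sales/promotion series are synthetic, and a Minnesota-style Bayesian prior, as the paper's method would need, is omitted.

```python
# Sketch: Johansen cointegration test plus a classical VECM fit on synthetic
# brand data. Not the paper's Bayesian VECM; illustration of structure only.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

rng = np.random.default_rng(0)
n = 200
trend = np.cumsum(rng.normal(size=n))                 # shared stochastic trend
sales = trend + rng.normal(scale=0.5, size=n)         # nonstationary sales volume
promo = 0.8 * trend + rng.normal(scale=0.5, size=n)   # cointegrated promotions
data = pd.DataFrame({"sales": sales, "promo": promo})

# Johansen test: how many cointegrating relations link the series?
jo = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace stats:", jo.lr1, "5% critical values:", jo.cvt[:, 1])

# Fit the VECM; the loadings (alpha) govern how fast deviations from the
# long-run relation decay, separating temporary from permanent effects.
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(res.alpha)
```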
Abstract:
This paper investigates the hypotheses that the recently established Mexican stock index futures market effectively serves the price discovery function, and that the introduction of futures trading has provoked volatility in the underlying spot market. We test both hypotheses simultaneously with daily data from Mexico in the context of a modified EGARCH model that also incorporates possible cointegration between the futures and spot markets. The evidence supports both hypotheses, suggesting that the futures market in Mexico is a useful price discovery vehicle, although futures trading has also been a source of instability for the spot market. Several managerial implications are derived and discussed.
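A simplified stand-in for the paper's modified EGARCH can be sketched with the `arch` package: an EGARCH(1,1) on spot returns with the lagged futures-spot basis (the error-correction term implied by cointegration) entering the mean equation. The data are synthetic and the paper's actual specification is richer; this only shows the shape of such a model.

```python
# Sketch: EGARCH(1,1) with a lagged error-correction term as mean regressor.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
n = 1000
spot = np.cumsum(rng.normal(size=n))              # spot index (random walk)
futures = spot + rng.normal(scale=0.3, size=n)    # cointegrated futures price
basis = (futures - spot)[:-1]                     # lagged error-correction term
ret = np.diff(spot)                               # spot returns

am = arch_model(ret, x=basis.reshape(-1, 1), mean="LS",
                vol="EGARCH", p=1, o=1, q=1)
res = am.fit(disp="off")
print(res.summary())   # basis coefficient: price discovery from futures
```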
Abstract:
The Euro has been used as the largest weighting element in a basket of currencies for forex arrangements adopted by several Central European countries outside the European Union (EU). The paper uses a new time-series approach to examine the relationship between the Euro exchange rate and the level of foreign reserves. It employs Zero-no-zero (ZNZ) patterned vector error-correction (VECM) modelling to investigate Granger causal relations among foreign reserves, the European Monetary Union money supply and the Euro exchange rate. The findings confirm that foreign reserves may influence movements in the Euro's exchange rate. Further, ZNZ patterned VECM modelling with exogenous variables is used to estimate the amount of foreign reserves currently required to achieve a targeted Euro exchange rate.
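ZNZ-patterned VECMs are not available in standard libraries, so as a loose stand-in for the causal question asked (do reserves Granger-cause the Euro rate?) the sketch below runs a plain pairwise Granger-causality test on synthetic differenced series with statsmodels.

```python
# Sketch: pairwise Granger-causality test, a simplified proxy for the
# ZNZ-patterned VECM causal analysis. Data are synthetic.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 300
reserves = np.cumsum(rng.normal(size=n))      # foreign reserves, I(1)
dres = np.diff(reserves)                      # reserve changes
euro = np.zeros(n)
for t in range(2, n):                         # Euro rate reacts to lagged reserves
    euro[t] = 0.4 * euro[t - 1] + 0.3 * dres[t - 1] + 0.1 * rng.normal()

data = np.column_stack([np.diff(euro), dres])
# Null hypothesis: column 2 (reserves) does not Granger-cause column 1 (Euro).
grangercausalitytests(data, maxlag=2)
```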
Abstract:
In this paper we investigate the effect of dephasing on proposed quantum gates for the solid-state Kane quantum computing architecture. Using a simple model of the decoherence, we find that the typical error in a controlled-NOT gate is 8.3×10⁻⁵. We also compute the fidelities of Z, X, swap, and controlled-Z operations under a variety of dephasing rates. We show that these numerical results are comparable with the error threshold required for fault-tolerant quantum computation.
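The flavour of such a calculation fits in a few lines of numpy: apply a phase-flip (dephasing) channel around an ideal CNOT and measure the fidelity loss. This toy channel is not the paper's Kane-architecture decoherence model, and the 8.3×10⁻⁵ figure comes from that model, not from this sketch.

```python
# Sketch: fidelity of a CNOT preceded by single-qubit dephasing at rate p.
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def dephase(rho, p):
    """Phase-flip each qubit with probability p (a crude dephasing channel)."""
    for op in (np.kron(Z, I2), np.kron(I2, Z)):
        rho = (1 - p) * rho + p * (op @ rho @ op.conj().T)
    return rho

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi = np.kron(plus, np.array([1, 0], dtype=complex))    # input |+>|0>
ideal = CNOT @ psi                                      # ideal Bell-state output

rho = np.outer(psi, psi.conj())
noisy = CNOT @ dephase(rho, 1e-4) @ CNOT.conj().T       # dephase, then gate
error = 1 - np.real(ideal.conj() @ noisy @ ideal)
print(f"CNOT error at p=1e-4: {error:.2e}")             # ~1e-4 in this toy model
```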
Abstract:
We discuss the long-distance transmission of qubits encoded in optical coherent states. Through absorption, these qubits suffer from two main types of errors, namely the reduction of the amplitude of the coherent states and accidental application of the Pauli Z operator. We show how these errors can be fixed using techniques of teleportation and error-correcting codes.
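The "accidental Pauli Z" component is exactly the kind of error a standard phase-flip code handles. The numpy sketch below shows the three-qubit phase-flip code on an ordinary qubit; it is one ingredient of such schemes, not the paper's full coherent-state construction (which also treats amplitude reduction via teleportation).

```python
# Sketch: three-qubit phase-flip code correcting a single Z error.
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def kron(*ops):
    return reduce(np.kron, ops)

# Encode a|0> + b|1> as a|+++> + b|--->, which tolerates one phase flip.
a, b = 0.6, 0.8
ket000 = np.zeros(8); ket000[0] = 1
ket111 = np.zeros(8); ket111[7] = 1
H3 = kron(H, H, H)
logical = H3 @ (a * ket000 + b * ket111)     # a|+++> + b|--->

corrupted = kron(Z, I2, I2) @ logical        # accidental Z on qubit 1

# Decode: Hadamards turn phase flips into bit flips; majority vote locates the
# flip (here we simply know qubit 1 was hit; a full decoder measures stabilizers).
bits = H3 @ corrupted                        # a|100> + b|011>
fixed = kron(X, I2, I2) @ bits               # flip qubit 1 back
print(np.round(fixed[[0, 7]], 3))            # [0.6 0.8] -> logical state recovered
```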
Abstract:
We show how to communicate Heisenberg-limited continuous (quantum) variables between Alice and Bob in the case where they occupy two inertial reference frames that differ by an unknown Lorentz boost. There are two effects that need to be overcome: the Doppler shift and the absence of synchronized clocks. Furthermore, we show how Alice and Bob can share Doppler-invariant entanglement, and we demonstrate that the protocol is robust under photon loss.
Abstract:
We show that the classification of bipartite pure entangled states when local quantum operations are restricted yields a structure that is analogous in many respects to that of mixed-state entanglement. Specifically, we develop this analogy by restricting operations through local superselection rules, and show that such exotic phenomena as bound entanglement and activation arise using pure states in this setting. This analogy aids in resolving several conceptual puzzles in the study of entanglement under restricted operations. In particular, we demonstrate that several types of quantum optical states that possess confusing entanglement properties are analogous to bound entangled states. Also, the classification of pure-state entanglement under restricted operations can be much simpler than for mixed-state entanglement. For instance, in the case of local Abelian superselection rules all questions concerning distillability can be resolved.
Abstract:
This paper reinvestigates the energy consumption-GDP growth nexus in a panel error correction model using data on 20 net energy importers and exporters from 1971 to 2002. Among the energy exporters, there was bidirectional causality between economic growth and energy consumption in the developed countries in both the short and long run, while in the developing countries energy consumption stimulates growth only in the short run. The former result is also found for energy importers, and the latter result exists only for the developed countries within this category. In addition, the developed countries' growth response to an increase in energy consumption is more elastic than that of the developing countries, although their income elasticity is lower and less than unity. Lastly, the implications for energy policy, calling for a more holistic approach, are discussed.
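Panel ECM estimation needs specialised tooling, but the error-correction idea can be sketched for a single country with the Engle-Granger two-step procedure: the lagged residual from a levels regression enters the short-run equation as the error-correction term. The data below are synthetic stand-ins for one country's energy and GDP series.

```python
# Sketch: Engle-Granger two-step error-correction model for one country.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 120
energy = np.cumsum(rng.normal(size=n))               # I(1) energy consumption
gdp = 0.7 * energy + rng.normal(scale=0.5, size=n)   # cointegrated with energy

# Step 1: levels regression; the residual measures deviation from equilibrium.
ect = sm.OLS(gdp, sm.add_constant(energy)).fit().resid

# Step 2: short-run dynamics with the lagged error-correction term.
dy, dx = np.diff(gdp), np.diff(energy)
X = sm.add_constant(np.column_stack([dx, ect[:-1]]))
print(sm.OLS(dy, X).fit().params)   # negative ECT coefficient => adjustment
```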
Abstract:
We determine the critical noise level for decoding low-density parity-check error-correcting codes based on the magnetization enumerator, rather than on the weight enumerator employed in the information-theory literature. The interpretation of our method is appealingly simple, and the relation between different decoding schemes, such as typical-pairs decoding, MAP, and finite-temperature (MPM) decoding, becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical-physics approach.
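Independently of the statistical-physics analysis, the decoding task itself is easy to sketch: Gallager's hard-decision bit-flipping on a small parity-check matrix. The matrix below is a toy code, not an optimised LDPC ensemble.

```python
# Sketch: hard-decision bit-flipping decoding on a toy parity-check matrix.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])        # 3 parity checks on 6 bits

def bit_flip_decode(H, word, max_iter=20):
    word = word.copy()
    for _ in range(max_iter):
        syndrome = H @ word % 2
        if not syndrome.any():
            return word                   # all parity checks satisfied
        # Flip the bit participating in the most unsatisfied checks.
        counts = H.T @ syndrome
        word[np.argmax(counts)] ^= 1
    return word

codeword = np.zeros(6, dtype=int)         # the all-zero word is a codeword
received = codeword.copy(); received[2] ^= 1    # single bit error
print(bit_flip_decode(H, received))       # recovers the all-zero codeword
```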
Abstract:
This study focuses on: (i) the responsiveness of the U.S. financial sector stock indices to foreign exchange (FX) and interest rate changes; and (ii) the extent to which good model specification can enhance the forecasts from the associated models. Three models are considered. Only the error-correction model (ECM) generated efficient and consistent coefficient estimates. Furthermore, a simple zero-lag model in differences, although clearly mis-specified, generated forecasts that are better than those of the ECM, even though the ECM depicts relationships more consistent with economic theory. In brief, FX and interest rate changes do not impact the return-generating process of the stock indices in any substantial way. Most of the variation in the sector stock indices is associated with past variation in the indices themselves and with variation in the market-wide stock index. These results have important implications for financial and economic policies.
Abstract:
This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven-variable system comprising, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non-stationary, stationary and error-correction forms and are estimated using OLS. Imposing Bayesian priors in the OLS estimations also allowed us to obtain another set of results. We find some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non-stationary specification outperformed those of the stationary and error-correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error-correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. The predictive ability of the VARs is very weak.
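The core forecast comparison can be sketched with statsmodels: fit a VAR in levels (the non-stationary form) and in differences (the stationary form) on synthetic random-walk FX rates and compare multi-step forecast errors. Bayesian (e.g. Minnesota-prior) estimation would need extra machinery and is omitted; a three-rate system stands in for the paper's seven.

```python
# Sketch: levels VAR vs differenced VAR multi-step forecast comparison.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
n = 500
rates = np.cumsum(rng.normal(size=(n, 3)), axis=0)    # random-walk FX rates

train, test = rates[:450], rates[450:]

# Non-stationary form: VAR on levels.
levels_fc = VAR(train).fit(maxlags=2).forecast(train[-2:], steps=50)

# Stationary form: VAR on first differences, integrated back to levels.
diffs = np.diff(train, axis=0)
d_fc = VAR(diffs).fit(maxlags=2).forecast(diffs[-2:], steps=50)
stat_fc = train[-1] + np.cumsum(d_fc, axis=0)

for name, fc in [("levels VAR", levels_fc), ("differenced VAR", stat_fc)]:
    print(name, "RMSE:", np.sqrt(np.mean((fc - test) ** 2)))
```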
Abstract:
Typical properties of sparse random matrices over finite (Galois) fields are studied, in the limit of large matrices, using techniques from the physics of disordered systems. For the case of a finite field GF(q) with prime order q, we present results for the average kernel dimension, average dimension of the eigenvector spaces and the distribution of the eigenvalues. The number of matrices for a given distribution of entries is also calculated for the general case. The significance of these results to error-correcting codes and random graphs is also discussed.
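The average kernel dimension is straightforward to probe numerically for q = 2: Gaussian elimination over GF(2) gives the rank, hence the nullity, of sparse random binary matrices. The sketch below is a numerical illustration only; the paper treats general prime q analytically.

```python
# Sketch: average kernel (null space) dimension of sparse random GF(2) matrices.
import numpy as np

def rank_gf2(M):
    M = M.astype(np.uint8)
    rank = 0
    for c in range(M.shape[1]):
        pivots = np.nonzero(M[rank:, c])[0]
        if pivots.size == 0:
            continue
        piv = rank + pivots[0]
        M[[rank, piv]] = M[[piv, rank]]   # move pivot row into place
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]           # clear column c elsewhere (mod 2)
        rank += 1
        if rank == M.shape[0]:
            break
    return rank

rng = np.random.default_rng(5)
n, density = 100, 0.03                    # sparse 100x100 matrices
dims = [n - rank_gf2(rng.random((n, n)) < density) for _ in range(50)]
print("average kernel dimension:", np.mean(dims))
```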
Abstract:
There is a growing demand for data transmission over digital networks involving mobile terminals. An important class of data for transmission to mobile terminals is image information such as street maps, floor plans and identikit images. Such transmission is of particular interest to services such as the police, fire brigade and medical services. These services cannot be provided directly over mobile terminals because of the limited capacity of mobile channels and the transmission errors caused by multipath (Rayleigh) fading. In this research, the transmission of line-diagram images, such as floor plans and street maps, over digital networks involving mobile terminals at rates of 2400 bit/s and 4800 bit/s has been studied. A low bit-rate source-encoding technique using geometric codes is found to be suitable for representing line-diagram images. In geometric encoding, the amount of data required to represent or store a line-diagram image is proportional to the image detail, so a simple image requires only a small amount of data. To study the effect of transmission errors due to mobile channels on the transmitted images, error sources (error files) representing mobile channels under different conditions have been produced using channel-modelling techniques. Satisfactory models of the mobile channel have been obtained when compared with field-test measurements. Subjective performance tests have been carried out to evaluate the quality and usefulness of the received line-diagram images under various mobile channel conditions, and the effect of mobile transmission errors on image quality has been determined. To improve the quality of the received images, forward error-correcting (FEC) codes with interleaving and automatic repeat request (ARQ) schemes have been proposed, and their performance has been evaluated under various mobile channel conditions. It has been shown that an FEC code with interleaving can be used effectively to improve the quality of the received images under both normal and severe mobile channel conditions. Under normal channel conditions, similar results have been obtained with ARQ schemes; under severe conditions, however, the FEC code with interleaving performs better.
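Why interleaving rescues a simple FEC code on bursty (Rayleigh-faded) channels can be shown in a few lines: a rate-1/3 repetition code corrects one error per three-bit group, so a block interleaver that spreads a burst across groups restores majority decoding. The repetition code and the parameters below are illustrative, not those used in the thesis.

```python
# Sketch: block interleaving spreads a burst so a repetition-3 code corrects it.
import numpy as np

rng = np.random.default_rng(6)
bits = rng.integers(0, 2, 60)
coded = np.repeat(bits, 3)                    # repetition-3 FEC, 180 coded bits

depth = 18                                    # interleaver depth
block = coded.reshape(-1, depth)              # write row-wise...
interleaved = block.T.flatten()               # ...read column-wise

burst = np.zeros_like(interleaved)
burst[40:46] = 1                              # 6-bit error burst on the channel
received = interleaved ^ burst

# De-interleave: the burst is now spread 18 positions apart, at most one
# error per 3-bit group. (Without interleaving, the same burst would corrupt
# two whole groups beyond repair.)
deinterleaved = received.reshape(depth, -1).T.flatten()
decoded = (deinterleaved.reshape(-1, 3).sum(axis=1) >= 2).astype(int)
print("bit errors after decoding:", np.sum(decoded != bits))   # 0
```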
Abstract:
Partial information leakage in deterministic public-key cryptosystems refers to a problem that arises when information about either the plaintext or the key is leaked in subtle ways. A common case is where only a small number of possible messages may be sent: an attacker may crack the scheme simply by enumerating the ciphertexts of all possible messages. Two methods are proposed for addressing the partial-information-leakage problem in RSA; both incorporate a random element into the encrypted message to increase the number of possible ciphertexts. The resulting scheme is, effectively, an RSA-like cryptosystem which exhibits probabilistic encryption. The first method involves encrypting several similar messages with RSA and then using the Quadratic Residuosity Problem (QRP) to mark the intended one. In this way, an adversary who has correctly guessed two or more of the ciphertexts is still in doubt about which message is the intended one. The cryptographic strength of the combined system equals the computational difficulty of factorising a large integer, which should ideally be infeasible. The second scheme uses error-correcting codes to accommodate the random component. The plaintext is processed with an error-correcting code and deliberately corrupted before encryption. The introduced corruption lies within the error-correcting capability of the code, so the original message can still be recovered. The random corruption offers a vast number of possible ciphertexts for a given plaintext, so an attacker cannot deduce any useful information from it. The proposed systems are compared with other cryptosystems sharing similar characteristics, in terms of execution time and ciphertext size, to determine their practical utility. Finally, the parameters that determine the characteristics of the proposed schemes are examined.
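The second scheme can be caricatured with a repetition code standing in for a real error-correcting code and textbook RSA with toy parameters. Nothing below is secure; it only shows how random corruption inside the code's correction radius makes encryption probabilistic while decryption still recovers the plaintext.

```python
# Sketch: ECC-randomised RSA. Repetition-3 code + textbook RSA, toy key sizes.
import random

p, q, e = 1009, 1013, 5                    # tiny primes, illustration only
n_mod, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                        # private exponent

def encode(msg_bits):                      # repetition-3: corrects 1 flip/group
    return [b for bit in msg_bits for b in (bit, bit, bit)]

def corrupt(code_bits):                    # one random flip per 3-bit group
    out = code_bits[:]
    for g in range(0, len(out), 3):
        out[g + random.randrange(3)] ^= 1
    return out

def decode(code_bits):                     # majority vote per group
    return [int(sum(code_bits[g:g + 3]) >= 2)
            for g in range(0, len(code_bits), 3)]

msg = [1, 0, 1, 1]                         # 4-bit plaintext
noisy = corrupt(encode(msg))               # random element within ECC radius
m_int = int("".join(map(str, noisy)), 2)
cipher = pow(m_int, e, n_mod)              # same msg -> many possible ciphertexts
plain_bits = bin(pow(cipher, d, n_mod))[2:].zfill(12)
print(decode(list(map(int, plain_bits))) == msg)   # True
```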