865 results for DNA Error Correction
Abstract:
Historical analyses of the inflation-hedging properties of stocks have produced anomalous results, with equities often appearing to offer a perverse hedge against inflation. This has been attributed to the impact of real and monetary shocks to the economy, which influence both inflation and asset returns. It has been argued that real estate should provide a better hedge; however, empirical results have been mixed. This paper explores the relationship between commercial real estate returns (from both private and public markets), economic, fiscal and monetary factors, and inflation for the US and UK markets. Comparative analysis of general equity and small-capitalisation stock returns in both markets is carried out. Inflation is subdivided into expected and unexpected components using different estimation techniques. The analyses are undertaken using long-run error correction techniques. In the long run, once real and monetary variables are included, asset returns are positively linked to anticipated inflation but not to inflation shocks. Adjustment processes are, however, gradual rather than within-period. Real estate returns, particularly direct market returns, exhibit characteristics that differ from equities.
Abstract:
A prediction mechanism is necessary in human visual motion control to compensate for delays in the sensory-motor system. A previous study discussed "proactive control" as one example of a human predictive function, in which the motion of the hands preceded a virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main findings were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain, obtained from environmental stimuli, was shortened by more than 10%. This shortening of the rhythm's period accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.
Abstract:
Proactive motion in hand tracking and finger bending, in which the body motion occurs prior to the reference signal, was reported by earlier researchers when target signals were shown to subjects at relatively high speeds or frequencies. These phenomena indicate that the human sensory-motor system tends to choose an anticipatory mode rather than a reactive mode when the target motion is relatively fast. The present research was undertaken to study which mode appears in the sensory-motor system when two persons are asked to track each other's hand position at various mean tracking frequencies. The experimental results showed that a transition from a mutual error-correction mode to a synchronization mode occurred in the same region of tracking frequency as the transition from a reactive error-correction mode to a proactive anticipatory mode observed in mechanical target-tracking experiments. The research indicated that synchronization of body motion occurred only when both subjects of the pair operated in a proactive anticipatory mode. We also present mathematical models to explain the behavior of the error-correction mode and the synchronization mode.
Abstract:
This paper considers supply dynamics in the context of the Irish residential market. The analysis, in a multiple error-correction framework, reveals that although developers did respond to disequilibrium in supply, the rate of adjustment was relatively slow. In contrast, however, disequilibrium in demand did not impact upon supply, suggesting that inelastic supply conditions could explain the prolonged nature of the boom in the Irish market. Increased elasticity in the later stages of the boom may have been a contributory factor in the extent of the house price falls observed in recent years.
Abstract:
The real exchange rate is an important macroeconomic price in the economy and affects economic activity, interest rates, domestic prices, and trade and investment flows, among other variables. Methodologies have been developed in empirical exchange-rate misalignment studies to evaluate whether a real effective exchange rate is overvalued or undervalued. There is a vast body of literature on the determinants of long-term real exchange rates and on empirical strategies to implement the equilibrium norms obtained from theoretical models. This study seeks to contribute to this literature by showing that it is possible to calculate the misalignment from a mixed-frequency cointegrated vector error correction framework. An empirical exercise using United States real exchange rate data is performed. The results suggest that the model with mixed-frequency data is preferred to the models with same-frequency variables.
Abstract:
Currently, there is an increasing demand for reliable and trustworthy digital data transmission and storage systems. This demand has been augmented by the appearance of large-scale, high-speed data networks for the exchange, processing and storage of digital information in different spheres. In this paper, we explore a way to achieve this goal. For given positive integers n and r, we establish that, corresponding to a binary cyclic code C0[n, n−r], there is a binary cyclic code C[(n+1)3^k − 1, (n+1)3^k − 1 − 3^k r], where k is a nonnegative integer, which enhances both the code rate and the error-correction capability. In the given scheme, the new code C carries the data transmitted by C0: a codeword of C0 can be encoded by the generator matrix of C, so this arrangement offers a safe and swift mode of data transfer. © 2013 SBMAC - Sociedade Brasileira de Matemática Aplicada e Computacional.
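The rate claim can be checked numerically. The sketch below (an illustration of the stated code parameters only, not the paper's algebraic construction) computes the length and dimension of the derived code C for a hypothetical base code C0[7, 4] (so n = 7, r = 3) and shows that the code rate grows with k:

```python
def derived_code_params(n, r, k):
    """Length and dimension of the derived code
    C[(n+1)*3**k - 1, (n+1)*3**k - 1 - 3**k * r]."""
    length = (n + 1) * 3**k - 1
    dimension = length - 3**k * r
    return length, dimension

# Hypothetical base code C0[7, 4]: n = 7, r = 3.
n, r = 7, 3
for k in range(4):
    length, dim = derived_code_params(n, r, k)
    print(f"k={k}: C[{length}, {dim}], rate = {dim / length:.4f}")
```

For k = 0 the parameters reduce to those of C0 itself (rate 4/7 ≈ 0.571); each increment of k yields a longer code with a strictly higher rate (e.g. C[23, 14] at k = 1, rate ≈ 0.609).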
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Polar codes are the first class of error-correcting codes proven to achieve capacity for every symmetric, discrete, memoryless channel, thanks to a recently introduced method called "Channel Polarization". This thesis describes the main encoding and decoding algorithms in detail. In particular, the performance of the simulators developed for the Successive Cancellation Decoder and the Successive Cancellation List Decoder is compared against results reported in the literature. To improve the minimum distance, and consequently the performance, we use a concatenated scheme with the polar code as the inner code and a CRC as the outer code. We also propose a new technique for analyzing channel polarization in the case of transmission over an AWGN channel, which is the most appropriate statistical model for satellite communications and deep-space applications. In addition, we investigate the importance of an accurate approximation of the polarization functions.
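Channel polarization is built from the 2×2 kernel G2 = [[1,0],[1,1]]. As a minimal sketch (not the thesis's simulator), the following computes the basic polar encoding x = u·G2^⊗n over GF(2) via an explicit Kronecker power; practical encoders use the equivalent O(N log N) butterfly instead of the full matrix:

```python
def kron(A, B):
    """Kronecker product of two 0/1 matrices (lists of lists)."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

def polar_encode(u):
    """Encode bit list u (length a power of two) as x = u * G mod 2,
    where G is the n-fold Kronecker power of [[1,0],[1,1]]."""
    G = [[1]]
    while len(G) < len(u):
        G = kron([[1, 0], [1, 1]], G)
    n = len(u)
    return [sum(u[i] * G[i][j] for i in range(n)) % 2 for j in range(n)]

print(polar_encode([0, 0, 0, 1]))  # -> [1, 1, 1, 1]
```

A handy sanity check: over GF(2) the kernel is an involution, so encoding twice returns the original bits.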
Abstract:
While beneficially decreasing the necessary incision size, arthroscopic hip surgery increases surgical complexity due to loss of joint visibility. To ease this difficulty, a computer-aided mechanical navigation system was developed to present the location of the surgical tool relative to the patient's hip joint. A preliminary study reduced the position error of the tracking linkage with a limited set of static testing trials. In this study, a correction method, including a rotational correction factor and a length correction function, was developed through more in-depth static testing. The developed correction method was then applied to additional static and dynamic testing trials to evaluate its effectiveness. For static testing, the position error decreased from an average of 0.384 inches to 0.153 inches, an error reduction of 60.5%. Three parameters used to quantify error reduction in dynamic testing did not show consistent results. The vertex coordinates achieved a 29.4% error reduction, yet with large variation in the upper vertex. The triangular area error was reduced by 5.37%, though inconsistently across the five dynamic trials. The error of the vertex angles increased, indicating a shape torsion under the developed correction method. While the established correction method effectively and consistently reduced position error in static testing, it did not produce consistent results in dynamic trials. More dynamic parameters should be explored to quantify the error reduction of dynamic testing, and more in-depth dynamic testing methodology should be conducted to further improve the accuracy of the computer-aided navigation system.
Abstract:
A quantum circuit implementing 5-qubit quantum error correction on a linear-nearest-neighbor architecture is described. The canonical decomposition is used to construct fast and simple gates that incorporate the necessary swap operations, allowing the circuit to achieve the same depth as the current least-depth circuit. Simulations of the circuit's performance when subjected to discrete and continuous errors are presented. The relationship between the error rate of a physical qubit and that of a logical qubit is investigated, with emphasis on determining the concatenated error correction threshold.
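The physical-to-logical error-rate relationship can be illustrated with a far simpler toy model than the 5-qubit code: the 3-qubit bit-flip repetition code, where a logical error occurs when two or more of the three qubits flip. This sketch (a deliberate simplification, not the paper's circuit) compares the closed-form logical rate with a Monte Carlo estimate:

```python
import random

def logical_error_rate(p):
    """Majority-vote failure probability: two or three of three qubits flip."""
    return 3 * p**2 * (1 - p) + p**3

def monte_carlo(p, trials=100_000, seed=1):
    """Estimate the same probability by sampling independent bit flips."""
    rng = random.Random(seed)
    fails = sum(sum(rng.random() < p for _ in range(3)) >= 2
                for _ in range(trials))
    return fails / trials

p = 0.1
print(logical_error_rate(p))  # ~0.028, below the physical rate 0.1
print(monte_carlo(p))         # Monte Carlo estimate, close to the above
```

Because the logical rate scales as 3p² for small p, encoding helps whenever p is below the code's threshold (p = 0.5 for this toy model); concatenating codes drives the logical rate down further, which is the effect the abstract's threshold analysis quantifies for the 5-qubit code.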
Abstract:
We describe an implementation of quantum error correction that operates continuously in time and requires no active interventions such as measurements or gates. The mechanism for carrying away the entropy introduced by errors is a cooling procedure. We evaluate the effectiveness of the scheme by simulation, and remark on its connections to some recently proposed error prevention procedures.
Abstract:
Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECM models assume non-zero entries in all their coefficient matrices. However, applications of VECM models to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECM models may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECM models may weaken the power of statistical inference. In this paper, it is argued that the zero–non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series integrated of order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies demonstrate the usefulness of the algorithm in tests of purchasing power parity and in a three-variable system involving the stock market.
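The loading coefficient at the heart of any VECM can be illustrated on a minimal bivariate example. The sketch below (a hypothetical simulation, not the paper's ZNZ selection algorithm) generates a cointegrated pair with known cointegrating coefficient β = 1 and loading α = −0.5, then recovers α by regressing Δy_t on the lagged equilibrium error y_{t−1} − βx_{t−1}; in practice β would itself be estimated, e.g. by Johansen's procedure:

```python
import random

rng = random.Random(42)
beta, alpha, T = 1.0, -0.5, 2000

# Simulate: x is a random walk (common trend); y adjusts toward
# the equilibrium y = beta * x at rate alpha per period.
x, y = [0.0], [0.0]
for _ in range(T):
    x.append(x[-1] + rng.gauss(0, 1))
    y.append(y[-1] + alpha * (y[-1] - beta * x[-2]) + rng.gauss(0, 0.5))

# OLS of dy_t on the lagged equilibrium error z_{t-1} (no intercept).
z = [y[t] - beta * x[t] for t in range(T)]      # equilibrium errors
dy = [y[t + 1] - y[t] for t in range(T)]        # first differences of y
alpha_hat = sum(zt * d for zt, d in zip(z, dy)) / sum(zt * zt for zt in z)
print(alpha_hat)  # close to the true loading -0.5
```

A zero estimate of such a loading is exactly the kind of entry a ZNZ patterned VECM would constrain to zero, which is how the pattern encodes Granger non-causality.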