917 results for vector error correction model
Abstract:
This research investigates the hedge effectiveness and the optimal hedge ratio for the futures markets of cattle, coffee, ethanol, corn, and soybean. The optimal hedge ratio and hedge effectiveness are estimated through multivariate GARCH models with error correction, paying attention to a possible differential in the optimal hedge ratio between the crop and intercrop periods. The optimal hedge ratio is expected to be larger in the intercrop period because of the uncertainty associated with a possible supply shock (LAZZARINI, 2010). Among the futures contracts studied here, the coffee, ethanol, and soybean contracts had not yet been examined for this phenomenon, and the corn and ethanol contracts had not been the object of research dealing with dynamic hedging strategies. This paper distinguishes itself by including a GARCH model with error correction, which had never been considered in previous investigations of a possible optimal hedge ratio differential between the crop and intercrop periods. The futures prices are the commodity quotations on the BM&FBOVESPA futures market and the spot prices are the CEPEA index, at daily frequency, from May 2010 to June 2013 for cattle, coffee, ethanol, and corn, and to August 2012 for soybean. Similar results were obtained for all commodities: there is a long-run relationship between the spot and futures markets, bidirectional causality between the spot and futures markets for cattle, coffee, ethanol, and corn, and unidirectional causality from the futures price to the spot price for soybean. The optimal hedge ratio was estimated with three different strategies: linear regression by OLS (MQO), a diagonal BEKK-GARCH model, and a diagonal BEKK-GARCH model with an intercrop dummy. The OLS regression pointed to hedge inefficiency, given that the estimated optimal hedge ratio was too low. The second model represents the dynamic hedging strategy, which captured time variation in the optimal hedge ratio. The last hedging strategy did not detect an optimal hedge ratio differential between the crop and intercrop periods; therefore, contrary to what was expected, the investor does not need to increase his or her position in the futures market during the intercrop period.
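A minimal sketch of the static hedge ratio step described above, assuming log-return series for spot and futures prices (illustrative only; the paper's dynamic ratios come from conditional BEKK-GARCH covariances, which are not reproduced here):

```python
# Hedged sketch: the static (OLS/MQO) optimal hedge ratio and hedge effectiveness.
# h* = Cov(d_spot, d_futures) / Var(d_futures); effectiveness = 1 - Var(hedged)/Var(unhedged).
import numpy as np

def static_hedge_ratio(spot: np.ndarray, futures: np.ndarray):
    ds, df = np.diff(np.log(spot)), np.diff(np.log(futures))
    h = np.cov(ds, df)[0, 1] / np.var(df, ddof=1)      # OLS slope of spot returns on futures returns
    hedged = ds - h * df
    effectiveness = 1.0 - np.var(hedged, ddof=1) / np.var(ds, ddof=1)
    return h, effectiveness
```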
Abstract:
The thermal and aerial environment conditions inside animal housing facilities change throughout the day due to the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of points spatially distributed over the facility area must be monitored. This work proposes that the variation in time of the environmental variables of interest for animal production, monitored inside animal housing facilities, can be modeled accurately from records that are discrete in time. The objective of this work was to develop a numerical method to correct the temporal variation of these environmental variables, transforming the data so that the observations become independent of the time spent during measurement. The proposed method approximated the values recorded with time delays to the values expected at the exact moment of interest, as if the data had been measured simultaneously at that moment at all spatially distributed points. The numerical correction model for environmental variables was validated for the environmental parameter air temperature, and the values corrected by the method did not differ, by Tukey's test at 5% probability, from the actual values recorded by dataloggers.
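The abstract does not give the correction formula; as an illustration only, the following sketch refers time-lagged readings from spatially distributed sensors to a common reference instant by simple linear interpolation in time (all names and values are hypothetical):

```python
# Hedged sketch: estimate each sensor's value at a common reference time t_ref
# from readings taken with time delays, using numpy.interp (linear interpolation).
import numpy as np

def correct_to_reference_time(read_times, readings, t_ref):
    """read_times, readings: 2-D arrays (n_sensors, n_passes); times must be increasing per sensor."""
    corrected = np.empty(read_times.shape[0])
    for i, (t, y) in enumerate(zip(read_times, readings)):
        corrected[i] = np.interp(t_ref, t, y)  # interpolate this sensor's series to t_ref
    return corrected

# Example: two sensors read 30 s and 60 s after the reference instant t_ref = 0
times = np.array([[-300.0, 30.0, 330.0],
                  [-270.0, 60.0, 360.0]])
temps = np.array([[24.1, 24.6, 25.3],
                  [23.8, 24.9, 25.6]])
print(correct_to_reference_time(times, temps, t_ref=0.0))
```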
Abstract:
The first chapter provides evidence that aggregate Research and Development (R&D) investment drives a persistent component in productivity growth and that this embodies a risk priced in financial markets. In a semi-endogenous growth model, this component is identified by the R&D in excess of equilibrium levels and can be approximated by the error correction term in the cointegration between R&D and Total Factor Productivity. Empirically, the component turns out to be well defined and satisfies all key theoretical predictions: it exhibits the appropriate persistence, it forecasts productivity growth, and it is associated with a cross-sectional risk premium. The CAPM is the most foundational model in financial economics, but it is known to empirically underestimate the expected returns of low-risk assets and overestimate those of high-risk assets. The second chapter studies how risk omission and funding tightness jointly contribute to explaining this anomaly, with the former affecting the definition of assets' riskiness and the latter affecting how risk is remunerated. Theoretically, the two effects are shown to counteract each other. Empirically, the spread related to binding leverage constraints is found to be significant at 2% yearly. Nonetheless, average returns of portfolios that exploit this anomaly are found to mostly reflect omitted risks, in contrast to their use in the previous literature. The third chapter studies how the 'sustainability' of assets affects discount rates, an effect that is intrinsically mediated by the risk profile of the assets themselves. This has implications for the assessment of the sustainability-related spread and for hedging changes in sustainability concern. The mechanism is tested on the ESG-score dimension for US data, with inconclusive evidence regarding the existence of an ESG-related premium in the first place. Also, the risk profile of the long-short ESG portfolio is not likely to affect the sign of its average returns relative to the sustainability spread, for the time being.
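As a hedged illustration of how such an error correction term can be obtained (not the thesis's specification; column names are hypothetical), an Engle-Granger-style two-step estimate in Python:

```python
# Hedged sketch: (1) cointegrating regression of log TFP on log R&D, whose residual
# approximates the error correction term (ECT); (2) check whether the lagged ECT
# forecasts productivity growth.
import pandas as pd
import statsmodels.api as sm

def rd_tfp_ect(df: pd.DataFrame) -> pd.DataFrame:
    # df is assumed to hold annual series 'log_tfp' and 'log_rd' (hypothetical names)
    coint = sm.OLS(df["log_tfp"], sm.add_constant(df["log_rd"])).fit()
    df = df.assign(ect=coint.resid)

    dlog_tfp = df["log_tfp"].diff()                      # productivity growth
    X = sm.add_constant(df["ect"].shift(1))              # lagged ECT as predictor
    ecm = sm.OLS(dlog_tfp, X, missing="drop").fit()
    print(ecm.summary().tables[1])
    return df
```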
Abstract:
In the last few years there has been great development of technologies like quantum computers and quantum communication systems, owing to their huge potential and the growing number of applications. However, physical qubits suffer from many nonidealities, such as measurement errors and decoherence, that generate failures in the quantum computation. This work shows how it is possible to exploit concepts from classical information theory in order to realize quantum error-correcting codes by adding redundancy qubits. In particular, the threshold theorem states that it is possible to lower the percentage of decoding failures at will, provided the physical error rate is below a given accuracy threshold. The focus will be on codes belonging to the family of topological codes, such as the toric, planar, and XZZX surface codes. First, they will be compared from a theoretical point of view, in order to show their advantages and disadvantages. The algorithms behind the minimum-weight perfect matching decoder, the most popular decoder for such codes, will be presented. The last section will be dedicated to the analysis of the performance of these topological codes under different error channel models, showing interesting results. In particular, while the error correction capability of surface codes decreases in the presence of biased errors, XZZX codes possess intrinsic symmetries that allow them to improve their performance when one kind of error occurs more frequently than the others.
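As a toy illustration of the matching step behind such a decoder (assuming Manhattan distances on a grid and ignoring boundaries, error probabilities, and the rest of the surface-code machinery):

```python
# Hedged toy sketch: pair syndrome defects so that the total Manhattan distance of the
# pairing is minimal, via networkx maximum-weight matching on offset weights.
import itertools
import networkx as nx

def pair_defects(defects):
    """defects: list of (row, col) positions of stabilizers that signalled -1."""
    g = nx.Graph()
    big = 1 + max((abs(ra - rb) + abs(ca - cb)
                   for (ra, ca), (rb, cb) in itertools.combinations(defects, 2)), default=1)
    for i, j in itertools.combinations(range(len(defects)), 2):
        (ra, ca), (rb, cb) = defects[i], defects[j]
        dist = abs(ra - rb) + abs(ca - cb)
        # constant offset per edge: maximizing total weight == minimizing total distance
        g.add_edge(i, j, weight=big - dist)
    matching = nx.max_weight_matching(g, maxcardinality=True)
    return [(defects[i], defects[j]) for i, j in matching]

print(pair_defects([(0, 0), (0, 3), (2, 0), (2, 2)]))
```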
Abstract:
In this paper we provide a recipe for state protection in a network of oscillators under collective damping and diffusion. Our strategy is to manipulate the network topology, i.e., the way the oscillators are coupled together, the strength of their couplings, and their natural frequencies, in order to create a relaxation-diffusion-free channel. This protected channel defines a decoherence-free subspace (DFS) for nonzero-temperature reservoirs. Our development also furnishes an alternative approach to build up DFSs that offers two advantages over the conventional method: it enables the derivation of all the network-protected states at once, and also reveals, through the network normal modes, the mechanism behind the emergence of these protected domains.
Abstract:
We use theoretical and numerical methods to investigate the general pore-fluid flow patterns near geological lenses in hydrodynamic and hydrothermal systems respectively. Analytical solutions have been rigorously derived for the pore-fluid velocity, stream function and excess pore-fluid pressure near a circular lens in a hydrodynamic system. These analytical solutions provide not only a better understanding of the physics behind the problem, but also a valuable benchmark solution for validating any numerical method. Since a geological lens is surrounded by a medium of large extent in nature and the finite element method is efficient at modelling only media of finite size, the determination of the size of the computational domain of a finite element model, which is often overlooked by numerical analysts, is very important in order to ensure both the efficiency of the method and the accuracy of the numerical solution obtained. To highlight this issue, we use the derived analytical solutions to deduce a rigorous mathematical formula for designing the computational domain size of a finite element model. The proposed mathematical formula has indicated that, no matter how fine the mesh or how high the order of elements, the desired accuracy of a finite element solution for pore-fluid flow near a geological lens cannot be achieved unless the size of the finite element model is determined appropriately. Once the finite element computational model has been appropriately designed and validated in a hydrodynamic system, it is used to examine general pore-fluid flow patterns near geological lenses in hydrothermal systems. Some interesting conclusions on the behaviour of geological lenses in hydrodynamic and hydrothermal systems have been reached through the analytical and numerical analyses carried out in this paper.
Abstract:
The infection of insect cells with baculovirus was described in a mathematical model as a part of the structured dynamic model describing whole animal cell metabolism. The model presented here is capable of simulating cell population dynamics, the concentrations of extracellular and intracellular viral components, and the heterologous product titers. The model describes the whole processes of viral infection and the effect of the infection on the host cell metabolism. Dynamic simulation of the model in batch and fed-batch mode gave good agreement between model predictions and experimental data. Optimum conditions for insect cell culture and viral infection in batch and fed-batch culture were studied using the model.
Abstract:
We discuss quantum error correction for errors that occur at random times as described by a conditional Poisson process. We show how a class of such errors, detected spontaneous emission, can be corrected by continuous closed-loop feedback.
Abstract:
The purpose of this paper is to analyze the dynamics of the national saving-investment relationship in order to determine the degree of capital mobility in 12 Latin American countries. The analytically relevant correlation is the short-term one, defined as that between changes in saving and investment. Of special interest is the speed at which the variables return to the long-run equilibrium relationship, which is interpreted as being negatively related to the degree of capital mobility. The long-run correlation, in turn, captures the coefficient implied by the solvency constraint. We find that heterogeneity and cross-section dependence completely change the estimation of the long-run coefficient. In addition, we obtain a more precise short-run coefficient estimate than the existing estimates in the literature. There is evidence of an intermediate degree of capital mobility, and the coefficients are extremely stable over time.
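A hedged single-country sketch of the error correction regression implied by this setup (not the paper's panel estimator; variable names are hypothetical), highlighting the speed-of-adjustment coefficient:

```python
# Hedged sketch: ECM for investment and saving rates. The coefficient on the lagged ECT
# is the speed of adjustment back to the long-run (solvency) relation, the quantity the
# paper interprets as inversely related to capital mobility.
import pandas as pd
import statsmodels.api as sm

def saving_investment_ecm(inv: pd.Series, sav: pd.Series):
    ect = sm.OLS(inv, sm.add_constant(sav)).fit().resid          # long-run relation residual
    X = pd.DataFrame({"d_sav": sav.diff(), "ect_lag": ect.shift(1)})
    ecm = sm.OLS(inv.diff(), sm.add_constant(X), missing="drop").fit()
    # 'd_sav': short-run saving-retention coefficient; 'ect_lag': adjustment speed
    return ecm.params[["d_sav", "ect_lag"]]
```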
Abstract:
This paper examines the hysteresis hypothesis in Brazilian industrialized exports using a time series analysis. This hypothesis finds an empirical representation in nonlinear adjustments of the exported quantity to relative price changes. Thus, the threshold cointegration analysis proposed by Balke and Fomby [Balke, N.S. and Fomby, T.B. Threshold Cointegration. International Economic Review 1997; 38: 627-645] was used for estimating models with asymmetric adjustment of the error correction term. Among the sixteen industrial sectors selected, there was evidence of nonlinearities in the residuals of the long-run export supply or demand relationships for nine of them. These nonlinearities represent asymmetric and/or discontinuous responses of exports to different representative measures of the real exchange rate, in addition to other components of the long-run demand or supply equations.
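A hedged sketch of the asymmetric-adjustment test in the spirit of Balke and Fomby (not the paper's exact specification; the threshold is fixed at zero for simplicity):

```python
# Hedged sketch: estimate the long-run export relation by OLS, then let its residual u_t
# adjust with different speeds above and below a threshold tau (a TAR-type regression).
import pandas as pd
import statsmodels.api as sm

def asymmetric_adjustment(y: pd.Series, x: pd.DataFrame, tau: float = 0.0):
    u = sm.OLS(y, sm.add_constant(x)).fit().resid            # long-run residual u_t
    du, lag = u.diff().dropna(), u.shift(1).dropna()
    lag = lag.loc[du.index]
    above = (lag >= tau).astype(float)
    X = pd.DataFrame({"u_above": above * lag, "u_below": (1 - above) * lag})
    tar = sm.OLS(du, X).fit()                                 # rho_1, rho_2: adjustment speeds
    return tar.params, tar.pvalues
```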
Abstract:
We propose two quantum error-correction schemes which increase the maximum storage time for qubits in a system of cold-trapped ions, using a minimal number of ancillary qubits. Both schemes consider only the errors introduced by the decoherence due to spontaneous emission from the upper levels of the ions. Continuous monitoring of the ion fluorescence is used in conjunction with selective coherent feedback to eliminate these errors immediately following spontaneous emission events.
Abstract:
We examine constraints on quantum operations imposed by relativistic causality. A bipartite superoperator is said to be localizable if it can be implemented by two parties (Alice and Bob) who share entanglement but do not communicate; it is causal if the superoperator does not convey information from Alice to Bob or from Bob to Alice. We characterize the general structure of causal complete-measurement superoperators and exhibit examples that are causal but not localizable. We construct another class of causal bipartite superoperators that are not localizable by invoking bounds on the strength of correlations among the parts of a quantum system. A bipartite superoperator is said to be semilocalizable if it can be implemented with one-way quantum communication from Alice to Bob, and it is semicausal if it conveys no information from Bob to Alice. We show that all semicausal complete-measurement superoperators are semilocalizable, and we establish a general criterion for semicausality. In the multipartite case, we observe that a measurement superoperator that projects onto the eigenspaces of a stabilizer code is localizable.
Abstract:
Wootters [Phys. Rev. Lett. 80, 2245 (1998)] has given an explicit formula for the entanglement of formation of two qubits in terms of what he calls the concurrence of the joint density operator. Wootters's concurrence is defined with the help of the superoperator that flips the spin of a qubit. We generalize the spin-flip superoperator to a universal inverter, which acts on quantum systems of arbitrary dimension, and we introduce the corresponding generalized concurrence for joint pure states of D1 × D2 bipartite quantum systems. We call this generalized concurrence the I-concurrence to emphasize its relation to the universal inverter. The universal inverter, which is a positive, but not completely positive, superoperator, is closely related to the completely positive universal-NOT superoperator, the quantum analogue of a classical NOT gate. We present a physical realization of the universal-NOT superoperator.
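For reference, the standard two-qubit formulas the abstract alludes to (quoted from the literature, not from this abstract):

```latex
% Wootters's two-qubit concurrence and entanglement of formation (standard result).
\[
  \tilde{\rho} = (\sigma_y \otimes \sigma_y)\,\rho^{*}\,(\sigma_y \otimes \sigma_y), \qquad
  C(\rho) = \max\{0,\ \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4\},
\]
\[
  E(\rho) = h\!\left(\frac{1 + \sqrt{1 - C^2}}{2}\right), \qquad
  h(x) = -x\log_2 x - (1-x)\log_2(1-x),
\]
% where \lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \lambda_4 are the square roots of the
% eigenvalues of \rho\tilde{\rho}.
```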
Abstract:
Nowadays there is more and more audiovisual information, and multimedia streams or files can be shared easily and efficiently. However, the tampering of video content, such as financial information, news, or videoconference sessions used in court, can have serious consequences given the importance of that kind of information. Hence the need to ensure the authenticity and integrity of audiovisual information. This dissertation proposes an authentication system for H.264/Advanced Video Coding (AVC) video, called Autenticação de Fluxos utilizando Projecções Aleatórias (AFPA), whose authentication procedures are carried out at the level of each video frame. This scheme allows a more flexible kind of authentication, since it makes it possible to define a maximum limit on the modifications between two frames. Authentication relies on a new image authentication technique that combines random projections with an error correction mechanism applied to the data. It is thus possible to authenticate each video frame with a small set of parity bits of the corresponding random projection. Since video information is typically carried by unreliable protocols, it may suffer packet losses. To reduce the effect of packet losses on video quality and on the authentication rate, Unequal Error Protection (UEP) is used. For validation and comparison of the results, a classical system was implemented that authenticates video streams in the usual way, that is, using digital signatures and hash codes. Both schemes were evaluated with respect to the overhead introduced and the authentication rate. The results show that, for a high-quality video, the AFPA system reduces the authentication overhead by a factor of four compared with the scheme based on digital signatures and hash codes.
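A heavily simplified toy sketch of the random-projection comparison step only (the dissertation's AFPA scheme additionally derives parity bits from an error-correcting code and handles UEP, neither of which is reproduced; all parameters here are hypothetical):

```python
# Hedged toy sketch: project a frame with a shared pseudo-random matrix and accept it if
# the projection stays within a tolerance of the reference, so mild distortions pass while
# heavy tampering is rejected. A shared seed plays the role of a shared key.
import numpy as np

def frame_signature(frame: np.ndarray, k: int = 64, seed: int = 1234) -> np.ndarray:
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((k, frame.size))
    return proj @ frame.astype(np.float64).ravel()

def authenticate(received_frame, reference_signature, tol=1e3, k=64, seed=1234) -> bool:
    sig = frame_signature(received_frame, k, seed)
    return bool(np.linalg.norm(sig - reference_signature) <= tol)

original = np.random.default_rng(0).integers(0, 256, size=(16, 16))
sig = frame_signature(original)
print(authenticate(original, sig))                 # True: unmodified frame
print(authenticate(original + 120, sig))           # likely False: heavily altered frame
```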