22 results for Self-exchange Rates
in CentAUR: Central Archive University of Reading - UK
Abstract:
Many recent papers have documented periodicities in returns, return volatility, bid–ask spreads and trading volume, in both equity and foreign exchange markets. We propose and employ a new test for detecting subtle periodicities in time series data based on a signal coherence function. The technique is applied to a set of seven half-hourly exchange rate series. Overall, we find the signal coherence to be maximal at the 8-h and 12-h frequencies. Retaining only the most coherent frequencies for each series, we implement a trading rule that is based on these observed periodicities. Our results demonstrate in all cases except one that, in gross terms, the rules can generate returns that are considerably greater than those of a buy-and-hold strategy, although they cannot retain their profitability net of transactions costs. We conjecture that this methodology could constitute an important tool for financial market researchers which will enable them to detect, quantify and rank the various periodic components in financial data better.
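As a rough illustration of the kind of periodicity the paper looks for, the sketch below runs an ordinary periodogram (not the authors' signal coherence function) over a synthetic half-hourly return series in which 8-hour and 12-hour cycles have been planted; the series and all parameter values are assumptions for illustration only.

```python
import numpy as np

# Half-hourly returns: 8-hour and 12-hour cycles correspond to periods of 16 and
# 24 observations respectively (2 observations per hour).
rng = np.random.default_rng(0)
n = 9_600
t = np.arange(n)
returns = (0.02 * np.sin(2 * np.pi * t / 16)     # planted 8-hour component
           + 0.015 * np.sin(2 * np.pi * t / 24)  # planted 12-hour component
           + rng.normal(scale=0.1, size=n))      # noise

# Ordinary periodogram (the paper's signal coherence function is not reproduced here).
freqs = np.fft.rfftfreq(n, d=1.0)                # cycles per half-hour observation
power = np.abs(np.fft.rfft(returns - returns.mean())) ** 2 / n

# Report the strongest cycles, expressed in hours.
top = np.argsort(power[1:])[::-1][:5] + 1        # skip the zero frequency
for k in top:
    print(f"period ~ {1.0 / freqs[k] / 2.0:5.1f} h, power = {power[k]:.3f}")
```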
Abstract:
This paper builds upon previous research on currency bands, and provides a model for the Colombian peso. Stochastic differential equations are combined with information related to the Colombian currency band to estimate competing models of the behaviour of the Colombian peso within the limits of the currency band. The resulting moments of the density function for the simulated returns adequately describe most of the characteristics of the sample returns data. The factor included to account for the intra-marginal intervention performed to drive the rate towards the central parity accounts for only 6.5% of the daily change, which supports the argument that intervention, if performed by the Central Bank, is not directed at pushing the currency towards the limits. Moreover, the credibility of the Colombian Central Bank's (Banco de la República) ability to defend the band appears to be low.
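A minimal Euler-Maruyama sketch of the general setup, simulating a log-rate that is pulled towards central parity inside a band; the mean-reversion strength, volatility, band width and hard defence at the edges are illustrative assumptions, not the paper's estimated model.

```python
import numpy as np

# Euler-Maruyama simulation of a log-rate that mean-reverts towards central parity
# inside a band; all parameter values are purely illustrative.
rng = np.random.default_rng(1)
central_parity = 0.0      # central parity of the band, in log terms
half_width = 0.07         # band half-width (e.g. 7%)
kappa = 0.065             # strength of the intra-marginal pull towards parity
sigma = 0.004             # daily volatility
n_days = 1_000

x = np.empty(n_days)
x[0] = 0.02
for t in range(1, n_days):
    drift = -kappa * (x[t - 1] - central_parity)       # intra-marginal intervention pull
    x[t] = x[t - 1] + drift + sigma * rng.standard_normal()
    x[t] = np.clip(x[t], -half_width, half_width)      # hard defence at the band edges

returns = np.diff(x)
print(f"simulated daily returns: mean {returns.mean():.6f}, std {returns.std():.6f}")
```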
Abstract:
We consider the forecasting performance of two SETAR exchange rate models proposed by Kräger and Kugler [J. Int. Money Fin. 12 (1993) 195]. Assuming that the models are good approximations to the data generating process, we show that whether the non-linearities inherent in the data can be exploited to forecast better than a random walk depends on both how forecast accuracy is assessed and on the ‘state of nature’. Evaluation based on traditional measures, such as (root) mean squared forecast errors, may mask the superiority of the non-linear models. Generalized impulse response functions are also calculated as a means of portraying the asymmetric response to shocks implied by such models.
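A hedged sketch of the forecast comparison, using a two-regime SETAR(2; 1, 1) with placeholder coefficients (not Kräger and Kugler's estimates) against a random-walk benchmark, evaluated by root mean squared forecast error.

```python
import numpy as np

def setar_forecast(y, threshold, delay, phi_low, phi_high):
    """One-step SETAR(2; 1, 1) forecast: the AR(1) coefficient depends on whether
    the delayed observation lies below or above the threshold."""
    phi = phi_low if y[-delay] <= threshold else phi_high
    return phi * y[-1]

# Simulated returns from a two-regime process with placeholder coefficients
# (not Kraeger and Kugler's estimates).
rng = np.random.default_rng(2)
y = np.zeros(600)
for t in range(1, 600):
    phi = 0.3 if y[t - 1] <= 0.0 else -0.2
    y[t] = phi * y[t - 1] + 0.01 * rng.standard_normal()

setar_err, rw_err = [], []
for t in range(100, 599):
    f_setar = setar_forecast(y[: t + 1], threshold=0.0, delay=1, phi_low=0.3, phi_high=-0.2)
    f_rw = 0.0                       # random walk in levels: forecast no change in returns
    setar_err.append(y[t + 1] - f_setar)
    rw_err.append(y[t + 1] - f_rw)

print("RMSE, SETAR      :", np.sqrt(np.mean(np.square(setar_err))))
print("RMSE, random walk:", np.sqrt(np.mean(np.square(rw_err))))
```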
Abstract:
This paper proposes two new tests for linear and nonlinear lead/lag relationships between time series based on the concepts of cross-correlations and cross-bicorrelations, respectively. The tests are then applied to a set of Sterling-denominated exchange rates. Our analysis indicates that there existed periods during the post-Bretton Woods era where the temporal relationship between different exchange rates was strong, although these periods have become less frequent over the past 20 years. In particular, our results demonstrate the episodic nature of the nonlinearity, and have implications for the speed of flow of information between financial series. The method generalises recently proposed tests for nonlinearity to the multivariate context.
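The sketch below computes raw sample cross-correlations and cross-bicorrelations on standardised placeholder series; the windowing scheme and significance thresholds used in the paper are omitted.

```python
import numpy as np

def cross_correlation(x, y, lag):
    """Sample cross-correlation between x(t) and y(t + lag), for standardised series."""
    return np.mean(x[: len(x) - lag] * y[lag:])

def cross_bicorrelation(x, y, r, s):
    """Sample cross-bicorrelation mean[x(t) * y(t + r) * y(t + s)], the third-order
    analogue used to pick up nonlinear lead/lag dependence (0 < r <= s)."""
    t = len(x) - max(r, s)
    return np.mean(x[:t] * y[r:r + t] * y[s:s + t])

# Placeholder series in which y lags x by two periods.
rng = np.random.default_rng(3)
n = 2_000
x = rng.standard_normal(n)
y = 0.4 * np.roll(x, 2) + rng.standard_normal(n)
x = (x - x.mean()) / x.std()
y = (y - y.mean()) / y.std()

for lag in range(1, 5):
    print(f"lag {lag}: cc = {cross_correlation(x, y, lag):+.3f}, "
          f"bicorr(1,{lag}) = {cross_bicorrelation(x, y, 1, lag):+.3f}")
```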
Abstract:
We examine a method recently proposed by Hinich and Patterson (mimeo, University of Texas at Austin, 1995) for testing the validity of specifying a GARCH error structure for financial time series data in the context of a set of ten daily Sterling exchange rates. The results demonstrate that there are statistical structures present in the data that cannot be captured by a GARCH model, or any of its variants. This result has important implications for the interpretation of the recent voluminous literature which attempts to model financial asset returns using this family of models.
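A hedged sketch of the workflow: fit a GARCH(1,1) (here via the third-party `arch` package), standardise the residuals, and apply a simplified full-sample bicorrelation portmanteau as a stand-in for the windowed Hinich-Patterson test; the test form and placeholder data are assumptions, not a reproduction of the paper's procedure.

```python
import numpy as np
from scipy import stats
from arch import arch_model

# Placeholder daily returns (replace with an actual Sterling exchange-rate series).
rng = np.random.default_rng(4)
returns = 0.5 * rng.standard_normal(2_000)

# Fit a GARCH(1,1) and standardise the residuals.
res = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")
z = res.resid / res.conditional_volatility
z = (z - z.mean()) / z.std()

def bicorrelation_portmanteau(z, max_lag):
    """Portmanteau statistic built from sample bicorrelations
    C(r, s) = mean[z(t) z(t+r) z(t+s)] over 1 <= r < s <= max_lag.  Under the null
    of no remaining third-order structure it is treated as roughly chi-squared with
    max_lag * (max_lag - 1) / 2 degrees of freedom."""
    n, stat, dof = len(z), 0.0, 0
    for s in range(2, max_lag + 1):
        for r in range(1, s):
            t = n - s
            c = np.mean(z[:t] * z[r:r + t] * z[s:s + t])
            stat += t * c ** 2
            dof += 1
    return stat, dof

stat, dof = bicorrelation_portmanteau(z, max_lag=10)
print(f"H = {stat:.2f}, df = {dof}, p = {1 - stats.chi2.cdf(stat, dof):.4f}")
```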
Abstract:
This paper forecasts daily Sterling exchange rate returns using various naive, linear and non-linear univariate time-series models. The accuracy of the forecasts is evaluated using mean squared error and sign prediction criteria. These show only a very modest improvement over forecasts generated by a random walk model. The Pesaran–Timmermann test and a comparison with forecasts generated artificially show that even the best models display no evidence of market timing ability.
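A minimal sketch of the evaluation criteria on placeholder forecasts: mean squared error, the sign-prediction hit rate, and the Pesaran-Timmermann statistic in its common textbook form (an assumption here, not a reproduction of the paper's exact implementation).

```python
import numpy as np
from scipy import stats

def pesaran_timmermann(actual, forecast):
    """Pesaran-Timmermann test of directional (market-timing) ability in its common
    form: compares the observed sign hit rate with the rate expected if the signs
    of actuals and forecasts were independent."""
    n = len(actual)
    p_hit = np.mean(np.sign(actual) == np.sign(forecast))
    py, px = np.mean(actual > 0), np.mean(forecast > 0)
    p_star = py * px + (1 - py) * (1 - px)
    var_hit = p_star * (1 - p_star) / n
    var_star = ((2 * py - 1) ** 2 * px * (1 - px)
                + (2 * px - 1) ** 2 * py * (1 - py)
                + 4 * px * py * (1 - px) * (1 - py) / n) / n
    stat = (p_hit - p_star) / np.sqrt(var_hit - var_star)
    return stat, 2 * (1 - stats.norm.cdf(abs(stat)))

# Placeholder actual returns and model forecasts.
rng = np.random.default_rng(5)
actual = 0.006 * rng.standard_normal(500)
forecast = 0.006 * rng.standard_normal(500)

mse = np.mean((actual - forecast) ** 2)
hit_rate = np.mean(np.sign(actual) == np.sign(forecast))
pt_stat, pt_pval = pesaran_timmermann(actual, forecast)
print(f"MSE = {mse:.3e}, sign hit rate = {hit_rate:.3f}, PT = {pt_stat:.2f} (p = {pt_pval:.3f})")
```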
Abstract:
A number of tests for non-linear dependence in time series are presented and implemented on a set of 10 daily sterling exchange rates covering the entire post-Bretton Woods era until the present day. Irrefutable evidence of non-linearity is shown in many of the series, but most of this dependence can apparently be explained by reference to the GARCH family of models. It is suggested that the literature in this area has reached an impasse, with the presence of ARCH effects clearly demonstrated in a large number of papers, but with the tests for non-linearity which are currently available being unable to classify any additional non-linear structure.
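As one example of the family of diagnostics involved, the sketch below applies a McLeod-Li style portmanteau (a Ljung-Box statistic on squared returns) to a placeholder series with ARCH-type volatility clustering; it is not the full battery of tests used in the paper.

```python
import numpy as np
from scipy import stats

def mcleod_li(returns, max_lag=20):
    """McLeod-Li style diagnostic: a Ljung-Box portmanteau statistic applied to the
    squared (demeaned) returns.  Significant autocorrelation in the squares is the
    ARCH-type dependence that GARCH models are designed to capture."""
    x = returns - returns.mean()
    sq = x ** 2 - np.mean(x ** 2)
    n = len(sq)
    q = 0.0
    for k in range(1, max_lag + 1):
        r_k = np.sum(sq[:-k] * sq[k:]) / np.sum(sq ** 2)
        q += r_k ** 2 / (n - k)
    q *= n * (n + 2)
    return q, 1 - stats.chi2.cdf(q, max_lag)

# Placeholder series with ARCH-type volatility clustering.
rng = np.random.default_rng(6)
n = 2_000
h, e = np.ones(n), np.zeros(n)
for t in range(1, n):
    h[t] = 0.1 + 0.2 * e[t - 1] ** 2 + 0.7 * h[t - 1]
    e[t] = np.sqrt(h[t]) * rng.standard_normal()

q, p = mcleod_li(e)
print(f"Q = {q:.1f}, p-value = {p:.4f}")
```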
Abstract:
An alternative procedure to that of Lo is proposed for assessing whether there is significant evidence of persistence in time series. The technique estimates the Hurst exponent itself, and significance testing is based on an application of bootstrapping using surrogate data. The method is applied to a set of 10 daily pound exchange rates. A general lack of long-term memory is found to characterize all the series tested, in sympathy with the findings of a number of other recent papers which have used Lo's techniques.
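A hedged sketch of the approach: estimate the Hurst exponent by the classical rescaled-range method and judge significance against shuffled surrogates; the R/S estimator and the number of surrogates are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def hurst_rs(x, min_window=16):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log(R/S) against log(window length) over dyadic window sizes."""
    n = len(x)
    sizes, rs_means = [], []
    size = min_window
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            w = x[start:start + size]
            dev = np.cumsum(w - w.mean())
            if w.std() > 0:
                rs.append((dev.max() - dev.min()) / w.std())
        sizes.append(size)
        rs_means.append(np.mean(rs))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope

# Placeholder daily returns; a series with no long memory should give H near 0.5.
rng = np.random.default_rng(7)
returns = rng.standard_normal(4_096)

h_obs = hurst_rs(returns)
# Surrogates: shuffling destroys temporal dependence, giving the sampling spread
# of the estimator under the null of no long-term memory.
h_null = np.array([hurst_rs(rng.permutation(returns)) for _ in range(200)])
p_value = np.mean(h_null >= h_obs)
print(f"H = {h_obs:.3f}, surrogate mean = {h_null.mean():.3f}, one-sided p = {p_value:.3f}")
```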
Abstract:
A major gap in our understanding of the medieval economy concerns interest rates, especially relating to commercial credit. Although direct evidence about interest rates is scattered and anecdotal, there is much more surviving information about exchange rates. Since both contemporaries and historians have suggested that exchange and rechange transactions could be used to disguise the charging of interest in order to circumvent the usury prohibition, it should be possible to back out the interest rates from exchange rates. The following analysis is based on a new dataset of medieval exchange rates collected from commercial correspondence in the archive of Francesco di Marco Datini of Prato, c.1383-1411. It demonstrates that the time value of money was consistently incorporated into market exchange rates. Moreover, these implicit interest rates are broadly comparable to those received from other types of commercial loan and investment. Although on average profitable, the return on any individual exchange and rechange transaction did involve a degree of uncertainty that may have justified their non-usurious nature. However, there were also practical reasons why medieval merchants may have used foreign exchange transactions as a means of extending credit.
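A hedged arithmetic sketch of how an interest rate can be backed out of an exchange-and-rechange pair; the quotations, usance and day-count below are invented for illustration and are not figures from the Datini correspondence.

```python
# Backing an implied interest rate out of an exchange-and-rechange pair.  The
# quotations, usance and day-count below are invented purely for illustration;
# they are not figures from the Datini correspondence.
rate_at_venice = 10.25    # pence sterling per ducat, as quoted in Venice (outward exchange)
rate_at_london = 10.00    # pence sterling per ducat, as quoted in London (rechange)
usance_days = 90          # outward plus return usance for the round trip
days_in_year = 365

# A lender delivering ducats in Venice receives sterling in London at the Venice
# quotation, then rechanges it into ducats at the London quotation; the gap between
# the two quotations is the gross return on the round trip.
round_trip_return = rate_at_venice / rate_at_london - 1.0
implied_annual_rate = round_trip_return * days_in_year / usance_days
print(f"round-trip return = {round_trip_return:.2%}, "
      f"implied simple annual rate = {implied_annual_rate:.2%}")
```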
Abstract:
This chapter applies rigorous statistical analysis to existing datasets of medieval exchange rates quoted in merchants’ letters sent from Barcelona, Bruges and Venice between 1380 and 1410, which survive in the archive of Francesco di Marco Datini of Prato. First, it tests the exchange rates for stationarity. Second, it uses regression analysis to examine the seasonality of exchange rates at the three financial centres and compares them against contemporary descriptions by the merchant Giovanni di Antonio da Uzzano. Third, it tests for structural breaks in the exchange rate series.
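A minimal sketch of the three steps on placeholder data, assuming the statsmodels package: an augmented Dickey-Fuller test for stationarity, a monthly-dummy regression for seasonality, and a crude split-sample comparison standing in for a formal structural-break test.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Placeholder quotations with a mild seasonal pattern; in the chapter these would be
# the Datini quotations from Barcelona, Bruges and Venice.
rng = np.random.default_rng(8)
n = 1_500
day_of_year = np.arange(n) % 365
month = day_of_year // 30 % 12 + 1                 # crude month index, 1..12
rate = 10 + 0.3 * np.sin(2 * np.pi * day_of_year / 365) + rng.normal(0, 0.1, n)

# 1. Stationarity: augmented Dickey-Fuller test on the level of the series.
adf_stat, adf_p, *_ = adfuller(rate)
print(f"ADF statistic = {adf_stat:.2f}, p-value = {adf_p:.4f}")

# 2. Seasonality: regress the rate on monthly dummies (month 1 is the base category).
dummies = np.column_stack([(month == m).astype(float) for m in range(2, 13)])
ols = sm.OLS(rate, sm.add_constant(dummies)).fit()
print("monthly dummy coefficients:", np.round(ols.params[1:], 3))

# 3. Structural breaks: a crude split-sample comparison of the mean level.
print(f"mean level, first half = {rate[: n // 2].mean():.3f}, "
      f"second half = {rate[n // 2:].mean():.3f}")
```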
Abstract:
This paper tests directly for deterministic chaos in a set of ten daily Sterling-denominated exchange rates by calculating the largest Lyapunov exponent. Although strong evidence of nonlinearity was shown in an earlier paper, chaotic tendencies are noticeably absent from all series considered using this state-of-the-art technique. Doubt is cast on many recent papers which claim to have tested for the presence of chaos in economic data sets, based on what are argued here to be inappropriate techniques.
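A simplified, Rosenstein-style estimate of the largest Lyapunov exponent is sketched below and contrasted on a chaotic benchmark (the logistic map) and white noise; it illustrates the quantity being estimated rather than the paper's exact algorithm.

```python
import numpy as np

def largest_lyapunov(x, dim=3, tau=1, horizon=10, theiler=10):
    """Simplified Rosenstein-style estimate of the largest Lyapunov exponent: embed
    the series, pair each point with its nearest neighbour outside a Theiler window,
    and regress the average log-divergence of the pairs on time."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    usable = n - horizon
    dists = np.linalg.norm(emb[:usable, None, :] - emb[None, :usable, :], axis=2)
    idx = np.arange(usable)
    dists[np.abs(idx[:, None] - idx[None, :]) <= theiler] = np.inf   # Theiler exclusion
    nn = np.argmin(dists, axis=1)

    log_div = np.empty(horizon)
    for k in range(1, horizon + 1):
        sep = np.linalg.norm(emb[idx + k] - emb[nn + k], axis=1)
        log_div[k - 1] = np.mean(np.log(sep[sep > 0]))
    slope, _ = np.polyfit(np.arange(1, horizon + 1), log_div, 1)
    return slope            # clearly positive suggests exponential divergence (chaos)

# Chaotic benchmark (logistic map) versus an i.i.d. noise series.
rng = np.random.default_rng(9)
logistic = np.empty(1_000)
logistic[0] = 0.4
for t in range(1, 1_000):
    logistic[t] = 4.0 * logistic[t - 1] * (1.0 - logistic[t - 1])
noise = rng.standard_normal(1_000)

print("logistic map:", round(largest_lyapunov(logistic), 3))
print("white noise :", round(largest_lyapunov(noise), 3))
```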
Abstract:
Globally there have been a number of concerns about the development of genetically modified crops, many of which relate to the implications of gene flow at various levels. In Europe these concerns have led the European Union (EU) to promote the concept of 'coexistence' to allow the freedom to plant conventional and genetically modified (GM) varieties, but to minimise the presence of transgenic material within conventional crops. Should a premium for non-GM varieties emerge on the market, the presence of transgenes would generate a 'negative externality' to conventional growers. The establishment of a maximum tolerance level for the adventitious presence of GM material in conventional crops produces a threshold effect in the external costs. The existing literature suggests that apart from the biological characteristics of the plant under consideration (e.g. self-pollination rates, entomophilous species, anemophilous species, etc.), gene flow at the landscape level is affected by the relative size of the source and sink populations and the spatial arrangement of the fields in the landscape. In this paper, we take genetically modified herbicide tolerant oilseed rape (GM HT OSR) as a model crop. Starting from an individual pollen dispersal function, we develop a spatially explicit numerical model in order to assess the effect of the size of the source/sink populations and the degree of spatial aggregation on the extent of gene flow into conventional OSR varieties under two alternative settings. We find that when the transgene presence in conventional produce is detected at the field level, the external cost will increase with the size of the source area and with the level of spatial disaggregation. On the other hand, when the transgene presence is averaged among all conventional fields in the landscape (e.g. because of grain mixing before detection), the external cost will only depend on the relative size of the source area. The model could readily be incorporated into an economic evaluation of policies to regulate adoption of GM HT OSR.
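A toy version of the spatially explicit setup is sketched below: fields on a grid, an exponential pollen dispersal kernel, and a comparison of field-level versus landscape-averaged adventitious presence against the 0.9% EU labelling threshold; the kernel scale, self-pollination weight and grid size are assumptions, not the paper's calibration.

```python
import numpy as np

# Toy landscape: a grid of equally sized fields, a fraction sown to GM HT OSR (the
# pollen source) and the rest conventional.  The exponential dispersal kernel, the
# self-pollination weight and the 0.9% threshold are illustrative assumptions only.
rng = np.random.default_rng(10)
grid = 20
share_gm = 0.2
aggregated = False               # False -> GM fields scattered across the landscape

coords = np.array([(i, j) for i in range(grid) for j in range(grid)], dtype=float)
n_fields = len(coords)
n_gm = int(share_gm * n_fields)
gm_mask = np.zeros(n_fields, dtype=bool)
if aggregated:
    gm_mask[np.argsort(coords[:, 0] + coords[:, 1])[:n_gm]] = True   # contiguous corner block
else:
    gm_mask[rng.choice(n_fields, n_gm, replace=False)] = True        # dispersed sources

# Pollen received by each field from every other field: exponential decay with the
# distance between centroids, plus a heavy within-field (self-pollination) weight.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
kernel = np.exp(-dist / 0.3)
np.fill_diagonal(kernel, 5.0)

gm_pollen = kernel[:, gm_mask].sum(axis=1)
presence = gm_pollen / kernel.sum(axis=1)        # adventitious GM share per field

conv = ~gm_mask
threshold = 0.009                                # 0.9% labelling threshold
print(f"conventional fields above threshold (field-level testing): "
      f"{np.mean(presence[conv] > threshold):.1%}")
print(f"landscape-average presence (grain mixed before testing): "
      f"{presence[conv].mean():.2%}")
```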
Abstract:
Proposals have been made for a common currency for East Asia, but the countries preparing to participate need to be in a state of economic convergence. We show that at least six countries of East Asia already satisfy this condition. There also needs to be a mechanism by which the new currency relates to other reserve currencies. We demonstrate that a numéraire could be defined solely from the actual worldwide consumption of food and energy per capita, linked to fiat currencies via world market prices. We show that real resource prices are stable in real terms, and likely to remain so. Furthermore, the link from energy prices to food commodity prices is permanent, arising from energy inputs in agriculture, food processing and distribution. Calibration of currency value using a yardstick such as our SI numéraire offers an unbiased measure of the consistently stable cost of subsistence in the face of volatile currency exchange rates. This has the advantage that the participating countries need only agree to currency governance based on a common standards institution, a much less onerous form of agreement than would be required in the creation of a common central bank.
Abstract:
Using monthly time-series data for 1999-2013, the paper shows that markets for agricultural commodities provide a yardstick for real purchasing power, and thus a reference point for the real value of fiat currencies. The daily need for each adult to consume about 2800 food calories is universal; data from FAO food balance sheets confirm that the world basket of food consumed daily is non-volatile in comparison to the volatility of currency exchange rates, and so the replacement cost of food consumed provides a consistent indicator of economic value. Food commodities are storable for short periods, but ultimately perishable, and this exerts continual pressure for markets to clear in the short term; moreover, food calories can be obtained from a very large range of foodstuffs, and so most households are able to use arbitrage to select a near-optimal weighting of quantities purchased. The paper proposes an original method to enable a standard of value to be established, definable in physical units on the basis of actual worldwide consumption of food goods, with an illustration of the method.
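A hedged sketch of the basic accounting behind such a yardstick: value one day's food-energy consumption at world prices and re-express that replacement cost in several currencies; the basket shares, calorie contents, prices and exchange rates are invented placeholders, not FAO or market data.

```python
# Value one day's food-energy consumption (about 2,800 kcal per adult) at world
# market prices, and use the replacement cost as a yardstick for each currency.
# All basket shares, calorie contents, prices and spot rates below are invented
# placeholders, not FAO or market data.
basket = {
    # commodity: (share of daily calories, kcal per kg, world price in USD per kg)
    "wheat":         (0.35, 3_400, 0.25),
    "rice":          (0.30, 3_600, 0.40),
    "maize":         (0.20, 3_650, 0.20),
    "vegetable_oil": (0.15, 8_800, 1.10),
}
daily_kcal = 2_800

cost_usd = 0.0
for share, kcal_per_kg, usd_per_kg in basket.values():
    kg_needed = share * daily_kcal / kcal_per_kg     # kg of this food needed per day
    cost_usd += kg_needed * usd_per_kg

# Express the same physical basket in other currencies via (placeholder) spot rates:
# a stable basket cost in real terms exposes movements in the currencies themselves.
spot = {"USD": 1.0, "EUR": 0.92, "JPY": 150.0}
for ccy, rate in spot.items():
    print(f"daily food-energy replacement cost: {cost_usd * rate:8.2f} {ccy}")
print(f"(physical yardstick: {daily_kcal} kcal per adult per day)")
```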