151 results for futures price volatility
Abstract:
In a symmetric differentiated experimental duopoly we test the ability of Price Guarantees (PGs) to raise prices above competitive levels. Different types of PGs (‘aggressive’ and ‘soft’ price-beating and price-matching) are implemented either as an exogenously imposed market rule or as a business strategy. Our results show that PGs may move prices close to the collusive outcome, depending on whether the interaction between duopolists is repeated and provided that the guarantee is not of the ‘aggressive’ price-beating type.
Abstract:
We report experimental results on duopoly pricing with and without price beating guarantees (PBGs). In two control treatments, price beating is either imposed as an industry-wide rule or offered as a business strategy. Our major finding is that when price beating guarantees are imposed as a rule or offered as an option, effective prices are equal to or lower than those in a baseline treatment in which price beating is forbidden. Also, when price beating is offered as a business strategy, fewer than 50% of subjects adopted the guarantee, suggesting that subjects realize the pro-competitive effects of the guarantee.
Abstract:
Often, firms have no information on the specification of the true demand model they face. It is, however, a well-established fact that firms may use trial-and-error algorithms to learn how to make optimal decisions. Using experimental methods, we identify a property of the information on past actions that helps the seller of two asymmetric demand substitutes reach the optimal prices faster and more precisely. The property concerns the possibility of disaggregating changes in each product’s demand into client exit/entry and shifts from one product to the other.
Abstract:
We focus on the learning dynamics in multiproduct price-setting markets, where firms use past strategies and performance to adapt to the corresponding equilibrium.
Abstract:
Objectives: To model the impact on chronic disease of a tax on UK food and drink that internalises the wider costs to society of greenhouse gas (GHG) emissions and to estimate the potential revenue. Design: An econometric and comparative risk assessment modelling study. Setting: The UK. Participants: The UK adult population. Interventions: Two tax scenarios are modelled: (A) a tax of £2.72/tonne carbon dioxide equivalents (tCO2e)/100 g product applied to all food and drink groups with above average GHG emissions; (B) as with scenario (A), but food groups with emissions below average are subsidised to create a tax-neutral scenario. Outcome measures: Primary outcomes are the change in UK population mortality from chronic diseases following the implementation of each taxation strategy, the change in UK GHG emissions and the predicted revenue. Secondary outcomes are the changes to the micronutrient composition of the UK diet. Results: Scenario (A) results in 7770 (95% credible intervals 7150 to 8390) deaths averted and a reduction in GHG emissions of 18 683 (14 665 to 22 889) ktCO2e/year. Estimated annual revenue is £2.02 (£1.98 to £2.06) billion. Scenario (B) results in 2685 (1966 to 3402) extra deaths and a reduction in GHG emissions of 15 228 (11 245 to 19 492) ktCO2e/year. Conclusions: Incorporating the societal cost of GHG into the price of foods could save 7770 lives in the UK each year, reduce food-related GHG emissions and generate substantial tax revenue. The revenue-neutral scenario (B) demonstrates that sustainability and health goals are not always aligned. Future work should focus on investigating the health impact by population subgroup and on designing fiscal strategies to promote both sustainable and healthy diets.
Abstract:
A number of methods of evaluating the validity of interval forecasts of financial data are analysed, and illustrated using intraday FTSE100 index futures returns. Some existing interval forecast evaluation techniques, such as the Markov chain approach of Christoffersen (1998), are shown to be inappropriate in the presence of periodic heteroscedasticity. Instead, we consider a regression-based test, and a modified version of Christoffersen's Markov chain test for independence, and analyse their properties when the financial time series exhibit periodic volatility. These approaches lead to different conclusions when interval forecasts of FTSE100 index futures returns generated by various GARCH(1,1) and periodic GARCH(1,1) models are evaluated.
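For reference, the GARCH(1,1) recursion underlying such interval forecasts has the standard form below (a generic statement, not necessarily the exact specification estimated in the paper); the periodic GARCH(1,1) variant lets the coefficients vary with the intraday period:

    r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad z_t \sim \mathrm{iid}(0,1),
    \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,

and a one-step interval forecast with nominal coverage 1 - p is then, for example, \hat{\mu} \pm z_{1-p/2}\,\hat{\sigma}_{t+1} under conditional normality.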
Abstract:
Quantile forecasts are central to risk management decisions because of the widespread use of Value-at-Risk. A quantile forecast is the product of two factors: the model used to forecast volatility, and the method of computing quantiles from the volatility forecasts. In this paper we calculate and evaluate quantile forecasts of the daily exchange rate returns of five currencies. The forecasting models that have been used in recent analyses of the predictability of daily realized volatility permit a comparison of the predictive power of different measures of intraday variation and intraday returns in forecasting exchange rate variability. The methods of computing quantile forecasts include making distributional assumptions for future daily returns as well as using the empirical distribution of predicted standardized returns with both rolling and recursive samples. Our main findings are that the Heterogeneous Autoregressive model provides more accurate volatility and quantile forecasts for currencies which experience shifts in volatility, such as the Canadian dollar, and that the use of the empirical distribution to calculate quantiles can improve forecasts when there are shifts in volatility.
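As a sketch of the two ingredients just described (shown generically; the paper's exact specifications may differ), the heterogeneous autoregressive (HAR) model regresses next-day realized volatility on daily, weekly and monthly averages, and a Value-at-Risk quantile combines the resulting volatility forecast with a quantile of the assumed or empirical standardized-return distribution:

    RV_{t+1} = \beta_0 + \beta_d\, RV_t + \beta_w\, \overline{RV}_{t-4:t} + \beta_m\, \overline{RV}_{t-21:t} + \varepsilon_{t+1},
    \widehat{Q}_{t+1}(q) = \hat{\sigma}_{t+1}\, F^{-1}(q),

where F^{-1}(q) is the q-quantile of either an assumed distribution or the empirical distribution of past standardized returns.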
Abstract:
In the absence of market frictions, the cost-of-carry model of stock index futures pricing predicts that returns on the underlying stock index and the associated stock index futures contract will be perfectly contemporaneously correlated. Evidence suggests, however, that this prediction is violated with clear evidence that the stock index futures market leads the stock market. It is argued that traditional tests, which assume that the underlying data generating process is constant, might be prone to overstate the lead-lag relationship. Using a new test for lead-lag relationships based on cross correlations and cross bicorrelations it is found that, contrary to results from using the traditional methodology, periods where the futures market leads the cash market are few and far between and when any lead-lag relationship is detected, it does not last long. Overall, the results are consistent with the prediction of the standard cost-of-carry model and market efficiency.
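For reference, the cost-of-carry relation referred to above prices the index future off the spot index compounded at the riskless rate net of the dividend yield, which is why, absent frictions, futures and cash returns should move together contemporaneously:

    F_t = S_t\, e^{(r - d)(T - t)},

where S_t is the spot index level, r the riskless rate, d the dividend yield and T the contract's maturity date.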
Abstract:
Speculative bubbles are generated when investors include the expectation of the future price in their information set. Under these conditions, the actual market price of the security, that is set according to demand and supply, will be a function of the future price and vice versa. In the presence of speculative bubbles, positive expected bubble returns will lead to increased demand and will thus force prices to diverge from their fundamental value. This paper investigates whether the prices of UK equity-traded property stocks over the past 15 years contain evidence of a speculative bubble. The analysis draws upon the methodologies adopted in various studies examining price bubbles in the general stock market. Fundamental values are generated using two models: the dividend discount model and the Gordon growth model. Variance bounds tests are then applied to test for bubbles in the UK property asset prices. Finally, cointegration analysis is conducted to provide further evidence on the presence of bubbles. Evidence of the existence of bubbles is found, although these appear to be transitory and concentrated in the mid-to-late 1990s.
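The two fundamental-value benchmarks named above have standard textbook forms (stated generically here; implementation details in the paper may differ). The dividend discount model discounts expected future dividends, and the Gordon growth model assumes dividends grow at a constant rate g below the discount rate r:

    P_t = \sum_{k=1}^{\infty} \frac{E_t[D_{t+k}]}{(1+r)^k}, \qquad P_t = \frac{D_t(1+g)}{r-g}.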
Abstract:
This paper models the transmission of shocks between the US, Japanese and Australian equity markets. Tests for the existence of linear and non-linear transmission of volatility across the markets are performed using parametric and non-parametric techniques. In particular, the size and sign of return innovations are important factors in determining the degree of spillovers in volatility. It is found that a multivariate asymmetric GARCH formulation can explain almost all of the non-linear causality between markets. These results have important implications for the construction of models and forecasts of international equity returns.
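As an illustration of how the size and sign of innovations enter such models (a univariate GJR-type sketch, not the paper's multivariate specification), negative return shocks receive an additional variance loading:

    \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \gamma\,\varepsilon_{t-1}^2\,\mathbf{1}\{\varepsilon_{t-1} < 0\} + \beta\,\sigma_{t-1}^2,

so that, when \gamma > 0, a negative shock of a given size raises next-period variance by more than a positive shock of the same size.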
Abstract:
This paper uses appropriately modified information criteria to select models from the GARCH family, which are subsequently used for predicting US dollar exchange rate return volatility. The out of sample forecast accuracy of models chosen in this manner compares favourably on mean absolute error grounds, although less favourably on mean squared error grounds, with those generated by the commonly used GARCH(1, 1) model. An examination of the orders of models selected by the criteria reveals that (1, 1) models are typically selected less than 20% of the time.
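A minimal sketch of order selection by information criterion along these lines, assuming the Python arch package and a return series r (illustrative only; the paper uses modified criteria and a wider GARCH family):

    import itertools
    import numpy as np
    from arch import arch_model  # assumes the 'arch' package is installed

    def select_garch_order(r, max_p=3, max_q=3):
        """Fit GARCH(p, q) models and return the (p, q) order minimising the BIC."""
        best = None
        for p, q in itertools.product(range(1, max_p + 1), range(1, max_q + 1)):
            res = arch_model(r, vol="GARCH", p=p, q=q).fit(disp="off")
            if best is None or res.bic < best[0]:
                best = (res.bic, p, q)
        return best[1:]

    # Example with simulated heavy-tailed returns; replace with exchange rate returns.
    r = np.random.standard_t(df=6, size=2000)
    print(select_garch_order(r))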
Abstract:
This paper explores a number of statistical models for predicting the daily stock return volatility of an aggregate of all stocks traded on the NYSE. An application of linear and non-linear Granger causality tests highlights evidence of bidirectional causality, although the relationship is stronger from volatility to volume than the other way around. The out-of-sample forecasting performance of various linear, GARCH, EGARCH, GJR and neural network models of volatility are evaluated and compared. The models are also augmented by the addition of a measure of lagged volume to form more general ex-ante forecasting models. The results indicate that augmenting models of volatility with measures of lagged volume leads only to very modest improvements, if any, in forecasting performance.
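A minimal sketch of the linear Granger-causality step described above, assuming statsmodels and using simulated stand-ins for the volatility and volume series (the paper's non-linear tests and volatility forecasting models are not reproduced here):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    vol = pd.Series(rng.standard_normal(500)).abs()          # stand-in for daily volatility
    vlm = vol.shift(1).fillna(vol.mean()) + 0.1 * rng.standard_normal(500)  # stand-in for volume

    df = pd.DataFrame({"vol": vol, "vlm": vlm})
    # Convention: the second column is tested as a Granger cause of the first.
    res_volume_to_vol = grangercausalitytests(df[["vol", "vlm"]], maxlag=5)
    res_vol_to_volume = grangercausalitytests(df[["vlm", "vol"]], maxlag=5)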
Abstract:
This article examines the role of idiosyncratic volatility in explaining the cross-sectional variation of size- and value-sorted portfolio returns. We show that the premium for bearing idiosyncratic volatility varies inversely with the number of stocks included in the portfolios. This conclusion is robust within various multifactor models based on size, value, past performance, liquidity and total volatility and also holds within an ICAPM specification of the risk–return relationship. Our findings thus indicate that investors demand an additional return for bearing the idiosyncratic volatility of poorly-diversified portfolios.
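As a generic illustration of how such a premium is typically estimated (the factor sets and estimation approach in the paper may differ), a Fama-MacBeth style cross-sectional regression prices portfolio excess returns on factor loadings together with lagged idiosyncratic volatility:

    r_{p,t} - r_{f,t} = \gamma_{0,t} + \boldsymbol{\gamma}_t' \boldsymbol{\beta}_p + \lambda_t\, IVOL_{p,t-1} + \varepsilon_{p,t},

where the time-series average of \lambda_t estimates the premium for bearing idiosyncratic volatility; the abstract's finding is that this premium rises as the number of stocks in the portfolios falls.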
Abstract:
Purpose – Price indices for commercial real estate markets are difficult to construct because assets are heterogeneous, they are spatially dispersed and they are infrequently traded. Appraisal-based indices are one response to these problems, but may understate volatility or fail to capture turning points in a timely manner. This paper estimates “transaction linked indices” for major European markets to see whether these offer a different perspective on market performance. The paper aims to discuss these issues. Design/methodology/approach – The assessed value method is used to construct the indices. This has been recently applied to commercial real estate datasets in the USA and UK. The underlying data comprise appraisals and sale prices for assets monitored by Investment Property Databank (IPD). The indices are compared to appraisal-based series for the countries concerned for Q4 2001 to Q4 2012. Findings – Transaction linked indices show stronger growth and sharper declines over the course of the cycle, but they do not notably lead their appraisal-based counterparts. They are typically two to four times more volatile. Research limitations/implications – Only country-level indicators can be constructed in many cases owing to low trading volumes in the period studied, and this same issue prevented sample selection bias from being analysed in depth. Originality/value – Discussion of the utility of transaction-based price indicators is extended to European commercial real estate markets. The indicators offer alternative estimates of real estate market volatility that may be useful in asset allocation and risk modelling, including in a regulatory context.
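A stylised statement of the assessed value method mentioned above (a simplification; the implementation on the IPD data includes further controls): transaction prices are regressed on the assets' preceding appraisals plus time dummies, and the index is traced out by the estimated time effects:

    \ln P_{i,t} = \alpha + \beta \ln A_{i,t-1} + \sum_{\tau} \delta_{\tau} D_{i,\tau} + \varepsilon_{i,t},

where P_{i,t} is the sale price of asset i in period t, A_{i,t-1} its prior appraised value and D_{i,\tau} indicators for the period of sale.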