873 results for Volatility premium
Abstract:
This paper models the determinants of integration in the context of global real estate security markets. Using both local and U.S. dollar denominated returns, we model conditional correlations across listed real estate sectors and also with the global stock market. The empirical results show that financial factors, such as the relationship with the respective equity market, volatility, the relative size of the real estate sector and trading turnover, all play an important role in the degree of integration present. The results also highlight the importance of macro-economic variables: all four of the macro-economic variables modeled provide at least one significant result across the specifications estimated. Factors such as financial and trade openness, monetary independence and the stability of a country’s currency all contribute to the degree of integration reported.
Abstract:
Area-wide development viability appraisals are undertaken to determine the economic feasibility of policy targets in relation to planning obligations. Essentially, development viability appraisals consist of a series of residual valuations of hypothetical development sites across a local authority area at a particular point in time. The valuations incorporate the estimated financial implications of the proposed level of planning obligations. To determine viability, the output land values are benchmarked against threshold land value, and therefore the basis on which this threshold is established and the level at which it is set are critical to development viability appraisal at the policy-setting (area-wide) level. Essentially it is an estimate of the value at which a landowner would be prepared to sell. If the estimated site values are higher than the threshold land value, the policy target is considered viable. This paper investigates the effectiveness of existing methods of determining threshold land value. These methods are tested against the relationship between development value and costs. Modelling reveals that threshold land value that is not related to shifts in development value renders marginal sites unviable and fails to collect proportionate planning obligations from high value/low cost sites. Testing the model against national average house prices and build costs reveals the high degree of volatility in residual land values over time and underlines the importance of making threshold land value relative to the main driver of this volatility, namely development value.
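The viability test described in this abstract reduces to simple arithmetic: a site's residual land value is development value minus all development costs (including the proposed planning obligations), and the policy is viable where that residual meets the threshold land value. A minimal sketch, with purely illustrative names and figures (none taken from the paper):

```python
def residual_land_value(development_value, build_cost,
                        fees_and_profit, planning_obligations):
    """Residual valuation: what remains for the land after deducting all
    development costs, including the cost of the planning obligations."""
    return development_value - build_cost - fees_and_profit - planning_obligations


def policy_is_viable(development_value, build_cost, fees_and_profit,
                     planning_obligations, threshold_land_value):
    """The policy target is viable on a site if the residual land value
    meets or exceeds the price at which a landowner would be prepared to sell."""
    rlv = residual_land_value(development_value, build_cost,
                              fees_and_profit, planning_obligations)
    return rlv >= threshold_land_value


# High-value site: residual of 1.7m against a 1.0m threshold -> viable.
print(policy_is_viable(10_000_000, 6_000_000, 1_500_000, 800_000, 1_000_000))
# Marginal site: the same fixed threshold renders it unviable.
print(policy_is_viable(7_000_000, 6_000_000, 1_000_000, 800_000, 1_000_000))
```

A fixed threshold behaves exactly as the abstract's modelling suggests: because it is unrelated to shifts in development value, it fails the marginal site while collecting no extra obligation from the high value/low cost site.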
Abstract:
Processing of highly perishable non-storable crops, such as tomato, is typically promoted for two reasons: as a way of absorbing excess supply, particularly during gluts that result from predominantly rainfed cultivation; and to enhance the value chain through a value-added process. For Ghana, improving domestic tomato processing would also reduce the country’s dependence on imported tomato paste and so improve foreign exchange reserves, as well as provide employment and development opportunities in poor rural areas of the country. Many reports simply repeat the mantra that processing offers a way of buying up the glut. Yet the reality is that the “tomato gluts,” an annual feature of the local press, occur only for a few weeks of the year, and are almost always a result of large volumes of rainfed local varieties unsuitable for processing entering the fresh market at the same time, not the improved varieties that could be used by the processors. For most of the year, the price of tomatoes suitable for processing is above the breakeven price for tomato processors, given the competition from imports. Improved varieties (such as Pectomech) that are suitable for processing are also preferred by consumers and achieve a premium price over the local varieties.
Abstract:
This paper investigates whether the intrinsic energy efficiency rating of an office building has a significant impact on its rental value. A sample of 817 transactions for offices with Energy Performance Certificates (EPCs) in the UK is used to assess whether a pricing differential can be identified, depending on the energy rating. While previous analyses of this topic have typically relied on appraisal-based and/or asking rent data, the dataset used in this research contains actual contract rents as well as information on lease terms. The results indicate a significant rental premium for energy-efficient buildings. However, it is found that this premium appears to be mainly driven by the youngest cohort of state-of-the-art energy-efficient buildings. The results also show that tenants of more energy-efficient buildings tend to pay a lower service charge, but this link appears to be rather weak and limited to newer buildings. Hence, it is argued that the information contained in the EPC is still not fully taken into account in the UK commercial property market with the possible exception of both the highest and the lowest EPC ratings.
Abstract:
The performance of real estate investment markets is difficult to monitor because the constituent assets are heterogeneous, are traded infrequently and do not trade through a central exchange in which prices can be observed. To address this, appraisal-based indices have been developed that use the records of owners for whom buildings are regularly re-valued. These indices provide a practical solution to the measurement problem, but have been criticised for understating volatility and not capturing market turning points in a timely manner. This paper evaluates alternative ‘Transaction Linked Indices’ that are estimated using an extension of the hedonic method for index construction and with Investment Property Databank data. The two types of indices are compared over Q4 2001 to Q4 2012 in order to examine whether movements in these indices are consistent. The Transaction Linked Indices show stronger growth and sharper declines than their appraisal-based counterparts over the course of the cycle in different European markets and they are typically two to four times more volatile. However, they have some limitations; for instance, only country level indicators can be published in many cases owing to low trading volumes in the period studied.
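A standard building block behind transaction-based index construction is the time-dummy hedonic regression: log transaction prices are regressed on asset characteristics plus period dummies, and the exponentiated dummy coefficients trace out the index. A minimal sketch on synthetic data; the single characteristic, the coefficient values and the setup are illustrative assumptions, not the actual specification used with the Investment Property Databank data:

```python
import numpy as np

# Synthetic transactions: log price depends on one characteristic (floor
# area) plus a log market index level that varies by period.
rng = np.random.default_rng(0)
n, periods = 200, 4
size = rng.uniform(100, 1000, n)                 # hypothetical floor area
period = rng.integers(0, periods, n)             # transaction period, 0..3
true_index = np.array([0.0, 0.05, 0.12, 0.02])   # log index (period 0 = base)
log_price = 2.0 + 0.9 * np.log(size) + true_index[period] + rng.normal(0, 0.05, n)

# Design matrix: intercept, log(size), dummies for periods 1..3.
X = np.column_stack([np.ones(n), np.log(size)] +
                    [(period == t).astype(float) for t in range(1, periods)])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# Index relative to the base period: exponentiated dummy coefficients.
index = np.exp(np.concatenate([[0.0], coef[2:]]))
print(np.round(index, 3))
```

With enough transactions per period the dummy coefficients recover the market movement net of compositional differences in the assets traded; with thin trading (the limitation noted above) they become noisy, which is why only country-level indicators may be publishable.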
Abstract:
In Britain, substantial cuts in police budgets alongside controversial handling of incidents such as politically sensitive enquiries, public disorder and relations with the media have recently triggered much debate about public knowledge and trust in the police. To date, however, little academic research has investigated how knowledge of police performance impacts citizens’ trust. We address this long-standing lacuna by exploring citizens’ trust before and after exposure to real performance data in the context of a British police force. The results reveal that being informed of performance data affects citizens’ trust significantly. Furthermore, the direction and degree of change in trust are related to variations across the different elements of the reported performance criteria. Interestingly, the volatility of citizens’ trust is related to initial performance perceptions (such that citizens with low initial perceptions of police performance react more strongly to evidence of both good and bad performance than citizens with high initial perceptions), and citizens’ intentions to support the police do not always correlate with their cognitive and affective trust towards the police. In discussing our findings, we explore how being transparent with performance data can both help and hinder the development of citizens’ trust towards a public organisation such as the police. From our study, we pose a number of ethical challenges that practitioners face when deciding what data to highlight, to whom, and for what purpose.
Abstract:
This paper examines the impact of changes in the composition of real estate stock indices, considering companies both joining and leaving the indices. Stocks that are newly included not only see a short-term increase in their share price, but trading volumes increase in a permanent fashion following the event. This highlights the importance of indices not only in a benchmarking context but also in enhancing investor awareness and aiding liquidity. By contrast, as anticipated, the share prices of firms removed from indices fall around the time of the index change. The fact that the changes in share prices, either upwards for index inclusions or downwards for deletions, are generally not reversed would indicate that the movements are not purely due to price pressure, but rather are more consistent with the information content hypothesis. There is no evidence, however, that index changes significantly affect the volatility of price changes or firms' operating performance as measured by earnings per share.
Abstract:
The external environment is characterized by periods of relative stability interspersed with periods of extreme change, implying that high performing firms must practice exploration and exploitation in order to survive and thrive. In this paper, we posit that R&D expenditure volatility indicates the presence of proactive R&D management, and is evidence of a firm moving from exploitation to exploration over time. This is consistent with a punctuated equilibrium model of R&D investment where shocks are induced by reactions to external turbulence. Using an unbalanced panel of almost 11,000 firm-years from 1997 to 2006, we show that greater fluctuations in the firm's R&D expenditure over time are associated with higher firm growth. Developing a contextual view of the relationship between R&D expenditure volatility and firm growth, we find that this relationship is weaker among firms with higher levels of corporate diversification and negative among smaller firms and those in slow clockspeed industries.
Abstract:
We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using Principal Component Analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making: subjects’ average risk taking and their sensitivity to variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over-(under-)weighting of small (large) probabilities predicted by Prospect Theory (PT); and gender differences, i.e. males being consistently less risk averse than females, but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards choices change even more, but in the direction opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an “economic anomaly” emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635).
Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity to variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking: in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and the domain effect (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, by contrast, we find that the effect of incorporating losses into the outcomes is less clear.
At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, while at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is affected more by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. We hope this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, compatible with present and future, more complex descriptions of human attitudes toward risk.
Abstract:
We analyse by simulation the impact of model-selection strategies (sometimes called pre-testing) on forecast performance in both constant- and non-constant-parameter processes. Restricted, unrestricted and selected models are compared when either of the first two might generate the data. We find little evidence that strategies such as general-to-specific induce significant over-fitting, or thereby cause forecast-failure rejection rates to greatly exceed nominal sizes. Parameter non-constancies put a premium on correct specification, but in general, model-selection effects appear to be relatively small, and progressive research is able to detect the mis-specifications.
Abstract:
A number of methods of evaluating the validity of interval forecasts of financial data are analysed, and illustrated using intraday FTSE100 index futures returns. Some existing interval forecast evaluation techniques, such as the Markov chain approach of Christoffersen (1998), are shown to be inappropriate in the presence of periodic heteroscedasticity. Instead, we consider a regression-based test, and a modified version of Christoffersen's Markov chain test for independence, and analyse their properties when the financial time series exhibit periodic volatility. These approaches lead to different conclusions when interval forecasts of FTSE100 index futures returns generated by various GARCH(1,1) and periodic GARCH(1,1) models are evaluated.
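The core object in such evaluations is the "hit sequence": an indicator of whether each realised return fell outside its forecast interval. As a hedged sketch, the unconditional-coverage component common to these tests can be written as a likelihood-ratio statistic (the function name and the toy numbers are ours; the independence and conditional-coverage tests of Christoffersen (1998) build on the same likelihoods):

```python
import math

def lr_unconditional_coverage(hits, p):
    """LR test of correct unconditional coverage for an interval forecast.

    hits : sequence of 0/1 indicators, 1 if the outcome fell outside the
           forecast interval ("violation") at that time step.
    p    : nominal violation probability (e.g. 0.05 for a 95% interval).

    Returns the LR statistic, asymptotically chi-squared with 1 degree of
    freedom under the null of correct coverage.
    """
    n = len(hits)
    x = sum(hits)                      # observed number of violations
    pi_hat = x / n                     # empirical violation rate
    if pi_hat in (0.0, 1.0):           # degenerate case: alternative log-lik is 0
        ll_alt = 0.0
    else:
        ll_alt = x * math.log(pi_hat) + (n - x) * math.log(1.0 - pi_hat)
    ll_null = x * math.log(p) + (n - x) * math.log(1.0 - p)
    return -2.0 * (ll_null - ll_alt)

# 10 violations in 100 observations against a nominal 5% rate:
lr = lr_unconditional_coverage([1] * 10 + [0] * 90, 0.05)
print(lr > 3.841)  # compare with the 5% chi-squared(1) critical value
```

Note that a test built only on the total violation count, like this one, has no power against clustering of violations, which is precisely why the paper turns to independence tests and finds the standard Markov chain version inappropriate under periodic heteroscedasticity.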
Abstract:
This paper demonstrates that the use of GARCH-type models for the calculation of minimum capital risk requirements (MCRRs) may lead to the production of inaccurate and therefore inefficient capital requirements. We show that this inaccuracy stems from the fact that GARCH models typically overstate the degree of persistence in return volatility. A simple modification to the model is found to improve the accuracy of MCRR estimates in both back- and out-of-sample tests. Given that internal risk management models are currently in widespread usage in some parts of the world (most notably the USA), and will soon be permitted for EC banks and investment firms, we believe that our paper should serve as a valuable caution to risk management practitioners who are using, or intend to use, this popular class of models.
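The persistence at issue is the sum of the ARCH and GARCH coefficients in a GARCH(1,1) variance equation: the closer α + β sits to one, the more slowly a volatility shock decays, and the larger the capital buffer implied over a multi-period horizon. A back-of-the-envelope illustration with invented parameter values (not estimates from the paper):

```python
import math

def volatility_half_life(alpha, beta):
    """Half-life (in periods) of a volatility shock under GARCH(1,1):
    the shock's effect on the conditional variance decays geometrically
    at rate alpha + beta."""
    persistence = alpha + beta
    if not 0.0 < persistence < 1.0:
        raise ValueError("covariance stationarity requires 0 < alpha + beta < 1")
    return math.log(0.5) / math.log(persistence)

# An estimated persistence of 0.98 implies shocks take ~34 periods to halve...
print(round(volatility_half_life(0.08, 0.90), 1))
# ...whereas 0.93 implies under 10 periods, hence a much smaller risk estimate
# over the horizons relevant to a capital requirement.
print(round(volatility_half_life(0.08, 0.85), 1))
```

The asymmetry matters in one direction for the paper's argument: if the fitted α + β overstates the true persistence, shocks are assumed to linger far longer than they do, and the resulting MCRR is inefficiently large.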
Abstract:
This paper combines and generalizes a number of recent time series models of daily exchange rate series by using a SETAR model which also allows the variance equation of a GARCH specification for the error terms to be drawn from more than one regime. An application of the model to the French Franc/Deutschmark exchange rate demonstrates that out-of-sample forecasts of exchange rate volatility are also improved when the restriction that the data are drawn from a single regime is removed. This result highlights the importance of considering both types of regime shift (i.e. thresholds in variance as well as in mean) when analysing financial time series.
Abstract:
A number of recent papers have employed the BDS test as a general test for mis-specification for linear and nonlinear models. We show that, for a particular class of conditionally heteroscedastic models, the BDS test is unable to detect a common mis-specification. Our results also demonstrate that specific rather than portmanteau diagnostics are required to detect neglected asymmetry in volatility. However, for both classes of tests, reasonable power is obtained only with very large sample sizes.
Abstract:
This paper contributes to the debate on the effects of the financialization of commodity futures markets by studying the conditional volatility of long–short commodity portfolios and their conditional correlations with traditional assets (stocks and bonds). Using several groups of trading strategies that hedge fund managers are known to implement, we show that long–short speculators do not cause changes in the volatilities of the portfolios they hold or changes in the conditional correlations between these portfolios and traditional assets. Thus calls for increased regulation of commodity money managers are, at this stage, premature. Additionally, long–short speculators can take comfort in knowing that their trades do not alter the risk and diversification properties of their portfolios.