942 results for equilibrium asset pricing models with latent variables
Abstract:
Depreciation is a key element in understanding the returns from, and the price of, commercial real estate. Understanding its impact is important for asset allocation models and asset management decisions. It is a key input into well-constructed pricing models, and its effect on indices of commercial real estate prices needs to be recognised. There have been a number of previous studies of the impact of depreciation on real estate, particularly in the UK. Law (2004) analysed all of these studies and found that their seemingly consistent results were an illusion, as they used a variety of measurement methods and data. In addition, none of these studies examined the impact on total returns; they examined either rental value depreciation alone or rental and capital value depreciation. This study seeks to rectify that omission, adopting the best-practice measurement framework set out by Law (2004). Using individual property data from the UK Investment Property Databank for the 10-year period 1994-2003, rental and capital depreciation, capital expenditure rates, and total return series are calculated for the data sample and for a benchmark across 10 market segments. The results are complicated by the period of analysis, which started in the aftermath of the major UK real estate recession of the early 1990s, but they give important insights into the impact of depreciation in different segments of the UK real estate investment market.
Abstract:
The increased frequency of reporting of UK property performance figures, coupled with the acceptance of the IPD database as the market standard, has enabled property to be analysed on a comparable level with other, more frequently traded assets. The most widely used theory for pricing financial assets, the Capital Asset Pricing Model (CAPM), gives market (systematic) risk, beta, centre stage. This paper seeks to measure the level of systematic risk (beta) across various property types, market conditions and investment holding periods. It extends the authors’ previous work on investment holding periods and how excess returns (alpha) relate to those holding periods. We draw on the uniquely constructed IPD/Gerald Eve transactions database, containing over 20,000 properties over the period 1983-2005. This research allows us to confirm our initial finding that properties held over longer periods perform in line with overall market performance. One implication is that, over the long term, performance may be no different from that of an index-tracking approach.
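In its simplest form, the CAPM beta referred to above comes from an OLS regression of asset excess returns on market excess returns. The sketch below is illustrative only: the data are synthetic monthly returns, not the IPD/Gerald Eve database, and the function name `capm_beta` is our own.

```python
import numpy as np

def capm_beta(asset_returns, market_returns, rf=0.0):
    """Estimate CAPM alpha and beta by OLS of asset excess returns
    on market excess returns."""
    ra = np.asarray(asset_returns) - rf
    rm = np.asarray(market_returns) - rf
    # beta = Cov(ra, rm) / Var(rm); alpha is the regression intercept
    beta = np.cov(ra, rm, ddof=1)[0, 1] / np.var(rm, ddof=1)
    alpha = ra.mean() - beta * rm.mean()
    return alpha, beta

# Synthetic example: a property whose returns track the market with beta ~ 0.7
rng = np.random.default_rng(0)
rm = rng.normal(0.008, 0.04, 240)                    # 20 years of monthly market returns
ra = 0.001 + 0.7 * rm + rng.normal(0, 0.01, 240)     # asset returns with idiosyncratic noise
alpha, beta = capm_beta(ra, rm)
```

With a long holding period the estimated beta converges toward the value used to generate the data, which is the sense in which long-held properties "perform in line with the market".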
Abstract:
Investment risk models with infinite variance provide a better description of the distributions of individual property returns in the IPD UK database over the period 1981 to 2003 than normally distributed risk models. This finding mirrors results in the US and Australia using identical methodology. Real estate investment risk is heteroskedastic, but the characteristic exponent of the investment risk function is constant across time, though it may vary by property type. Asset diversification is far less effective at reducing the impact of non-systematic investment risk on real estate portfolios than in the case of assets with normally distributed investment risk. The results therefore indicate that multi-risk-factor portfolio allocation models based on measures of investment codependence from finite-variance statistics are ineffective in the real estate context.
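The characteristic exponent mentioned above governs tail heaviness in a stable distribution: below 2, the variance does not exist. The abstract does not describe the paper's estimation procedure; as a generic illustration of tail-index estimation, a Hill estimator on synthetic Pareto-tailed data recovers a tail index of about 1.5.

```python
import numpy as np

def hill_alpha(returns, k):
    """Hill estimator of the tail index from the k largest absolute returns.
    Values below 2 indicate tails too heavy for a finite variance."""
    x = np.sort(np.abs(np.asarray(returns)))[::-1]   # descending order
    return k / np.sum(np.log(x[:k] / x[k]))          # k largest vs (k+1)-th threshold

rng = np.random.default_rng(1)
# Pareto tails with alpha = 1.5: heavy-tailed, infinite variance
heavy = rng.pareto(1.5, 20000) + 1.0
alpha_hat = hill_alpha(heavy, k=500)
```

The choice of k trades bias against variance; for a pure Pareto sample the estimator is unbiased and the value here lands close to 1.5.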
Abstract:
Decision theory is the study of models of judgement involved in, and leading to, deliberate and (usually) rational choice. In real estate investment there are normative models for the allocation of assets. These asset allocation models suggest an optimum allocation between the respective asset classes based on the investor's judgements of performance and risk. Real estate is selected, like other assets, on the basis of some criterion, most commonly its marginal contribution to the production of a mean-variance efficient multi-asset portfolio, subject to the investor's objectives and capital rationing constraints. However, decisions are made relative to current expectations and current business constraints. Whilst a decision maker may believe in the optimum exposure levels dictated by an asset allocation model, the final decision may, and often will, be influenced by factors outside the parameters of the mathematical model. This paper discusses investors' perceptions of and attitudes toward real estate and highlights the important difference between theoretical exposure levels and pragmatic business considerations. It develops a model to identify the "soft" parameters in decision making that influence the optimal allocation for the asset class. This "soft" information may relate to behavioural issues such as the tendency to mirror competitors, a desire to meet weight-of-money objectives, a desire to retain the status quo, and many other non-financial considerations. The paper aims to establish the place of property in multi-asset portfolios in the UK and to examine the asset allocation process in practice, with a view to understanding the decision-making process. It looks at investors' perceptions through an historic analysis of market expectations, a comparison with historic data, and an analysis of actual performance.
Abstract:
The application of real options theory to commercial real estate has developed rapidly over the last 15 years. In particular, several pricing models have been applied to value real options embedded in development projects. In this study we use a case study of a mixed-use development scheme and identify the major implied and explicit real options available to the developer. We offer the perspective of a real market application by exploring different binomial models and the associated methods of estimating the crucial parameter of volatility. We consider simple binomial lattices and quadranomial lattices, and demonstrate the sensitivity of the results to the choice of inputs and method.
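A Cox-Ross-Rubinstein binomial lattice, the simplest of the lattice models the study explores, can be sketched in a few lines. All figures below are invented for illustration (gross development value 10, build cost 9); this is not the case-study scheme from the paper.

```python
import numpy as np

def crr_real_option(V0, K, r, sigma, T, n):
    """Value a (European) option to acquire a completed development worth V0
    at build cost K, via a Cox-Ross-Rubinstein binomial lattice with n steps."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))       # up move factor
    d = 1.0 / u                           # down move factor
    p = (np.exp(r * dt) - d) / (u - d)    # risk-neutral up probability
    disc = np.exp(-r * dt)
    # terminal development values and option payoffs
    values = V0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    payoff = np.maximum(values - K, 0.0)
    # roll the payoffs back through the lattice to today
    for _ in range(n):
        payoff = disc * (p * payoff[:-1] + (1 - p) * payoff[1:])
    return payoff[0]

# Hypothetical inputs: value 10, cost 9, 5% rate, 25% volatility, 2-year horizon
opt = crr_real_option(V0=10.0, K=9.0, r=0.05, sigma=0.25, T=2.0, n=200)
```

The sensitivity the abstract mentions is easy to see by varying `sigma`: the option value rises well above the intrinsic value of 1.0 as volatility increases.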
Abstract:
This paper derives exact discrete time representations for data generated by a continuous time autoregressive moving average (ARMA) system with mixed stock and flow data. The representations for systems comprised entirely of stocks or of flows are also given. In each case the discrete time representations are shown to be of ARMA form, the orders depending on those of the continuous time system. Three examples and applications are also provided, two of which concern the stationary ARMA(2, 1) model with stock variables (with applications to sunspot data and a short-term interest rate) and one concerning the nonstationary ARMA(2, 1) model with a flow variable (with an application to U.S. nondurable consumers’ expenditure). In all three examples the presence of an MA(1) component in the continuous time system has a dramatic impact on eradicating unaccounted-for serial correlation that is present in the discrete time version of the ARMA(2, 0) specification, even though the form of the discrete time model is ARMA(2, 1) for both models.
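The simplest instance of an exact discrete-time representation is the continuous-time AR(1) (Ornstein-Uhlenbeck) process, whose sampled values follow a discrete AR(1) with coefficient e^(-kappa*delta) exactly, not approximately. The toy check below is not from the paper; parameter values are arbitrary.

```python
import numpy as np

kappa, sigma, delta, n = 0.8, 0.5, 0.25, 100000
phi = np.exp(-kappa * delta)                       # exact discrete AR(1) coefficient
var_eps = sigma**2 * (1 - phi**2) / (2 * kappa)    # exact innovation variance

rng = np.random.default_rng(2)
x = np.empty(n)
x[0] = rng.normal(0, np.sqrt(sigma**2 / (2 * kappa)))  # draw from stationary distribution
eps = rng.normal(0, np.sqrt(var_eps), n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]                 # exact transition, no discretization error

# OLS estimate of the AR(1) coefficient from the sampled series
phi_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
```

The estimated coefficient recovers e^(-0.2) to within sampling error, illustrating why the discrete-time orders in the paper follow directly from the continuous-time system.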
Abstract:
An evaluation is undertaken of the statistics of daily precipitation as simulated by five regional climate models, using comprehensive observations in the region of the European Alps. Four limited-area models and one variable-resolution global model are considered, all with a grid spacing of 50 km. The 15-year integrations were forced from reanalyses and observed sea surface temperature and sea ice (the global model from sea surface conditions only). The observational reference is based on 6400 rain-gauge records (10–50 stations per grid box). Evaluation statistics encompass mean precipitation, wet-day frequency, precipitation intensity, and quantiles of the frequency distribution. For mean precipitation, the models reproduce the characteristics of the annual cycle and the spatial distribution. The domain-mean bias varies between −23% and +3% in winter and between −27% and −5% in summer. Larger errors are found for other statistics. In summer, all models underestimate precipitation intensity (by 16–42%) and the frequency of heavy events is too low. This bias reflects too-dry summer mean conditions in three of the models, while it is partly compensated by too many low-intensity events in the other two. Similar inter-model differences are found for other European subregions. Interestingly, the model errors are very similar between the two models sharing the same dynamical core (but different parameterizations), and they differ considerably between the two models with similar parameterizations (but different dynamics). Despite considerable biases, the models reproduce prominent mesoscale features of heavy precipitation, which is a promising result for their use in climate-change downscaling over complex topography.
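Wet-day frequency, intensity and distribution quantiles, the evaluation statistics listed above, are straightforward to compute from a daily series. A minimal sketch, assuming the common 1 mm/day wet-day threshold (the abstract does not restate the paper's exact threshold):

```python
import numpy as np

def precip_stats(daily_mm, wet_threshold=1.0):
    """Wet-day frequency, mean wet-day intensity, and the 90th percentile
    of wet-day amounts, from a daily precipitation series in mm/day."""
    daily_mm = np.asarray(daily_mm, float)
    wet = daily_mm >= wet_threshold
    freq = wet.mean()                                       # fraction of wet days
    intensity = daily_mm[wet].mean() if wet.any() else 0.0  # mean amount on wet days
    q90 = np.quantile(daily_mm[wet], 0.9) if wet.any() else 0.0
    return freq, intensity, q90

# Ten days of made-up precipitation (mm/day)
series = [0.0, 0.2, 5.0, 12.3, 0.0, 3.1, 0.0, 40.0, 1.0, 0.0]
freq, intensity, q90 = precip_stats(series)
```

A model can match mean precipitation while being wrong on both components, too many wet days at too low an intensity, which is exactly the compensation of biases the evaluation describes.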
Abstract:
An assessment of the fifth Coupled Model Intercomparison Project (CMIP5) models' simulation of the near-surface westerly wind jet position and strength over the Atlantic, Indian and Pacific sectors of the Southern Ocean is presented. Compared with reanalysis climatologies, there is an equatorward bias of 3.7° (inter-model standard deviation of ±2.2°) in the ensemble-mean position of the zonal-mean jet. The ensemble-mean strength is biased slightly weak, with the largest biases over the Pacific sector (−1.6 ± 1.1 m/s, −22%). An analysis of atmosphere-only (AMIP) experiments indicates that 41% of the zonal-mean position bias comes from the coupling of the ocean/ice models to the atmosphere. The response to future emissions scenarios (RCP4.5 and RCP8.5) is characterized by two phases: (i) the period of most rapid ozone recovery (2000-2049), during which there is no significant change in summer; and (ii) the period 2050-2098, during which RCP4.5 simulations show no significant change but RCP8.5 simulations show poleward shifts (0.30, 0.19 and 0.28°/decade over the Atlantic, Indian and Pacific sectors respectively) and increases in strength (0.06, 0.08 and 0.15 m/s/decade respectively). The models with larger equatorward position biases generally show larger poleward shifts (i.e. state dependence). This inter-model relationship is strongest over the Pacific sector (r = −0.89) and insignificant over the Atlantic sector (r = −0.50). However, an assessment of jet structure shows that over the Atlantic sector the jet shift is significantly correlated with jet width, whereas over the Pacific sector the distance between the sub-polar and sub-tropical westerly jets appears to be more important.
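Jet position and strength diagnostics of the kind assessed above are commonly computed as the latitude of maximum zonal-mean westerly wind, often refined with a local quadratic fit to get sub-gridscale precision. The routine below is a generic sketch on an idealized wind profile (uniform latitude grid assumed), not the paper's exact method.

```python
import numpy as np

def jet_position_strength(lat, u):
    """Latitude and strength of the westerly jet: grid-point maximum of the
    zonal-mean wind, refined by a parabola through the three nearest points.
    Assumes a uniformly spaced latitude grid."""
    i = int(np.argmax(u))
    lat_jet = lat[i]
    if 0 < i < len(lat) - 1:
        y0, y1, y2 = u[i - 1], u[i], u[i + 1]
        denom = y0 - 2 * y1 + y2
        if denom != 0:
            # vertex offset of the fitted parabola, in grid spacings
            shift = 0.5 * (y0 - y2) / denom
            lat_jet = lat[i] + shift * (lat[i + 1] - lat[i])
    return lat_jet, float(u[i])

lat = np.arange(-70.0, -29.0, 2.0)              # Southern Ocean latitudes, 2-degree grid
u = 8.0 * np.exp(-((lat + 52.0) / 8.0) ** 2)    # idealized jet centred at 52S, 8 m/s peak
pos, strength = jet_position_strength(lat, u)
```

An equatorward bias like the 3.7° reported above would show up directly as `pos` sitting north of the reanalysis value computed the same way.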
Abstract:
Analysis of 20th century simulations of the High resolution Global Environment Model (HiGEM) and the Third Coupled Model Intercomparison Project (CMIP3) models shows that most have a cold sea-surface temperature (SST) bias in the northern Arabian Sea during boreal winter. The association between Arabian Sea SST and the South Asian monsoon has been widely studied in observations and models, with winter cold biases known to be detrimental to rainfall simulation during the subsequent monsoon in coupled general circulation models (GCMs). However, the causes of these SST biases are not well understood. Indeed this is one of the first papers to address causes of the cold biases. The models show anomalously strong north-easterly winter monsoon winds and cold air temperatures in north-west India, Pakistan and beyond. This leads to the anomalous advection of cold, dry air over the Arabian Sea. The cold land region is also associated with an anomalously strong meridional surface temperature gradient during winter, contributing to the enhanced low-level convergence and excessive precipitation over the western equatorial Indian Ocean seen in many models.
Abstract:
A favoured method of assimilating information from state-of-the-art climate models into integrated assessment models of climate impacts is to use the transient climate response (TCR) of the climate models as an input, sometimes accompanied by a pattern matching approach to provide spatial information. More recent approaches to the problem use TCR with another independent piece of climate model output: the land-sea surface warming ratio (φ). In this paper we show why the use of φ in addition to TCR has such utility. Multiple linear regressions of surface temperature change onto TCR and φ in 22 climate models from the CMIP3 multi-model database show that the inclusion of φ explains a much greater fraction of the inter-model variance than using TCR alone. The improvement is particularly pronounced in North America and Eurasia in the boreal summer season, and in the Amazon all year round. The use of φ as the second metric is beneficial for three reasons: firstly, it is uncorrelated with TCR in state-of-the-art climate models and can therefore be considered as an independent metric; secondly, because of its projected time-invariance, the magnitude of φ is better constrained than TCR in the immediate future; thirdly, the use of two variables is much simpler than approaches such as pattern scaling from climate models. Finally we show how using the latest estimates of φ from climate models, with a mean value of 1.6 (as opposed to previously reported values of 1.4), can significantly increase the mean time-integrated discounted damage projections in a state-of-the-art integrated assessment model by about 15%. When compared to damages calculated without the inclusion of the land-sea warming ratio, this figure rises to 65%, equivalent to almost 200 trillion dollars over 200 years.
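The multiple linear regression described above (regional land warming regressed onto TCR and φ across models) can be sketched with ordinary least squares. All numbers below are invented for illustration; they are not CMIP3 values, and the real analysis uses 22 models rather than six.

```python
import numpy as np

# Hypothetical per-model values, constructed so the linear model fits exactly
tcr = np.array([1.5, 1.8, 2.0, 2.2, 1.6, 1.9])         # transient climate response, K
phi = np.array([1.4, 1.6, 1.7, 1.5, 1.8, 1.6])         # land-sea warming ratio
dT_land = np.array([2.54, 2.96, 3.22, 3.30, 2.88, 3.06])  # regional land warming, K

# Multiple linear regression: dT_land ~ b0 + b1*TCR + b2*phi
X = np.column_stack([np.ones_like(tcr), tcr, phi])
coef, *_ = np.linalg.lstsq(X, dT_land, rcond=None)
fitted = X @ coef
r2 = 1 - np.sum((dT_land - fitted) ** 2) / np.sum((dT_land - dT_land.mean()) ** 2)
```

Because `tcr` and `phi` are (by construction) not collinear, the two-predictor fit cleanly separates their contributions; in the paper the analogous gain is the jump in explained inter-model variance when φ joins TCR.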
Abstract:
Neural field models describe the coarse-grained activity of populations of interacting neurons. Because of the laminar structure of real cortical tissue they are often studied in two spatial dimensions, where they are well known to generate rich patterns of spatiotemporal activity. Such patterns have been interpreted in a variety of contexts ranging from the understanding of visual hallucinations to the generation of electroencephalographic signals. Typical patterns include localized solutions in the form of traveling spots, as well as intricate labyrinthine structures. These patterns are naturally defined by the interface between low and high states of neural activity. Here we derive the equations of motion for such interfaces and show, for a Heaviside firing rate, that the normal velocity of an interface is given in terms of a non-local Biot-Savart type interaction over the boundaries of the high activity regions. This exact, but dimensionally reduced, system of equations is solved numerically and shown to be in excellent agreement with the full nonlinear integral equation defining the neural field. We develop a linear stability analysis for the interface dynamics that allows us to understand the mechanisms of pattern formation that arise from instabilities of spots, rings, stripes and fronts. We further show how to analyze neural field models with linear adaptation currents, and determine the conditions for the dynamic instability of spots that can give rise to breathers and traveling waves.
Abstract:
Earth system models are increasing in complexity and incorporating more processes than their predecessors, making them important tools for studying the global carbon cycle. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes, with coupled climate-carbon cycle models that represent land-use change simulating total land carbon stores by 2100 that vary by as much as 600 Pg C given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous model evaluation methodologies. Here we assess the state-of-the-art with respect to evaluation of Earth system models, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeo data and (ii) metrics for evaluation, and discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute towards the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but it is also a challenge, as more knowledge about data uncertainties is required in order to determine robust evaluation methodologies that move the field of ESM evaluation from a "beauty contest" toward the development of useful constraints on model behaviour.
Abstract:
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
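The quadratic (Brier) and logarithmic scoring rules named above, and the combination idea underlying forecast encompassing, can be written down directly. The data below are toy values, and the grid search is only an illustration of the encompassing question, whether one forecast adds anything to another, not the paper's test statistic.

```python
import numpy as np

def quadratic_score(p, y):
    """Brier (quadratic) score: mean squared distance between forecast
    probability p and binary outcome y. Lower is better."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return np.mean((p - y) ** 2)

def log_score(p, y, eps=1e-12):
    """Logarithmic score: negative mean log-likelihood of the outcomes."""
    p = np.clip(np.asarray(p, float), eps, 1 - eps)
    y = np.asarray(y, float)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Two rival recession-probability forecasts and the realized outcomes (toy data)
p1 = np.array([0.7, 0.2, 0.9, 0.4])
p2 = np.array([0.6, 0.1, 0.8, 0.6])
y = np.array([1, 0, 1, 1])

# Encompassing in miniature: find the weight on p1 in the combination
# lam*p1 + (1-lam)*p2 that minimizes the quadratic score
best_lam = min(np.linspace(0, 1, 101),
               key=lambda lam: quadratic_score(lam * p1 + (1 - lam) * p2, y))
```

Here the optimal weight on `p1` is zero, i.e. `p2` encompasses `p1` under the quadratic rule; the formal tests in the paper put a limiting distribution on exactly this kind of comparison.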
Abstract:
We describe the main differences in simulations of stratospheric climate and variability by models within the fifth Coupled Model Intercomparison Project (CMIP5) that have a model top above the stratopause and relatively fine stratospheric vertical resolution (high-top), and those that have a model top below the stratopause (low-top). Although the simulation of mean stratospheric climate by the two model ensembles is similar, the low-top model ensemble has very weak stratospheric variability on daily and interannual time scales. The frequency of major sudden stratospheric warming events is strongly underestimated by the low-top models with less than half the frequency of events observed in the reanalysis data and high-top models. The lack of stratospheric variability in the low-top models affects their stratosphere-troposphere coupling, resulting in short-lived anomalies in the Northern Annular Mode, which do not produce long-lasting tropospheric impacts, as seen in observations. The lack of stratospheric variability, however, does not appear to have any impact on the ability of the low-top models to reproduce past stratospheric temperature trends. We find little improvement in the simulation of decadal variability for the high-top models compared to the low-top, which is likely related to the fact that neither ensemble produces a realistic dynamical response to volcanic eruptions.
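The frequency of major sudden stratospheric warmings discussed above is commonly diagnosed from reversals of the 10 hPa, 60°N zonal-mean zonal wind from westerly to easterly. The counter below is a generic sketch run on synthetic data, not the detection algorithm used in the study, and the 20-day separation is an assumed convention to avoid double-counting one event.

```python
import numpy as np

def count_ssw(u, winter, separation=20):
    """Count major sudden stratospheric warmings: wintertime sign reversals
    of the 10 hPa, 60N zonal-mean zonal wind (westerly to easterly), with a
    minimum separation in days so one event is not counted twice."""
    events, last = 0, -separation - 1
    for t in range(1, len(u)):
        if winter[t] and u[t] < 0 <= u[t - 1]:   # westerly flips to easterly
            if t - last > separation:
                events += 1
            last = t
    return events

# Synthetic 120-day winter: steady westerlies with two reversals 40 days apart
u = np.full(120, 20.0)
u[30:35] = -5.0      # first warming: wind reverses for 5 days
u[70:80] = -3.0      # second warming
winter = np.ones(120, dtype=bool)
n_events = count_ssw(u, winter)
```

Applied to daily model output, a count like this makes the low-top deficit quantitative: fewer than half the reanalysis event rate, per the comparison above.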