27 results for Location-Allocation Models
in CentAUR: Central Archive University of Reading - UK
Abstract:
Depreciation is a key element in understanding the returns from, and the price of, commercial real estate. Understanding its impact is important for asset allocation models and asset management decisions. It is a key input into well-constructed pricing models, and its impact on indices of commercial real estate prices needs to be recognised. There have been a number of previous studies of the impact of depreciation on real estate, particularly in the UK. Law (2004) analysed all of these studies and found that the seemingly consistent results were an illusion, as they all used a variety of measurement methods and data. In addition, none of these studies examined the impact on total returns; they examined either rental value depreciation alone or rental and capital value depreciation. This study seeks to rectify this omission, adopting the best practice measurement framework set out by Law (2004). Using individual property data from the UK Investment Property Databank for the 10-year period between 1994 and 2003, rental and capital depreciation, capital expenditure rates, and total return series for the data sample and for a benchmark are calculated for 10 market segments. The results are complicated by the period of analysis, which started in the aftermath of the major UK real estate recession of the early 1990s, but they give important insights into the impact of depreciation in different segments of the UK real estate investment market.
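As a pointer to how such rates are typically defined, the sketch below implements a simple growth-differential measure of depreciation in Python. This is an illustrative assumption of one common definition, not the Law (2004) framework or the paper's actual calculations; all figures are hypothetical.

```python
# A minimal sketch, assuming the common growth-differential definition of
# depreciation: the shortfall of the aged sample's rental (or capital) value
# growth relative to a continually "new" benchmark, both per annum.
def annual_depreciation_rate(sample_growth: float, benchmark_growth: float) -> float:
    return (1 + benchmark_growth) / (1 + sample_growth) - 1

# Hypothetical example: benchmark rents grow 4% p.a., the held sample 2.5% p.a.
print(f"{annual_depreciation_rate(0.025, 0.04):.4f}")  # ~0.0146, i.e. ~1.5% p.a.
```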
Abstract:
Investment risk models with infinite variance provide a better description of distributions of individual property returns in the IPD UK database over the period 1981 to 2003 than normally distributed risk models. This finding mirrors results in the US and Australia using identical methodology. Real estate investment risk is heteroskedastic, but the characteristic exponent of the investment risk function is constant across time, although it may vary by property type. Asset diversification is far less effective at reducing the impact of non-systematic investment risk on real estate portfolios than in the case of assets with normally distributed investment risk. The results therefore indicate that multi-risk factor portfolio allocation models based on measures of investment codependence from finite-variance statistics are ineffective in the real estate context.
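For readers wanting to experiment with this approach, an alpha-stable distribution (which has infinite variance whenever its characteristic exponent alpha < 2) can be fitted with SciPy. This is a minimal sketch on synthetic returns, since the IPD data are proprietary; it is not the authors' code.

```python
# Minimal sketch: fit an alpha-stable distribution to a (synthetic) return
# series; an estimated alpha < 2 implies infinite variance. Note that
# levy_stable.fit can be slow for large samples.
import numpy as np
from scipy import stats

returns = stats.levy_stable.rvs(alpha=1.6, beta=0.0, loc=0.08, scale=0.05,
                                size=500, random_state=np.random.default_rng(0))
alpha, beta, loc, scale = stats.levy_stable.fit(returns)
print(f"estimated characteristic exponent: alpha = {alpha:.2f}")
```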
Abstract:
Decision theory is the study of models of judgement involved in, and leading to, deliberate and (usually) rational choice. In real estate investment there are normative models for the allocation of assets. These asset allocation models suggest an optimum allocation between the respective asset classes based on the investors' judgements of performance and risk. Real estate, like other assets, is selected on the basis of some criterion, commonly its marginal contribution to the production of a mean-variance efficient multi-asset portfolio, subject to the investor's objectives and capital rationing constraints. However, decisions are made relative to current expectations and current business constraints. While a decision maker may believe in the optimum exposure levels dictated by an asset allocation model, the final decision may, and often will, be influenced by factors outside the parameters of the mathematical model. This paper discusses investors' perceptions of, and attitudes toward, real estate and highlights the important difference between theoretical exposure levels and pragmatic business considerations. It develops a model to identify the "soft" parameters in decision making that influence the optimal allocation for that asset class. This "soft" information may relate to behavioural issues such as the tendency to mirror competitors, a desire to meet weight-of-money objectives, a desire to retain the status quo, and many other non-financial considerations. The paper aims to establish the place of property in multi-asset portfolios in the UK and to examine the asset allocation process in practice, with a view to understanding the decision-making process and examining investors' perceptions based on a historic analysis of market expectations, a comparison with historic data, and an analysis of actual performance.
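The normative side of this discussion, a mean-variance optimal allocation under business constraints such as no short selling, can be sketched as a small constrained optimisation. The asset classes, expected returns and covariances below are illustrative assumptions, not the paper's inputs.

```python
# A minimal mean-variance allocation sketch: maximise expected return minus a
# risk penalty, fully invested and long-only. All numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.075, 0.045, 0.065])        # equities, bonds, property
cov = np.array([[0.0256, 0.0030, 0.0045],
                [0.0030, 0.0064, 0.0012],
                [0.0045, 0.0012, 0.0100]])
risk_aversion = 4.0

def negative_utility(w):
    return -(w @ mu - 0.5 * risk_aversion * w @ cov @ w)

res = minimize(negative_utility, x0=np.ones(3) / 3,
               bounds=[(0.0, 1.0)] * 3,                              # no shorting
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print(dict(zip(["equities", "bonds", "property"], res.x.round(3))))
```

In the paper's terms, the "soft" parameters would then adjust the optimiser's output rather than appear inside the mathematical model itself.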
Abstract:
Investment risk models with infinite variance provide a better description of distributions of individual property returns in the IPD database over the period 1981 to 2003 than normally distributed risk models, which mirrors results in the US and Australia using identical methodology. Real estate investment risk is heteroscedastic, but the characteristic exponent of the investment risk function is constant across time, yet may vary by property type. Asset diversification is far less effective at reducing the impact of non-systematic investment risk on real estate portfolios than in the case of assets with normally distributed investment risk. Multi-risk factor portfolio allocation models based on measures of investment codependence from finite-variance statistics are therefore ineffectual in the real estate context.
Abstract:
Liquidity is a fundamentally important facet of investments, but there is no single measure that quantifies it perfectly. Instead, a range of measures is necessary to capture different dimensions of liquidity, such as the breadth and depth of markets, the costs of transacting, the speed with which transactions can occur, and the resilience of prices to trading activity. This article considers how different dimensions have been measured in financial markets and for various forms of real estate investment. The purpose of this exercise is to establish the range of liquidity measures that could be used for real estate investments before considering which measures and questions have been investigated so far. Most measures reviewed here are applicable to public real estate, but not all can be applied to private real estate assets or funds. Use of a broader range of liquidity measures could help real estate researchers tackle issues such as the quantification of illiquidity premiums for the real estate asset class or for different types of real estate, and how liquidity differences might be incorporated into portfolio allocation models.
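As one concrete example of a price-impact style measure from the wider finance literature (applicable to listed real estate with daily data, though not to private assets), the Amihud illiquidity ratio is easily computed; the inputs below are hypothetical.

```python
# Minimal sketch of the Amihud (2002) illiquidity ratio: the average of
# |daily return| per unit of traded value; higher values mean less liquidity.
import numpy as np

def amihud_illiquidity(returns, traded_values):
    returns = np.asarray(returns, dtype=float)
    traded_values = np.asarray(traded_values, dtype=float)
    return np.mean(np.abs(returns) / traded_values)

# Hypothetical daily returns and traded values (currency units)
print(amihud_illiquidity([0.01, -0.02, 0.005], [1.2e6, 0.8e6, 1.5e6]))
```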
Abstract:
We propose a geoadditive negative binomial model (Geo-NB-GAM) for regional count data that allows us to address simultaneously some important methodological issues, such as spatial clustering, nonlinearities, and overdispersion. This model is applied to the study of location determinants of inward greenfield investments that occurred during 2003–2007 in 249 European regions. After presenting the data set and showing the presence of overdispersion and spatial clustering, we review the theoretical framework that motivates the choice of the location determinants included in the empirical model, and we highlight some reasons why the relationship between some of the covariates and the dependent variable might be nonlinear. The subsequent section first describes the solutions proposed by previous literature to tackle spatial clustering, nonlinearities, and overdispersion, and then presents the Geo-NB-GAM. The empirical analysis shows the good performance of Geo-NB-GAM. Notably, the inclusion of a geoadditive component (a smooth spatial trend surface) permits us to control for spatial unobserved heterogeneity that induces spatial clustering. Allowing for nonlinearities reveals, in keeping with theoretical predictions, that the positive effect of agglomeration economies fades as the density of economic activities reaches some threshold value. However, no matter how dense the economic activity becomes, our results suggest that congestion costs never overcome positive agglomeration externalities.
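A rough Python sketch of the modelling idea follows; it is not the authors' Geo-NB-GAM implementation. It fits a negative binomial GLM with spline terms for a nonlinear covariate effect and a crude additive spatial trend in longitude and latitude (a genuine geoadditive model would use a two-dimensional smooth). All data are synthetic.

```python
# Minimal sketch of a negative binomial regression with spline (smooth) terms,
# assuming statsmodels + patsy; a synthetic stand-in for the 249 regions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 249
df = pd.DataFrame({"lon": rng.uniform(-10, 30, n),
                   "lat": rng.uniform(35, 60, n),
                   "density": rng.gamma(2.0, 2.0, n)})            # agglomeration proxy
mean = np.exp(0.5 + 0.4 * np.sqrt(df["density"].to_numpy()))     # flattening effect
df["investments"] = rng.negative_binomial(2, 2.0 / (2.0 + mean))

model = smf.glm("investments ~ bs(density, df=4) + bs(lon, df=4) + bs(lat, df=4)",
                data=df, family=sm.families.NegativeBinomial(alpha=1.0))
print(model.fit().params.filter(like="density"))   # spline coefficients
```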
Abstract:
The Rio Tinto river in SW Spain is a classic example of acid mine drainage and the focus of an increasing amount of research, including environmental geochemistry, extremophile microbiology and Mars-analogue studies. Its 5000-year mining legacy has resulted in a wide range of point inputs, including spoil heaps and tunnels draining underground workings. The variety of inputs and the importance of the river as a research site make it an ideal location for investigating sulphide oxidation mechanisms at the field scale. Mass balance calculations showed that pyrite oxidation accounts for over 93% of the dissolved sulphate derived from sulphide oxidation in the Rio Tinto point inputs. Oxygen isotopes in water and sulphate were analysed from a variety of drainage sources and displayed δ18O(SO4-H2O) values from 3.9 to 13.6‰, indicating that different oxidation pathways occurred at different sites within the catchment. The most commonly used approach to interpreting field oxygen isotope data applies water and oxygen fractionation factors derived from laboratory experiments. We demonstrate that this approach cannot explain high δ18O(SO4-H2O) values in a manner that is consistent with recent models of pyrite and sulphoxyanion oxidation. In the Rio Tinto, high δ18O(SO4-H2O) values (11.2-13.6‰) occur in concentrated (Fe = 172-829 mM), low pH (0.88-1.4), ferrous iron (68-91% of total Fe) waters and are most simply explained by a mechanism involving a dissolved sulphite intermediate, sulphite-water oxygen equilibrium exchange and, finally, sulphite oxidation to sulphate with O2. In contrast, drainage from large waste blocks of acid volcanic tuff with pyritiferous veins also had low pH (1.7), but had a low δ18O(SO4-H2O) value of 4.0‰ and high concentrations of ferric iron (Fe(III) = 185 mM, total Fe = 186 mM), suggesting a pathway where ferric iron is the primary oxidant, water is the primary source of oxygen in the sulphate, and sulphate is released directly from the pyrite surface. However, problems remain with the sulphite-water oxygen exchange model, and recommendations are therefore made for future experiments to refine our understanding of oxygen isotopes in pyrite oxidation.
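The headline quantity here, the water-sulphate oxygen isotope offset, is simple arithmetic on measured delta values; the sketch below uses example numbers chosen only to land at the high end of the reported range, not actual measurements from the paper.

```python
# Minimal sketch: delta18O(SO4-H2O) is the difference between the measured
# oxygen isotope compositions of dissolved sulphate and of the site water,
# in per mil. The input values are illustrative.
def delta_offset(d18o_so4: float, d18o_h2o: float) -> float:
    return d18o_so4 - d18o_h2o

print(delta_offset(9.0, -4.6))  # 13.6 per mil, the high end observed at Rio Tinto
```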
Abstract:
In April–July 2008, intensive measurements were made of atmospheric composition and chemistry in Sabah, Malaysia, as part of the "Oxidant and particle photochemical processes above a South-East Asian tropical rainforest" (OP3) project. Fluxes and concentrations of trace gases and particles were measured from and above the rainforest canopy at the Bukit Atur Global Atmosphere Watch station and at the nearby Sabahmas oil palm plantation, using both ground-based and airborne measurements. Here, the measurement and modelling strategies used, the characteristics of the sites and an overview of data obtained are described. Composition measurements show that the rainforest site was not significantly impacted by anthropogenic pollution, and this is confirmed by satellite retrievals of NO2 and HCHO. The dominant modulators of atmospheric chemistry at the rainforest site were therefore emissions of BVOCs and soil emissions of reactive nitrogen oxides. At the observed BVOC:NOx volume mixing ratio (~100 pptv/pptv), current chemical models suggest that daytime maximum OH concentrations should be ca. 10⁵ radicals cm⁻³, but observed OH concentrations were an order of magnitude greater than this. We confirm, therefore, previous measurements that suggest that an unexplained source of OH must exist above tropical rainforest and we continue to interrogate the data to find explanations for this.
Abstract:
Thirty-three snowpack models of varying complexity and purpose were evaluated across a wide range of hydrometeorological and forest canopy conditions at five Northern Hemisphere locations, for up to two winter snow seasons. Modeled estimates of snow water equivalent (SWE) or depth were compared to observations at forest and open sites at each location. Precipitation phase and the duration of above-freezing air temperatures are shown to be major influences on the divergence and convergence of modeled estimates of the subcanopy snowpack. When models are considered collectively at all locations, comparisons with observations show that it is harder to model SWE at forested sites than at open sites. There is no universal "best" model for all sites or locations, but comparison of the consistency of individual model performances relative to one another at different sites shows that there is less consistency at forest sites than at open sites, and even less consistency between forest and open sites in the same year. A good performance by a model at a forest site is therefore unlikely to imply a good performance by the same model at an open site (and vice versa). Calibration of models at forest sites provides lower errors than uncalibrated models at three out of four locations. However, the benefits of calibration do not translate to subsequent years, and the benefits gained by models calibrated for forest snow processes do not translate to open conditions.
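In practical terms, the per-site comparison described above comes down to error statistics on SWE; a minimal sketch with made-up numbers (not the study's data or exact metrics) follows.

```python
# Minimal sketch: bias and RMSE of modelled vs observed snow water
# equivalent (SWE, mm) at paired open and forest sites. Values are invented.
import numpy as np

def bias_rmse(modelled, observed):
    err = np.asarray(modelled) - np.asarray(observed)
    return err.mean(), np.sqrt((err ** 2).mean())

obs_open, mod_open = np.array([120.0, 150.0, 90.0]), np.array([110.0, 160.0, 95.0])
obs_forest, mod_forest = np.array([80.0, 100.0, 60.0]), np.array([60.0, 120.0, 85.0])

print("open  : bias=%+.1f rmse=%.1f" % bias_rmse(mod_open, obs_open))
print("forest: bias=%+.1f rmse=%.1f" % bias_rmse(mod_forest, obs_forest))
```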
Abstract:
With the current concern over climate change, descriptions of how rainfall patterns are changing over time can be useful. Observations of daily rainfall data over the last few decades provide information on these trends. Generalized linear models are typically used to model patterns in the occurrence and intensity of rainfall. These models describe rainfall patterns for an average year but are more limited when describing long-term trends, particularly when these are potentially non-linear. Generalized additive models (GAMs) provide a framework for modelling non-linear relationships by fitting smooth functions to the data. This paper describes how GAMs can extend the flexibility of models to describe seasonal patterns and long-term trends in the occurrence and intensity of daily rainfall, using data from Mauritius from 1962 to 2001. Smoothed estimates from the models provide useful graphical descriptions of changing rainfall patterns over the last 40 years at this location. GAMs are particularly helpful when exploring non-linear relationships in the data. Care is needed to ensure the choice of smooth functions is appropriate for the data and modelling objectives.
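A minimal sketch of the occurrence model in Python follows, using the pygam package as a stand-in for the authors' software: the probability of a wet day as a cyclic smooth of day-of-year plus a smooth long-term trend in year. The package choice, data and spline terms are all illustrative assumptions.

```python
# Minimal sketch, assuming pygam: a logistic GAM for rainfall occurrence with
# a cyclic spline over day-of-year ('cp' basis) and a smooth trend over year.
# The data are synthetic.
import numpy as np
from pygam import LogisticGAM, s

rng = np.random.default_rng(2)
n = 4000
doy = rng.integers(1, 366, n)
year = rng.uniform(1962, 2001, n)
p_wet = 1.0 / (1.0 + np.exp(-(np.sin(2 * np.pi * doy / 365) + 0.01 * (year - 1980))))
wet = rng.binomial(1, p_wet)

gam = LogisticGAM(s(0, basis='cp') + s(1)).fit(np.column_stack([doy, year]), wet)
gam.summary()
```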
Abstract:
A Bayesian approach to analysing data from family-based association studies is developed. This permits direct assessment of the range of possible values of model parameters, such as the recombination fraction and allelic associations, in the light of the data. In addition, sophisticated comparisons of different models may be handled easily, even when such models are not nested. The methodology is developed in such a way as to allow separate inferences to be made about linkage and association by including θ, the recombination fraction between the marker and the disease susceptibility locus under study, explicitly in the model. The method is illustrated by application to a previously published data set. The data analysis raises some interesting issues, notably with regard to the weight of evidence necessary to convince us of linkage between a candidate locus and disease.
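As a toy illustration of direct Bayesian inference about linkage (far simpler than the paper's full model, which also handles allelic association and non-nested model comparison), here is a grid posterior for θ from hypothetical counts of recombinant meioses:

```python
# A toy sketch only: grid posterior for the recombination fraction theta,
# given r recombinants in n informative meioses and a flat prior on [0, 0.5].
# The counts are hypothetical.
import numpy as np

r, n = 3, 40
theta = np.linspace(1e-4, 0.5, 500)
posterior = theta**r * (1 - theta)**(n - r)
posterior /= posterior.sum()                      # normalise over the grid

print("posterior mode:", round(float(theta[posterior.argmax()]), 3))
print("P(theta < 0.1 | data):", round(float(posterior[theta < 0.1].sum()), 3))
```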
Abstract:
Stochastic Diffusion Search (SDS), a novel Swarm Intelligence method for best-fit search, is presented that is capable of rapidly locating the optimal solution in the search space. Population-based search mechanisms employed by Swarm Intelligence methods can suffer from a lack of convergence, resulting in ill-defined stopping criteria and loss of the best solution. Conversely, as a result of its resource allocation mechanism, the solutions SDS discovers enjoy excellent stability.
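A minimal SDS sketch on the classic toy problem of locating a model string within a larger search string follows; agents repeatedly make partial (one-character) evaluations and diffuse successful hypotheses, so a stable cluster of agents forms at the solution. The parameters and the toy problem are illustrative, not taken from the paper.

```python
# Minimal Stochastic Diffusion Search sketch: find where 'model' occurs in
# 'search' using only partial evaluations and passive recruitment.
import random

random.seed(0)
search = "xxxxxxxhelloxxxxxxxxxx"
model = "hello"
n_agents, n_iters = 50, 100
max_start = len(search) - len(model) + 1
hyps = [random.randrange(max_start) for _ in range(n_agents)]
active = [False] * n_agents

for _ in range(n_iters):
    # Test phase: each agent checks one random component of its hypothesis
    for i, h in enumerate(hyps):
        j = random.randrange(len(model))
        active[i] = search[h + j] == model[j]
    # Diffusion phase: an inactive agent copies a randomly chosen agent's
    # hypothesis if that agent is active, otherwise re-seeds at random
    for i in range(n_agents):
        if not active[i]:
            k = random.randrange(n_agents)
            hyps[i] = hyps[k] if active[k] else random.randrange(max_start)

best = max(set(hyps), key=hyps.count)   # largest cluster = discovered solution
print("cluster at index", best, "->", search[best:best + len(model)])
```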
Abstract:
The ability of four operational weather forecast models [ECMWF, Action de Recherche Petite Echelle Grande Echelle model (ARPEGE), Regional Atmospheric Climate Model (RACMO), and Met Office] to generate a cloud at the right location and time (the cloud frequency of occurrence) is assessed in the present paper using a two-year time series of observations collected by profiling ground-based active remote sensors (cloud radar and lidar) located at three different sites in western Europe (Cabauw, Netherlands; Chilbolton, United Kingdom; and Palaiseau, France). Particular attention is given to potential biases that may arise from instrumentation differences (especially sensitivity) from one site to another and from intermittent sampling. In a second step, the statistical properties of the cloud variables involved in most advanced cloud schemes of numerical weather forecast models (ice water content and cloud fraction) are characterized and compared with their counterparts in the models. The two years of observations are first considered as a whole in order to evaluate the accuracy of the statistical representation of the cloud variables in each model. It is shown that all models tend to produce too many high-level clouds, with too-high cloud fraction and ice water content. The midlevel and low-level cloud occurrence is also generally overestimated, with too-low cloud fraction but a correct ice water content. The dataset is then divided into seasons to evaluate the potential of the models to generate different cloud situations in response to different large-scale forcings. Strong variations in cloud occurrence are found in the observations from one season to the same season the following year, as well as in the seasonal cycle. Overall, the model biases observed using the whole dataset are still found at the seasonal scale, but the models generally manage to reproduce the observed seasonal variations in cloud occurrence well. Overall, the models do not generate the same cloud fraction distributions, and these distributions do not agree with the observations. Another general conclusion is that the use of continuous ground-based radar and lidar observations is definitely a powerful tool for evaluating model cloud schemes and for a responsive assessment of the benefit achieved by changing or tuning a model cloud scheme.
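The central diagnostic, cloud frequency of occurrence, is simply the fraction of profiles in which cloud is detected at each height; a minimal sketch on synthetic time-height masks follows (the instrument-sensitivity matching discussed above is omitted).

```python
# Minimal sketch: cloud frequency of occurrence by level, from boolean
# time x height cloud masks for observations and a model on the same grid.
# Both masks are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(3)
obs_mask = rng.random((1000, 30)) < 0.2    # True where cloud observed
mod_mask = rng.random((1000, 30)) < 0.3    # model counterpart

obs_freq = obs_mask.mean(axis=0)           # occurrence profile, observations
mod_freq = mod_mask.mean(axis=0)           # occurrence profile, model
print("mean model bias in occurrence:", float((mod_freq - obs_freq).mean()))
```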
Abstract:
We analyze the publicly released outputs of the simulations performed by climate models (CMs) in preindustrial (PI) and Special Report on Emissions Scenarios A1B (SRESA1B) conditions. In the PI simulations, most CMs feature biases of the order of 1 W m⁻² for the net global and the net atmospheric, oceanic, and land energy balances. This does not result from transient effects but depends on the imperfect closure of the energy cycle in the fluid components and on inconsistencies over land. Thus, the planetary emission temperature is underestimated, which may explain the CMs' cold bias. In the PI scenario, CMs agree on the meridional atmospheric enthalpy transport's peak location (around 40°N/S), while discrepancies of ∼20% exist on the intensity. Disagreements on the oceanic transport peaks' location and intensity amount to ∼10° and ∼50%, respectively. In the SRESA1B runs, the atmospheric transport's peak shifts poleward, and its intensity increases up to ∼10% in both hemispheres. In most CMs, the Northern Hemispheric oceanic transport decreases, and the peaks shift equatorward in both hemispheres. The Bjerknes compensation mechanism is active both on climatological and interannual time scales. The total meridional transport peaks around 35° in both hemispheres and scenarios, whereas disagreements on the intensity reach ∼20%. With increased CO₂ concentration, the total transport increases up to ∼10%, thus contributing to polar amplification of global warming. Advances are needed for achieving a self-consistent representation of climate as a nonequilibrium thermodynamical system. This is crucial for improving the CMs' skill in representing past and future climate changes.
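For reference, transport figures like those quoted here are conventionally diagnosed by integrating the zonal-mean net flux over latitude; below is a minimal sketch using an idealized flux profile rather than any model's output.

```python
# Minimal sketch: implied northward meridional energy transport
# T(phi) = 2*pi*a^2 * integral_{-pi/2}^{phi} F_net(phi') cos(phi') dphi',
# computed by trapezoidal cumulative integration. F_net is idealized.
import numpy as np

a = 6.371e6                                    # Earth radius (m)
lat = np.deg2rad(np.linspace(-90, 90, 181))
w = np.cos(lat)                                # area weight
f_net = 40.0 * np.cos(2 * lat)                 # idealized net TOA flux (W m-2)
f_net -= np.sum(f_net * w) / np.sum(w)         # zero global mean => transport closes

integrand = f_net * w
segments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(lat)
transport = 2 * np.pi * a**2 * np.concatenate([[0.0], np.cumsum(segments)])
print("peak: %.1f PW at %.0f deg" %
      (transport.max() / 1e15, np.rad2deg(lat[transport.argmax()])))
```

With this idealized profile, the peak falls near 35°, consistent with the latitude quoted in the abstract.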