Abstract:
This paper reviews the literature on the distribution of commercial real estate returns. There is growing evidence that the assumption of normality in returns is not safe. Distributions are found to be peaked, fat-tailed and, tentatively, skewed. There is some evidence of compound distributions and non-linearity. Publicly traded real estate assets (such as property company or REIT shares) behave more like other common stocks. However, as in equity markets, it would be unwise to assume normality uncritically. Empirical evidence for UK real estate markets is obtained by applying distribution-fitting routines to IPD Monthly Index data for the aggregate index and selected sub-sectors. It is clear that normality is rejected in most cases. It is often argued that observed differences in real estate returns are a measurement issue resulting from appraiser behaviour. However, unsmoothing the series does not assist in modelling returns. A large proportion of returns are close to zero. This would be characteristic of a thinly traded market where new information arrives infrequently. Analysis of quarterly data suggests that, over longer trading periods, return distributions may conform more closely to those found in other asset markets. These results have implications for the formulation and implementation of a multi-asset portfolio allocation strategy.
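The kind of normality check described above can be sketched with a Jarque-Bera test, which rejects normality when sample skewness and excess kurtosis are jointly too large. The IPD data are not reproduced here, so a fat-tailed synthetic series stands in for monthly returns; this is an illustrative sketch, not the paper's own routine.

```python
import random

def jarque_bera(returns):
    """Jarque-Bera statistic: tests normality via skewness and excess kurtosis.

    Under the null of normality it is asymptotically chi-squared with
    2 degrees of freedom (5% critical value about 5.99).
    """
    n = len(returns)
    mean = sum(returns) / n
    m2 = sum((r - mean) ** 2 for r in returns) / n  # central moments
    m3 = sum((r - mean) ** 3 for r in returns) / n
    m4 = sum((r - mean) ** 4 for r in returns) / n
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0
    return n / 6.0 * (skew ** 2 + excess_kurt ** 2 / 4.0)

# Synthetic fat-tailed "returns": a normal series contaminated with
# occasional high-volatility draws (a crude stand-in for peaked,
# fat-tailed real estate returns).
random.seed(0)
returns = [random.gauss(0, 1 if random.random() < 0.9 else 4)
           for _ in range(2000)]
jb = jarque_bera(returns)
print(jb > 5.99)  # normality rejected at the 5% level for this sample
```

The statistic is enormous for the contaminated series, mirroring the abstract's finding that normality is rejected for most of the IPD sub-sectors.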
Abstract:
This paper presents empirical evidence for a sample of 48 UK property company initial public offerings over the period 1986 to 1995, from which a number of conclusions can be drawn. First, property companies in general show positive average first-day returns. Second, the average first-day return for property trading companies is significantly higher than that for property investment companies.
Abstract:
Successful quantitative precipitation forecasts under convectively unstable conditions depend on the ability of the model to capture the location, timing and intensity of convection. Ensemble forecasts of two mesoscale convective outbreaks over the UK are examined with a view to understanding the nature and extent of their predictability. In addition to a control forecast, twelve ensemble members are run for each case with the same boundary conditions but with perturbations added to the boundary layer. The intention is to introduce perturbations of appropriate magnitude and scale so that the large-scale behaviour of the simulations is not changed. In one case, convection was in statistical equilibrium with the large-scale flow. This placed a constraint on the total precipitation, but the location and intensity of individual storms varied. In contrast, the other case was characterised by a large-scale capping inversion. As a result, the location of individual storms was fixed, but their intensities and the total precipitation varied strongly. The ensemble shows case-to-case variability in the nature of predictability of convection in a mesoscale model, and provides additional useful information for quantitative precipitation forecasting.
Abstract:
The Mystery of Edwin Drood has often been read as an Imperial text, just as Dickens's work has repeatedly been considered in relation to its construction of childhood. Despite this, 'the child' has either been avoided in criticism of Dickens's last novel, or has actively been read as absent. In this essay, I return the ‘repressed’ child to a reading of Drood, and through this disrupt appeals to a hard-impacted Imperial structure I understand to be made within criticism of it.
Abstract:
Lead time is an important issue in seasonal forecast modelling. In this study, the interannual predictability of the Western North Pacific (WNP) summer monsoon at different lead times was compared using one-, four-, and seven-month lead retrospective forecasts (hindcasts) from four coupled models of the Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) project for the period 1960-2005. It is found that the WNP summer anomalies, including lower-tropospheric circulation and precipitation anomalies, can be well predicted at all of these lead times. The accuracy of the four-month lead prediction is only slightly lower than that of the one-month lead prediction, although skill decreases as lead time increases.
Abstract:
The mechanisms involved in Atlantic meridional overturning circulation (AMOC) decadal variability and predictability over the last 50 years are analysed in the IPSL-CM5A-LR model using historical and initialised simulations. The initialisation procedure only uses nudging towards sea surface temperature anomalies with a physically based restoring coefficient. When compared to two independent AMOC reconstructions, both the historical and nudged ensemble simulations exhibit skill at reproducing AMOC variations from 1977 onwards, and in particular two maxima occurring around 1978 and 1997 respectively. We argue that one source of skill is related to the large Mount Agung volcanic eruption starting in 1963, which reset an internal 20-year variability cycle of the North Atlantic in the model. This cycle involves the East Greenland Current intensity and the advection of active tracers along the subpolar gyre, which leads to an AMOC maximum around 15 years after the Mount Agung eruption. The 1997 maximum occurs approximately 20 years after the former one. The nudged simulations reproduce this second maximum better than the historical simulations. This is due to the initialisation of a cooling of the convection sites in the 1980s under the effect of a persistent North Atlantic Oscillation (NAO) positive phase, a feature not captured in the historical simulations. Hence we argue that the 20-year cycle excited by the 1963 Mount Agung eruption and the NAO forcing both contributed to the 1990s AMOC maximum. These results support the existence of a 20-year cycle in the North Atlantic in the observations. Hindcasts following the CMIP5 protocol are launched from a nudged simulation every 5 years over the 1960-2005 period. They exhibit significant correlation skill against an independent reconstruction of the AMOC from the 4-year lead-time average onwards.
This encouraging result is accompanied by increased correlation skill in reproducing observed 2-m air temperature in the regions bordering the North Atlantic, as compared to non-initialised simulations. To a lesser extent, predicted precipitation tends to correlate with the nudged simulation in the tropical Atlantic. We argue that this skill is due to the initialisation and predictability of the AMOC in the present prediction system. The mechanisms evidenced here support the idea of volcanic eruptions as a pacemaker for internal variability of the AMOC. Together with the existence of a 20-year cycle in the North Atlantic, they offer a novel and complementary explanation for the AMOC variations over the last 50 years.
Abstract:
1. Nutrient concentrations (particularly N and P) determine the extent to which water bodies are or may become eutrophic. Direct determination of nutrient content on a wide scale is labour intensive, but the main sources of N and P are well known. This paper describes and tests an export coefficient model for predicting total N and total P from: (i) land use, stock headage and human population; (ii) the export rates of N and P from these sources; and (iii) the river discharge. Such a model might be used to forecast the effects of future changes in land use and to hindcast past water quality to establish comparative or baseline states for the monitoring of change. 2. The model has been calibrated against observed data for 1988 and validated against sets of observed data for a sequence of earlier years in ten British catchments varying from uplands through rolling, fertile lowlands to the flat topography of East Anglia. 3. The model predicted total N and total P concentrations with high precision (95% of the variance in observed data explained). It has been used in two forms: the first on a specific catchment basis; the second for a larger natural region which contains the catchment, with the assumption that all catchments within that region will be similar. Both forms gave similar results with little loss of precision in the latter case. This implies that it will be possible to describe the overall pattern of nutrient export in the UK with only a fraction of the effort needed to carry out the calculations for each individual water body. 4. Comparison between land use, stock headage, population numbers and nutrient export for the ten catchments in the pre-war year of 1931, and for 1970 and 1988, shows that there has been a substantial loss of rough grazing to fertilized temporary and permanent grasslands, an increase in the hectarage devoted to arable, consistent increases in the stocking of cattle and sheep and a marked movement of humans to these rural catchments. 5. All of these trends have increased the flows of nutrients, with more than a doubling of both total N and total P loads during the period. On average in these rural catchments, stock wastes have been the greatest contributors to both N and P exports, with cultivation the next most important source of N and people of P. Ratios of N to P were high in 1931 and remain little changed, so that, in these catchments, phosphorus continues to be the nutrient most likely to control algal crops in standing waters supplied by the rivers studied.
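The structure of an export coefficient model is simple enough to sketch: the annual load is the sum over sources of (export coefficient x source extent), and the flow-weighted concentration is load divided by discharge. The coefficients and catchment figures below are illustrative placeholders only, not the calibrated values from the paper.

```python
# Hypothetical export coefficients: kg N exported per unit per year.
# These numbers are illustrative, not the paper's calibrated values.
export_coeffs = {
    "arable_ha": 20.0,
    "grassland_ha": 8.0,
    "rough_grazing_ha": 2.0,
    "cattle_head": 10.0,
    "sheep_head": 1.5,
    "person": 2.5,
}

def total_n_load(catchment):
    """Total annual N export (kg/yr): sum of source extent x export rate."""
    return sum(export_coeffs[src] * amount for src, amount in catchment.items())

def mean_concentration_mg_per_l(load_kg_yr, discharge_m3_yr):
    """Flow-weighted mean concentration: 1 kg/m^3 equals 1000 mg/l."""
    return load_kg_yr / discharge_m3_yr * 1000.0

# An invented rural catchment with 60 million m^3/yr river discharge.
catchment = {
    "arable_ha": 5000, "grassland_ha": 8000, "rough_grazing_ha": 2000,
    "cattle_head": 12000, "sheep_head": 30000, "person": 9000,
}
load = total_n_load(catchment)          # 355500.0 kg N/yr for these figures
conc = mean_concentration_mg_per_l(load, 60e6)
print(round(conc, 2))                   # roughly 5.93 mg/l
```

Hindcasting then amounts to re-running the same sums with the 1931 or 1970 land use, headage and population figures, which is why the approach scales so cheaply across catchments.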
Abstract:
The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion owing to their higher resolution, but that existing bottom-up models are weak at capturing individual green-technology buying behaviour. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods for incorporating buying behaviour within a domestic energy forecast model. Of the three methods, agent-based modelling is found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, demonstrating the feasibility of the approach and its potential as a means to predict the effectiveness of various policy measures.
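The flavour of such an agent-based adoption model can be sketched in a few lines. The sketch below is a toy, not the paper's prototype: each household agent has a random willingness-to-adopt threshold, and adopts a green technology once a policy incentive plus peer pressure (the current adoption share) exceeds that threshold; the `subsidy` and `peer_weight` parameters are invented for illustration.

```python
import random

class Household:
    """Agent with a willingness-to-adopt threshold for a green technology."""
    def __init__(self, rng):
        self.threshold = rng.random()   # pressure needed before adopting
        self.adopted = False

def simulate(n_agents=1000, years=20, subsidy=0.1, peer_weight=0.5, seed=1):
    """Yearly adoption shares: an agent adopts when subsidy plus
    peer influence (proportional to the current adoption share)
    meets its threshold. Adoption is irreversible."""
    rng = random.Random(seed)
    agents = [Household(rng) for _ in range(n_agents)]
    shares = []
    for _ in range(years):
        share = sum(a.adopted for a in agents) / n_agents
        for a in agents:
            if not a.adopted and subsidy + peer_weight * share >= a.threshold:
                a.adopted = True
        shares.append(sum(a.adopted for a in agents) / n_agents)
    return shares

curve = simulate()
print(curve[0], curve[-1])  # adoption share after year 1 and year 20
```

Because adoption feeds back into peer pressure, the curve rises towards a fixed point, and raising `subsidy` shifts that equilibrium upward, which is exactly the kind of policy-effectiveness question the abstract says such models can address.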
Abstract:
This paper discusses key contextual differences and similarities in a comparative study on brownfield regeneration in England and Japan. Over the last decade, the regeneration of large-scale ‘flagship’ projects has been a primary focus in England, and previous research has discussed policy issues and key barriers at these sites. However, further research is required to explore specific barriers associated with problematic ‘hardcore’ sites suffering from long-term dereliction due to site-specific obstacles such as contamination and fragmented ownership. In comparison with England, brownfield regeneration is a relatively new urban agenda in Japan. Japan has less experience in terms of promoting redevelopment of brownfield sites at national level and the specific issues of ‘hardcore’ sites have been under-researched. The paper reviews and highlights important issues in comparing the definitions, national policy frameworks and the current stock of brownfields.
Abstract:
The evaluation of the quality and usefulness of climate modeling systems is dependent upon an assessment of both the limited predictability of the climate system and the uncertainties stemming from model formulation. In this study a methodology is presented that is suited to assess the performance of a regional climate model (RCM), based on its ability to represent the natural interannual variability on monthly and seasonal timescales. The methodology involves carrying out multiyear ensemble simulations (to assess the predictability bounds within which the model can be evaluated against observations) and multiyear sensitivity experiments using different model formulations (to assess the model uncertainty). As an example application, experiments driven by assimilated lateral boundary conditions and sea surface temperatures from the ECMWF Reanalysis Project (ERA-15, 1979–1993) were conducted. While the ensemble experiment demonstrates that the predictability of the regional climate varies strongly between different seasons and regions, being weakest during the summer and over continental regions, important sensitivities of the modeling system to parameterization choices are uncovered. In particular, compensating mechanisms related to the long-term representation of the water cycle are revealed, in which summer dry and hot conditions at the surface, resulting from insufficient evaporation, can persist despite insufficient net solar radiation (a result of unrealistic cloud-radiative feedbacks).
Abstract:
Climate is an important control on biomass burning, but the sensitivity of fire to changes in temperature and moisture balance has not been quantified. We analyze sedimentary charcoal records to show that the changes in fire regime over the past 21,000 yrs are predictable from changes in regional climates. Analyses of paleofire data show that fire increases monotonically with changes in temperature and peaks at intermediate moisture levels, and that temperature is quantitatively the most important driver of changes in biomass burning over the past 21,000 yrs. Given that a similar relationship between climate drivers and fire emerges from analyses of the interannual variability in biomass burning shown by remote-sensing observations of month-by-month burnt area between 1996 and 2008, our results signal a serious cause for concern in the face of continuing global warming.
Abstract:
This paper studies the signalling effect of the consumption-wealth ratio (cay) on German stock returns via vector error correction models (VECMs). The effect of cay on U.S. stock returns has recently been confirmed by Lettau and Ludvigson using a two-stage method. In this paper, the performance of the VECMs and of the two-stage method is compared on both German and U.S. data. It is found that the VECMs are more suitable than the two-stage method for studying the effect of cay on stock returns. Using the Conditional-Subset VECM, cay significantly signals real stock returns and excess returns in both data sets. The estimated coefficient on cay for stock returns turns out to be twice as large in U.S. data as in German data. When the two-stage method is used, cay has no significant effect on German stock returns. It is also found that cay significantly signals German wealth growth and U.S. income growth.
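As a minimal illustration of the forecasting step involved (not Lettau and Ludvigson's full estimator, whose first stage estimates the cointegrating relation that defines cay), the sketch below runs a predictive regression of next-period returns on the current cay value. The data are synthetic, with an assumed true coefficient of 0.5.

```python
import random

def ols_slope(x, y):
    """Slope and intercept of the OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx
    return beta, my - beta * mx

def predictive_regression(cay, returns):
    """Forecasting regression: return in period t+1 on cay in period t."""
    return ols_slope(cay[:-1], returns[1:])

# Synthetic data: cay is a persistent AR(1) series, and next-period
# returns load on it with an assumed true coefficient of 0.5.
rng = random.Random(42)
cay = [0.0]
for _ in range(499):
    cay.append(0.9 * cay[-1] + rng.gauss(0, 0.01))
returns = [0.0] + [0.5 * c + rng.gauss(0, 0.005) for c in cay[:-1]]

beta, alpha = predictive_regression(cay, returns)
print(round(beta, 2))  # recovers a slope close to the assumed 0.5
```

A VECM treats cay's components and returns jointly as a system, rather than plugging a pre-estimated cay into a single equation as above; the abstract's point is that the system approach detects the German signalling effect that this single-equation route misses.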
Abstract:
Ensembles of extended Atmospheric Model Intercomparison Project (AMIP) runs from the general circulation models of the National Centers for Environmental Prediction (formerly the National Meteorological Center) and the Max-Planck Institute (Hamburg, Germany) are used to estimate the potential predictability (PP) of an index of the Pacific–North America (PNA) mode of climate change. The PP of this pattern in “perfect” prediction experiments is 20%–25% of the index’s variance. The models, particularly that from MPI, capture virtually all of this variance in their hindcasts of the winter PNA for the period 1970–93. The high levels of internally generated model noise in the PNA simulations reconfirm the need for an ensemble averaging approach to climate prediction. This means that the forecasts ought to be expressed in a probabilistic manner. It is shown that the models’ skills are higher by about 50% during strong SST events in the tropical Pacific, so the probabilistic forecasts need to be conditional on the tropical SST. Taken together with earlier studies, the present results suggest that the original set of AMIP integrations (single 10-yr runs) is not adequate to reliably test the participating models’ simulations of interannual climate variability in the midlatitudes.
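A simple way to estimate potential predictability of this kind is to compare the variance of the ensemble-mean index (the SST-forced signal shared by all members) with the total variance across members. The sketch below uses synthetic data with invented signal and noise amplitudes chosen so that the ratio lands near the 20%-25% quoted above; it is an illustration of the ratio, not the AMIP analysis itself.

```python
import random

def potential_predictability(ensemble):
    """Fraction of total variance carried by the ensemble-mean signal.

    ensemble: list of members, each a list of seasonal index values.
    (A simple estimator: residual noise left in the ensemble mean
    inflates it slightly for small ensembles.)
    """
    n_members, n_years = len(ensemble), len(ensemble[0])
    ens_mean = [sum(m[t] for m in ensemble) / n_members
                for t in range(n_years)]

    def var(xs):
        mu = sum(xs) / len(xs)
        return sum((x - mu) ** 2 for x in xs) / len(xs)

    total = var([x for member in ensemble for x in member])
    return var(ens_mean) / total

# Synthetic PNA-like index: one common forced signal per year, plus
# independent internal noise about 2.5 times as strong per member.
rng = random.Random(7)
years, members = 200, 12
signal = [rng.gauss(0, 1) for _ in range(years)]
ensemble = [[s + rng.gauss(0, 2.5) for s in signal] for _ in range(members)]
pp = potential_predictability(ensemble)
print(round(pp, 2))  # near the 20%-25% range quoted above
```

The dominance of the noise term in this construction is also why the abstract stresses ensemble averaging and probabilistic forecasts: a single realization is mostly internal noise.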
Abstract:
The atmospheric response to the evolution of global sea surface temperatures from 1979 to 1992 is studied using the Max-Planck-Institut 19-level atmospheric general circulation model, ECHAM3, at T42 resolution. Five separate 14-year integrations are performed and results are presented for each individual realization and for the ensemble-averaged response. The results are compared to a 30-year control integration using a climatological monthly mean state of the sea surface temperatures and to analysis data. It is found that the ECHAM3 model, by and large, does reproduce the observed response pattern to El Niño and La Niña. During El Niño events, the subtropical jet streams in both hemispheres are intensified and displaced equatorward, and there is a tendency towards weak upper easterlies over the equator. The Southern Oscillation is a very stable feature of the integrations and is accurately reproduced in all experiments. The interannual variability at middle and high latitudes, on the other hand, is strongly dominated by chaotic dynamics, and the tropical SST forcing only modulates the atmospheric circulation. The potential predictability of the model is investigated for six different regions. The signal-to-noise ratio is large in most parts of the tropical belt, of medium strength in the western hemisphere and generally small over the European area. The ENSO signal is most pronounced during the boreal spring. A particularly strong signal in the precipitation field in the extratropics during spring can be found over the southern United States. Western Canada is normally warmer during the warm ENSO phase, while northern Europe is warmer than normal during the ENSO cold phase. The reason is advection of warm air due to a more intense Pacific low than normal during the warm ENSO phase and a more intense Icelandic low than normal during the cold ENSO phase, respectively.