141 results for Consistent Conditional Correlation
Abstract:
The evaluation of investment fund performance has been one of the main developments of modern portfolio theory. Most studies employ the technique developed by Jensen (1968), which compares a particular fund's returns to a benchmark portfolio of equal risk. However, the standard measures of fund manager performance are known to suffer from a number of problems in practice. In particular, previous studies implicitly assume that the risk level of the portfolio is stationary through the evaluation period. That is, unconditional measures of performance do not account for the fact that risk and expected returns may vary with the state of the economy. Many of the problems encountered in previous performance studies therefore reflect the inability of traditional measures to handle the dynamic behaviour of returns. As a consequence, Ferson and Schadt (1996) suggest an approach called conditional performance evaluation, which is designed to address this problem. This paper applies such a conditional measure of performance to a sample of 27 UK property funds over the period 1987-1998. The results suggest that once the time-varying nature of the funds' betas is corrected for, by the addition of the market indicators, average fund performance shows an improvement over that obtained from traditional methods of analysis.
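As a rough illustration of the conditional approach, a Ferson and Schadt style regression lets the fund's beta vary linearly with lagged public information variables. The sketch below is a minimal version under assumed inputs (arrays of excess fund and market returns, plus demeaned lagged indicators such as a dividend yield or short rate); it is not the paper's own implementation.

```python
import numpy as np

def conditional_alpha(r_p, r_m, Z):
    """OLS of r_p on [1, r_m, Z * r_m]: the conditional beta is
    beta_t = b0 + B'z_{t-1}, so alpha is net of time-varying risk.
    r_p, r_m: (n,) arrays; Z: (n, k) lagged, demeaned indicators."""
    n = len(r_p)
    X = np.column_stack([np.ones(n), r_m, Z * r_m[:, None]])
    coef, *_ = np.linalg.lstsq(X, r_p, rcond=None)
    return coef[0], coef[1], coef[2:]  # alpha, b0, B
```

A significant vector B is what signals that the unconditional beta was misspecified, which is the mechanism behind the improved average performance reported above.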
Abstract:
Practical applications of portfolio optimisation tend to proceed on a “top-down” basis, where funds are allocated first at asset class level (between, say, bonds, cash, equities and real estate) and then, progressively, at sub-class level (within property, to sectors such as office, retail and industrial). While there are organisational benefits from such an approach, it can potentially lead to sub-optimal allocations when compared to a “global” or “side-by-side” optimisation. This will occur where there are correlations between sub-classes across the asset divide that are masked in aggregation – between, for instance, City offices and the performance of financial services stocks. This paper explores such sub-class linkages using UK monthly stock and property data. Exploratory analysis using clustering procedures and factor analysis suggests that property performance and equity performance are distinctive: there is little persuasive evidence of contemporaneous or lagged sub-class linkages. Formal tests of the equivalence of optimised portfolios using top-down and global approaches failed to demonstrate significant differences, whether or not allocations were constrained. While the results may be a function of the measurement of market returns, it is those returns that are used to assess fund performance. Accordingly, the treatment of real estate as a distinct asset class with diversification potential seems justified.
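To make the top-down versus global distinction concrete, the sketch below contrasts a two-stage minimum-variance allocation with a single optimisation over all sub-classes. The return matrix and group structure are hypothetical placeholders, and the unconstrained minimum-variance rule stands in for whatever objective a fund would actually use.

```python
import numpy as np

def min_var_weights(cov):
    """Unconstrained minimum-variance weights: w proportional to inv(Cov) @ 1."""
    w = np.linalg.inv(cov) @ np.ones(cov.shape[0])
    return w / w.sum()

def top_down(returns, groups):
    """Stage 1: allocate across class-level aggregates; stage 2: within
    each class. groups: list of column-index lists, each with >= 2 members."""
    n = returns.shape[1]
    agg = np.column_stack([returns[:, g].mean(axis=1) for g in groups])
    w_class = min_var_weights(np.cov(agg, rowvar=False))
    w = np.zeros(n)
    for wc, g in zip(w_class, groups):
        w[g] = wc * min_var_weights(np.cov(returns[:, g], rowvar=False))
    return w

def global_opt(returns):
    """Side-by-side optimisation over every sub-class at once."""
    return min_var_weights(np.cov(returns, rowvar=False))
```

The two allocations coincide only when cross-divide sub-class correlations carry no information beyond the class aggregates, which is essentially what the paper's formal tests fail to reject.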
Abstract:
The development of large-scale virtual reality and simulation systems has been driven largely by the DIS and HLA standards community. A number of issues are coming to light about the applicability of these standards, in their present state, to the support of general multi-user VR systems. This paper pinpoints four issues that must be readdressed before large-scale virtual reality systems become accessible to a larger commercial and public domain: a reduction in the effects of network delays; scalable causal event delivery; update control; and scalable reliable communication. Each of these issues is tackled through a common theme of combining wall-clock and causal time-related entity behaviour, knowledge of network delays and prediction of entity behaviour, which together overcome many of the effects of network delays.
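For the flavour of how prediction masks network delay, the sketch below implements plain first-order dead reckoning in the DIS style, assuming illustrative state fields and an arbitrary error threshold; the paper's scheme combining wall-clock and causal time is richer than this.

```python
import numpy as np

def extrapolate(pos, vel, dt):
    """Predict an entity's position dt seconds after its last update,
    so remote hosts can render it despite in-flight network delay."""
    return pos + vel * dt

def needs_update(true_pos, pos, vel, dt, threshold=0.5):
    """Owner sends a fresh state update only when the shared prediction's
    error exceeds the agreed threshold, reducing update traffic."""
    return np.linalg.norm(true_pos - extrapolate(pos, vel, dt)) > threshold
```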
Abstract:
Atmospheric aerosol acts both to reduce the background concentration of natural cluster ions and to attenuate optical propagation. Hence, the presence of aerosol has two consequences: a reduction in the air's electrical conductivity and a reduction in visual range. Ion-aerosol theory and Koschmieder's visibility theory are combined here to derive the related non-linear variation of the atmospheric electric potential gradient with visual range. A substantial sensitivity is found under poor visual range conditions, but for good visual range conditions the sensitivity diminishes and local aerosol has little influence on the fair weather potential gradient. This allows visual range measurements, made simply and routinely at many meteorological sites, to provide inference about the local air's electrical properties.
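The two ingredients can be written down compactly. A hedged sketch in standard textbook forms, not the paper's own notation: Koschmieder's relation converts an aerosol extinction coefficient into visual range, while the steady-state ion balance and Ohm's law for the fair-weather circuit link aerosol concentration to the potential gradient.

```latex
% Koschmieder: visual range V for a 2% contrast threshold,
% with aerosol extinction coefficient \sigma_{ext}
V = \frac{\ln(1/0.02)}{\sigma_{\mathrm{ext}}} \approx \frac{3.912}{\sigma_{\mathrm{ext}}}
% Steady-state ion balance: production rate q, ion-ion recombination
% coefficient \alpha, ion-aerosol attachment \beta to concentration Z
q = \alpha n^{2} + \beta n Z
% Conductivity from bipolar ion concentration n (elementary charge e,
% mobility \mu), and fair-weather potential gradient from the
% conduction current density J_c
\sigma = 2 n e \mu , \qquad \mathrm{PG} = \frac{J_c}{\sigma}
```

Since both the extinction coefficient and the attachment sink grow with aerosol loading, eliminating the aerosol terms between these relations yields a non-linear dependence of PG on V of the kind the abstract describes.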
Abstract:
The issue of whether Real Estate Investment Trusts (REITs) should pursue a focused or diversified investment strategy remains an ongoing debate within both the academic and industry communities. This article considers the relationship between REITs focused on different property sectors in a Generalized Autoregressive Conditional Heteroscedasticity-Dynamic Conditional Correlation (GARCH-DCC) framework. The daily conditional correlations reveal that since 1990 there has been a marked upward trend in the coefficients between US REIT sub-sectors. The findings imply that REITs are behaving in a far more homogeneous manner than in the past. Furthermore, the argument that REITs should be focused, in order that investors can make the diversification decision themselves, is weakened.
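For orientation, the DCC layer of such a model updates a quasi-correlation matrix recursively from GARCH-standardised residuals. The sketch below is a minimal version of that recursion (Engle's DCC form); the parameters and inputs are placeholders, not estimates from the article.

```python
import numpy as np

def dcc_correlations(eps, a=0.02, b=0.95):
    """eps: (T, k) GARCH-standardised residuals. Returns (T, k, k)
    conditional correlation matrices R_t via
    Q_t = (1-a-b)*Q_bar + a*e_{t-1}e_{t-1}' + b*Q_{t-1}."""
    T, k = eps.shape
    Q_bar = (eps.T @ eps) / T          # unconditional correlation target
    Q = Q_bar.copy()
    R = np.empty((T, k, k))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)      # rescale Q_t to a correlation matrix
        e = eps[t]
        Q = (1 - a - b) * Q_bar + a * np.outer(e, e) + b * Q
    return R
```

A rising trend in the off-diagonal entries of R_t across sub-sector pairs is the pattern behind the homogeneity finding reported above.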
Abstract:
The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided, and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated by using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of the models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
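The two published reference models named above are simple enough to sketch. Below is a minimal version, assuming hypothetical arrays of concurrent site and reference wind speeds plus the long-term reference record; it illustrates why the variance-ratio method exists at all.

```python
import numpy as np

def mcp_linear(site, ref, ref_longterm):
    """Simple linear regression MCP: fit site = a*ref + b on concurrent
    records, then apply the fit to the historic reference series."""
    a, b = np.polyfit(ref, site, 1)
    return a * ref_longterm + b

def mcp_variance_ratio(site, ref, ref_longterm):
    """Variance-ratio MCP: match mean and standard deviation. Unlike
    regression, this preserves the spread of site wind speeds, which
    least squares systematically shrinks."""
    return site.mean() + (site.std() / ref.std()) * (ref_longterm - ref.mean())
```

The paper's kernel methods go further by modelling the full conditional density of site speed given reference speed, rather than a single line through it.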
Abstract:
In a recent paper, Mason et al. propose a reliability test of ensemble forecasts for a continuous, scalar verification. As noted in the paper, the test relies on a very specific interpretation of ensembles, namely, that the ensemble members represent quantiles of some underlying distribution. This quantile interpretation is not the only interpretation of ensembles, another popular one being the Monte Carlo interpretation. Mason et al. suggest estimating the quantiles in this situation; however, this approach is fundamentally flawed. Errors in the quantile estimates are not independent of the exceedance events, and consequently the conditional exceedance probabilities (CEP) curves are not constant, which is a fundamental assumption of the test. The test would reject reliable forecasts with probability much higher than the test size.
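A toy setup makes the objection concrete: even when ensembles and observations come from the same distribution, sorted Monte Carlo members used as quantile estimates are noisy, and those estimation errors are correlated with the exceedance events. The sketch below reproduces only the estimation step, with illustrative Gaussian data.

```python
import numpy as np

rng = np.random.default_rng(0)
T, m = 10_000, 9
ensembles = rng.normal(size=(T, m))   # reliable Monte Carlo ensembles
obs = rng.normal(size=T)              # verifications from the same law
members = np.sort(ensembles, axis=1)  # sorted members as "quantile" estimates
# Empirical exceedance probability for each sorted member position k
cep = (obs[:, None] > members).mean(axis=0)
print(cep)  # averages near (m - k)/(m + 1), but each estimate is noisy
```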
Abstract:
Background. The anaerobic spirochaete Brachyspira pilosicoli causes enteric disease in avian, porcine and human hosts, amongst others. To date, the only available genome sequence of B. pilosicoli is that of strain 95/1000, a porcine isolate. In the first intra-species genome comparison within the Brachyspira genus, we report the whole genome sequence of B. pilosicoli B2904, an avian isolate, and the incomplete genome sequence of B. pilosicoli WesB, a human isolate, together with comparisons with B. pilosicoli 95/1000. We also draw on incomplete genome sequences from three other Brachyspira species. Finally, we report the first application of the high-throughput Biolog phenotype screening tool on the B. pilosicoli strains for detailed comparisons between genotype and phenotype. Results. Feature and sequence genome comparisons revealed a high degree of similarity between the three B. pilosicoli strains, although the genomes of B2904 and WesB were larger than that of 95/1000 (approximately 2.765, 2.890 and 2.596 Mb, respectively). Genome rearrangements were observed which correlated largely with the positions of mobile genetic elements. Through comparison of the B2904 and WesB genomes with the 95/1000 genome, features that we propose are non-essential due to their absence from 95/1000 include a peptidase, glycine reductase complex components and transposases. Novel bacteriophages were detected in the newly sequenced genomes, which appear to be involved in intra- and inter-species horizontal gene transfer. Phenotypic differences predicted from genome analysis, such as the lack of genes for glucuronate catabolism in 95/1000, were confirmed by phenotyping. Conclusions. The availability of multiple B. pilosicoli genome sequences has allowed us to demonstrate the substantial genomic variation that exists between these strains, and provides an insight into the genetic events that are shaping the species. In addition, phenotype screening allowed determination of how genotypic differences translate to phenotype. Further application of such comparisons will improve understanding of the metabolic capabilities of Brachyspira species.
Abstract:
Firms' conjectures are consistent if their ex post behavior rationalizes their ex ante beliefs. Admissible conjectures are those that satisfy the necessary conditions for consistency. The competitive conjecture is inadmissible unless aggregate output is stationary. Relaxing this restriction, admissibility eliminates Cournot behavior and constrains conduct to be collusive.
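For reference, the standard conjectural-variations setup behind this and the two abstracts that follow can be sketched as below (textbook notation, not necessarily the authors'): each of n identical firms chooses output with a conjecture about rivals' response, and consistency requires that conjecture to match rivals' actual optimal reaction.

```latex
% Firm i's problem, with conjectural variation \nu = d q_{-i} / d q_i
\max_{q_i}\; \pi_i = P(Q)\, q_i - C(q_i), \qquad Q = q_i + q_{-i}
% First-order condition: dQ/dq_i = 1 + \nu
P(Q) + q_i\,(1 + \nu)\, P'(Q) = C'(q_i)
% Benchmark conjectures: \nu = -1 (competitive), \nu = 0 (Cournot),
% \nu = n - 1 (collusive). Consistency requires \nu to equal the slope
% of rivals' actual best response: \nu = (n-1)\,\partial q_j^{*}/\partial q_i
```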
Abstract:
Consistent conjectures are derived in an oligopoly model with homogeneous products and identical firms. The exercise uncovers two important findings. Absent entry, the monopolistic conjecture is the unique consistent conjecture. With endogenous entry, no consistent conjecture exists. These results provide foundations for deriving consistent comparative statics for the food industries.
Abstract:
This paper demonstrates that, in situations in which a cumulative externality exists, the basic nature and extent of resource misallocation may be substantially less than we imagine. This conclusion stems from deriving consistent conjectures in a unified framework in which congestion is present. Experiments support the conclusion that, when the number of agents is small, when there is little heterogeneity among them, and when they have the opportunity to observe each other during repeated experiments, the market allocation may be efficient.