12 results for Variance Ratio Tests

in CentAUR: Central Archive University of Reading - UK


Relevance:

90.00%

Abstract:

Phylogenetic comparative methods are increasingly used to give new insights into the dynamics of trait evolution in deep time. For continuous traits the core of these methods is a suite of models that attempt to capture evolutionary patterns by extending the Brownian constant-variance model. However, the properties of these models are often poorly understood, which can lead to the misinterpretation of results. Here we focus on one of these models, the Ornstein-Uhlenbeck (OU) model. We show that the OU model is frequently, and incorrectly, favoured over simpler models when using likelihood ratio tests, and that many studies fitting this model use datasets that are small and prone to this problem. We also show that very small amounts of error in datasets can have profound effects on the inferences derived from OU models. Our results suggest that simulating fitted models and comparing them with empirical results is critical when fitting OU and other extensions of the Brownian model. We conclude by making recommendations for best practice in fitting OU models in phylogenetic comparative analyses, and for interpreting the parameters of the OU model.
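
As a rough illustration of how a likelihood ratio test compares OU against BM, the sketch below fits both models to a trait trajectory simulated under pure Brownian motion; the discrete-time AR(1) reduction and all parameter values are illustrative assumptions, not the phylogenetic machinery used in the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate a trait trajectory under pure Brownian motion (the null model).
n, sigma = 50, 1.0
x = np.cumsum(rng.normal(0.0, sigma, n))

# BM log-likelihood: increments are iid N(0, s2); s2_bm is its MLE.
dx = np.diff(x)
s2_bm = np.mean(dx ** 2)
ll_bm = stats.norm.logpdf(dx, 0.0, np.sqrt(s2_bm)).sum()

# Discrete-time OU analogue: dx = alpha * (mu - x) + noise.  Profiling
# out alpha and mu reduces to an OLS regression of dx on x.
X = np.column_stack([np.ones(n - 1), x[:-1]])
beta, *_ = np.linalg.lstsq(X, dx, rcond=None)
resid = dx - X @ beta
s2_ou = np.mean(resid ** 2)
ll_ou = stats.norm.logpdf(resid, 0.0, np.sqrt(s2_ou)).sum()

# Likelihood ratio statistic: OU adds two parameters, so 2 df.
lr = 2.0 * (ll_ou - ll_bm)
p_value = stats.chi2.sf(lr, df=2)
print(f"LR = {lr:.2f}, p = {p_value:.3f}")
```

Because OU nests BM, the statistic is always non-negative; the paper's point is that on small datasets this test rejects BM more often than the nominal level suggests.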

Relevance:

80.00%

Abstract:

In recent years, a sharp divergence of London Stock Exchange equity prices from dividends has been noted. In this paper, we examine whether this divergence can be explained by the existence of a speculative bubble. Three different empirical methodologies are used: variance bounds tests, bubble specification tests, and cointegration tests based on both ex post and ex ante data. We find that stock prices diverged significantly from their fundamental values during the late 1990s, and that this divergence has all the characteristics of a bubble.
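
A minimal Shiller-style variance bounds check might look as follows; the dividend process, discount rate, and terminal-value treatment are all assumptions for illustration, not the paper's data or exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
T, r = 200, 0.05
discount = 1.0 / (1.0 + r)

# Synthetic dividend stream: stationary AR(1) around a mean of 1.
d = np.empty(T)
d[0] = 1.0
for t in range(1, T):
    d[t] = 1.0 + 0.5 * (d[t - 1] - 1.0) + rng.normal(0.0, 0.1)

# Ex-post rational ("fundamental") price: discounted realised dividends,
# rolled back from a terminal value of mean dividend over r.
p_star = np.empty(T)
p_star[-1] = d.mean() / r
for t in range(T - 2, -1, -1):
    p_star[t] = discount * (d[t + 1] + p_star[t + 1])

# Under the no-bubble null, var(p) <= var(p_star); a ratio well above
# one signals excess volatility.  Actual prices are mocked up here as
# fundamentals plus noise purely to exercise the test.
p_actual = p_star + rng.normal(0.0, 2.0, T)
var_ratio = np.var(p_actual) / np.var(p_star)
print(f"variance ratio: {var_ratio:.1f}")
```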

Relevance:

80.00%

Abstract:

Gardner's popular model of perfect competition in the marketing sector is extended to a conjectural-variations oligopoly with endogenous entry. By revising Gardner's comparative statics on the "farm-retail price ratio", tests of hypotheses about food industry conduct are derived. Using data from a recent article by Wohlgenant, which employs Gardner's framework, tests are made of the validity of his maintained hypothesis that the food industries are perfectly competitive. No evidence is found of departures from competition in the output markets of the food industries of eight commodity groups: (a) beef and veal, (b) pork, (c) poultry, (d) eggs, (e) dairy, (f) processed fruits and vegetables, (g) fresh fruit, and (h) fresh vegetables.

Relevance:

80.00%

Abstract:

The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated by using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of all models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
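
The variance ratio method mentioned among the reference models can be sketched in a few lines; the synthetic Weibull series and the 0.8 site-to-reference relation below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Concurrent wind-speed records at the reference and prospective sites
# (synthetic stand-ins; Weibull shape 2, scale 8 m/s at the reference).
ref = rng.weibull(2.0, 1000) * 8.0
site = np.clip(0.8 * ref + rng.normal(0.0, 1.0, 1000), 0.0, None)

# Variance ratio method: choose slope and intercept so the predicted
# series reproduces the site mean and standard deviation (rather than
# minimising squared error, as ordinary least squares would).
slope = site.std() / ref.std()
intercept = site.mean() - slope * ref.mean()

def predict(v_ref):
    return intercept + slope * v_ref

# Long-term prediction from (here, synthetic) historic reference data.
long_term = predict(rng.weibull(2.0, 5000) * 8.0)
print(f"predicted long-term mean: {long_term.mean():.2f} m/s")
```

By construction the predicted concurrent series matches the site mean and standard deviation exactly, which is what distinguishes this method from a simple linear regression.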

Relevance:

80.00%

Abstract:

Speculative bubbles are generated when investors include the expectation of the future price in their information set. Under these conditions, the actual market price of the security, which is set according to demand and supply, will be a function of the future price and vice versa. In the presence of speculative bubbles, positive expected bubble returns will lead to increased demand and will thus force prices to diverge from their fundamental value. This paper investigates whether the prices of UK equity-traded property stocks over the past 15 years contain evidence of a speculative bubble. The analysis draws upon the methodologies adopted in various studies examining price bubbles in the general stock market. Fundamental values are generated using two models: the dividend discount model and the Gordon growth model. Variance bounds tests are then applied to test for bubbles in UK property asset prices. Finally, cointegration analysis is conducted to provide further evidence on the presence of bubbles. Evidence of the existence of bubbles is found, although these appear to be transitory and concentrated in the mid-to-late 1990s.
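
The Gordon growth model used to generate fundamental values reduces to a one-line formula; the dividend, return, and growth figures below are hypothetical:

```python
# Gordon growth model: fundamental value of a share paying dividend D,
# growing at constant rate g, discounted at required return r (r > g):
#   P = D * (1 + g) / (r - g)
def gordon_growth_value(dividend: float, r: float, g: float) -> float:
    if r <= g:
        raise ValueError("required return must exceed the growth rate")
    return dividend * (1 + g) / (r - g)

# A bubble test compares the actual price with this fundamental value.
fundamental = gordon_growth_value(dividend=2.0, r=0.08, g=0.03)
print(fundamental)  # 2 * 1.03 / 0.05 = 41.2
```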

Relevance:

80.00%

Abstract:

Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both describe the data better than models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
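
A two-state Markov switching process of the kind fitted here can be simulated in a few lines; the transition matrix and regime parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-state Markov switching series: each regime has its own mean and
# variance; the state follows a first-order Markov chain.
T = 500
P = np.array([[0.95, 0.05],    # transition matrix (rows sum to 1)
              [0.10, 0.90]])
mu = np.array([0.01, -0.02])   # regime means
sd = np.array([0.01, 0.05])    # regime standard deviations

state = 0
states = np.empty(T, dtype=int)
returns = np.empty(T)
for t in range(T):
    states[t] = state
    returns[t] = rng.normal(mu[state], sd[state])
    state = rng.choice(2, p=P[state])

# The mixture of regimes produces the fat tails and volatility
# clustering that a single-state constant-variance model cannot capture.
print(f"overall sd: {returns.std():.4f}, time in state 1: {states.mean():.2f}")
```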

Relevance:

30.00%

Abstract:

This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied here successfully, which also allows us to identify the sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. It is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators of Zelterman and Chao allows the suggestion of a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources also allows a new understanding of how resampling techniques such as the bootstrap could be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested to use relative measures, such as the observed-to-hidden ratio or the completeness-of-identification proportion, when approaching the question of sample size choice.
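
For concreteness, Chao's estimator and its commonly quoted asymptotic variance can be written as follows; the variance formula and the frequency counts are assumptions of this sketch, not taken from the note itself:

```python
# Chao's lower-bound estimator of population size from capture
# frequencies:  N_hat = n + f1^2 / (2 * f2), where f1 and f2 count the
# units captured exactly once and exactly twice, and n is the number of
# distinct units observed.  The variance below is the commonly quoted
# asymptotic formula (an assumption here).
def chao_estimate(f1: int, f2: int, n: int):
    n_hat = n + f1 ** 2 / (2 * f2)
    r = f1 / f2
    var = f2 * (r ** 4 / 4 + r ** 3 + r ** 2 / 2)
    return n_hat, var

n_hat, var = chao_estimate(f1=30, f2=10, n=100)
print(n_hat, var)  # 100 + 900/20 = 145.0
```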

Relevance:

30.00%

Abstract:

We introduce a procedure for association-based analysis of nuclear families that allows for dichotomous and more general measurements of phenotype and the inclusion of covariate information. Standard generalized linear models are used to relate the phenotype to its predictors. Our test procedure, based on the likelihood ratio, unifies the estimation of all parameters through the likelihood itself and yields maximum likelihood estimates of the genetic relative risk and interaction parameters. Our method has advantages over recently proposed conditional score tests, which include covariate information via a two-stage modelling approach, in modelling the covariate and gene-covariate interaction terms. We apply our method in a study of human systemic lupus erythematosus and the C-reactive protein that includes sex as a covariate.
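
A likelihood ratio test for a genetic effect within a standard GLM can be sketched as below; the logistic model, the simulated genotype/sex data, and the direct optimisation are illustrative assumptions rather than the paper's family-based procedure:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(4)

# Simulated data: binary phenotype, a genotype indicator, and sex.
n = 400
geno = rng.integers(0, 2, n)
sex = rng.integers(0, 2, n)
eta_true = -0.5 + 0.8 * geno + 0.3 * sex
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta_true)))

def neg_loglik(beta, X, y):
    """Negative log-likelihood of a logistic regression."""
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

def max_loglik(X, y):
    res = minimize(neg_loglik, np.zeros(X.shape[1]), args=(X, y))
    return -res.fun

ones = np.ones(n)
ll_full = max_loglik(np.column_stack([ones, geno, sex]), y)
ll_null = max_loglik(np.column_stack([ones, sex]), y)  # drop the gene

lr = 2.0 * (ll_full - ll_null)        # chi-square with 1 df under H0
p_value = chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, p = {p_value:.4f}")
```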

Relevance:

30.00%

Abstract:

This paper considers methods for testing for superiority or non-inferiority in active-control trials with binary data, when the relative treatment effect is expressed as an odds ratio. Three asymptotic tests for the log-odds ratio based on the unconditional binary likelihood are presented, namely the likelihood ratio, Wald, and score tests. All three tests can be implemented straightforwardly in standard statistical software packages, as can the corresponding confidence intervals. Simulations indicate that the three alternatives are similar in terms of Type I error, with values close to the nominal level. However, when the non-inferiority margin becomes large, the score test slightly exceeds the nominal level. In general, the highest power is obtained from the score test, although all three tests are similar and the observed differences in power are not of practical importance. Copyright (C) 2007 John Wiley & Sons, Ltd.
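
Of the three tests, the Wald test is the easiest to write down from a 2x2 table; the counts and the zero null below are hypothetical (for non-inferiority the null log-odds ratio would be shifted by the margin):

```python
import math

# Wald test for the log-odds ratio from a 2x2 table:
#   a, b = treatment successes/failures; c, d = control successes/failures.
def wald_log_odds_ratio(a, b, c, d, null_log_or=0.0):
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # asymptotic standard error
    z = (log_or - null_log_or) / se        # compare with N(0, 1)
    return log_or, z

log_or, z = wald_log_odds_ratio(a=40, b=60, c=25, d=75)
print(f"log-OR = {log_or:.3f}, z = {z:.2f}")
```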

Relevance:

30.00%

Abstract:

A greedy technique is proposed to construct parsimonious kernel classifiers using the orthogonal forward selection method and boosting, with the Fisher ratio as the class separability measure. Unlike most kernel classification methods, which restrict kernel means to the training input data and use a fixed common variance for all the kernel terms, the proposed technique can tune both the mean vector and the diagonal covariance matrix of each individual kernel by incrementally maximizing the Fisher ratio. An efficient weighted optimization method is developed, based on boosting, to append kernels one by one in an orthogonal forward selection procedure. Experimental results obtained using this construction technique demonstrate that it offers a viable alternative to existing state-of-the-art kernel modeling methods for constructing sparse Gaussian radial basis function network classifiers that generalize well.
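
The Fisher ratio used as the separability measure is simple to compute for a single kernel output; the Gaussian score samples below are invented for illustration:

```python
import numpy as np

# Fisher ratio for one feature (e.g. one kernel's output): squared
# difference of the class means over the sum of the class variances.
def fisher_ratio(scores_a, scores_b):
    num = (scores_a.mean() - scores_b.mean()) ** 2
    return num / (scores_a.var() + scores_b.var())

rng = np.random.default_rng(3)
separated = fisher_ratio(rng.normal(0.0, 1.0, 500), rng.normal(4.0, 1.0, 500))
overlapping = fisher_ratio(rng.normal(0.0, 1.0, 500), rng.normal(0.2, 1.0, 500))
print(f"separated: {separated:.2f}, overlapping: {overlapping:.2f}")
```

A kernel whose outputs separate the classes well scores a high ratio, which is why appending kernels by incrementally maximizing it yields a sparse, discriminative model.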

Relevance:

30.00%

Abstract:

The ratio bias, according to which individuals prefer to bet on probabilities expressed as a ratio of large numbers over normatively equivalent or superior probabilities expressed as a ratio of small numbers, has recently gained momentum, with researchers, especially in health economics, emphasizing the policy importance of the phenomenon. Although the bias has been replicated several times, some doubts remain about its economic significance. Our two experiments show that the bias disappears once order effects are excluded and salient, dominant incentives are provided. This holds true for both choice and valuation tasks. Adding context to the decision problem does not alter this finding. No ratio bias could be found in between-subject tests either, which leads us to conclude that the policy relevance of the phenomenon is doubtful at best.