974 results for variance-ratio tests


Relevance: 30.00%

Abstract:

The operational sex ratio has long been considered an important constraint on the structure of mating systems. The effects of an experimentally manipulated sex ratio on mating behavior and selection were investigated in a polygynous species, Gryllus pennsylvanicus, in which the potential exists for spatial and temporal fluctuations in the sex ratio of field populations. Four sex ratios (males:females of 5:0, 5:2, 5:5 and 5:10) were investigated. Observations were conducted in late summer over two field seasons, from 2400 h to 1000 h EST. Several male characters thought to be associated with male reproductive success were studied: calling duration, searching distance, weight, fighting behavior, courtship frequency, and mating success. Variance in male mating success was used as the indicator of the opportunity for sexual selection. Total selection was estimated as the univariate regression coefficient between relative fitness and the character of interest, while direct selection was estimated from the standardized partial regression coefficients of a multiple regression of relative fitness on each character. The opportunity for sexual selection was highest at 5:2 and lowest at 5:10. The frequency of fighting behavior was highest at 5:2 and 5:5. Fighting ability (% wins) was found to be an important correlate of male body weight. Direct selection for increased male body weight was detected at 5:2, total selection for body weight was seen at 5:5, and no selection on male body weight was detected at 5:10. Calling duration decreased as the sex ratio became more female-biased. Total and direct selection for increased calling were detected at 5:2, only total selection for calling was seen at 5:5, and direct selection against calling was detected at 5:10. Searching distance also decreased as the sex ratio became more female-biased; however, no form of selection was detected for searching at any of the sex ratios. The data are discussed in terms of sexual selection on male reproductive tactics, the mating system, and the maintenance of genetic variation in male reproductive behavior.
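
For readers unfamiliar with the selection estimates used here, the following minimal Python sketch (with entirely hypothetical data, not the study's) illustrates the two regressions: total selection as the simple regression slope of relative fitness on each trait, and direct selection as the partial slopes from a joint multiple regression.

```python
# Sketch of the selection analysis described above (hypothetical data):
# total selection = univariate regression of relative fitness on a trait,
# direct selection = standardized partial regression coefficients from a
# multiple regression of relative fitness on all traits jointly.
import numpy as np

rng = np.random.default_rng(0)
n = 50
traits = rng.normal(size=(n, 3))            # e.g. calling, searching, weight (standardized)
mating_success = rng.poisson(2, size=n)     # hypothetical mating counts
w = mating_success / mating_success.mean()  # relative fitness

# Total selection: simple regression slope for each trait separately.
total = [np.polyfit(traits[:, j], w, 1)[0] for j in range(traits.shape[1])]

# Direct selection: partial slopes from a multiple regression (least squares).
X = np.column_stack([np.ones(n), traits])
beta = np.linalg.lstsq(X, w, rcond=None)[0]
direct = beta[1:]                           # gradients on the standardized traits

print("total selection differentials:", np.round(total, 3))
print("direct selection gradients:  ", np.round(direct, 3))
```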

Relevance: 30.00%

Abstract:

In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria, including Wilks' likelihood ratio (LR) criterion as well as the trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. The implications for inference are twofold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied to all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the LR statistic is exploited to derive general nuisance-parameter-free bounds on its distribution for arbitrary hypotheses. Even though these bounds may be difficult to compute analytically, they can easily be simulated, yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap), which may be applied when the bounds are not conclusive.
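
The Monte Carlo test technique invoked here has a simple mechanical core: when a statistic is pivotal under the null, comparing it with N simulated replications yields an exact p-value. A minimal Python sketch, with a placeholder statistic and null simulator (both hypothetical):

```python
# Minimal sketch of a Monte Carlo test for a pivotal statistic: with N
# simulated null replications, p = (1 + #{S_i >= S_0}) / (N + 1) is an
# exact p-value at level alpha whenever (N + 1) * alpha is an integer.
import numpy as np

def monte_carlo_pvalue(stat, simulate_null, data, n_rep=99, seed=0):
    rng = np.random.default_rng(seed)
    s0 = stat(data)
    s_null = np.array([stat(simulate_null(rng)) for _ in range(n_rep)])
    return (1 + np.sum(s_null >= s0)) / (n_rep + 1)

# Toy example: testing H0: variance = 1 with the sample variance as statistic.
stat = lambda x: x.var(ddof=1)
simulate_null = lambda rng: rng.normal(size=30)
data = np.random.default_rng(1).normal(scale=1.5, size=30)
print("MC p-value:", monte_carlo_pvalue(stat, simulate_null, data))
```

With N = 99 replications, a 5% test is exact because (N + 1) x 0.05 is an integer; this is the same mechanism the paper applies to the MLR criteria.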

Relevance: 30.00%

Abstract:

This paper proposes finite-sample procedures for testing the SURE specification in multi-equation regression models, i.e., whether the disturbances in different equations are contemporaneously uncorrelated or not. We apply the technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] to obtain exact tests based on standard LR and LM zero-correlation tests. We also suggest an MC quasi-LR (QLR) test based on feasible generalized least squares (FGLS). We show that these statistics are pivotal under the null, which justifies applying MC tests. Furthermore, we extend the exact independence test proposed by Harvey and Phillips (1982) to the multi-equation framework. Specifically, we introduce several induced tests based on a set of simultaneous Harvey/Phillips-type tests and suggest a simulation-based solution to the associated combination problem. The properties of the proposed tests are studied in a Monte Carlo experiment, which shows that standard asymptotic tests exhibit substantial size distortions, while MC tests achieve complete size control and display good power. Moreover, the MC-QLR tests perform best in terms of power, a result of interest from the point of view of simulation-based tests. The power of the MC induced tests improves appreciably over standard Bonferroni tests and, in certain cases, outperforms the likelihood-based MC tests. The tests are applied to data used by Fischer (1993) to analyze the macroeconomic determinants of growth.
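
As an illustration of the kind of zero-correlation statistic to which the MC technique can be applied, here is a sketch of the standard Breusch-Pagan LM statistic for contemporaneous correlation; the residual matrix is hypothetical, and the MC step would recompute the statistic on residuals simulated under the null:

```python
# Sketch of the Breusch-Pagan LM statistic for zero contemporaneous
# correlation in an M-equation system: LM = T * sum_{i>j} r_ij^2, where
# r_ij are the cross-equation correlations of the OLS residuals.
import numpy as np

def lm_zero_correlation(residuals):
    """residuals: (T, M) array of OLS residuals, one column per equation."""
    T, M = residuals.shape
    R = np.corrcoef(residuals, rowvar=False)
    lm = T * np.sum(R[np.triu_indices(M, k=1)] ** 2)
    df = M * (M - 1) // 2   # asymptotically chi-square with M(M-1)/2 df
    return lm, df

rng = np.random.default_rng(0)
u = rng.normal(size=(100, 3))   # hypothetical residuals under the null
lm, df = lm_zero_correlation(u)
print(f"LM = {lm:.2f} on {df} df")
```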

Relevance: 30.00%

Abstract:

A wide range of tests for heteroskedasticity has been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are generally based on asymptotic approximations that may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves otherwise intractable null distribution problems, in particular those raised by the sup-type and combined test statistics and (where relevant) by nuisance parameters that are unidentified under the null hypothesis. The method proposed works in exactly the same way with Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; and (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
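
To make the MC idea concrete for one of the listed criteria, the sketch below applies it to a Goldfeld-Quandt-type statistic; the design, split point and data are hypothetical, and the simulation exploits the fact that, under i.i.d. Gaussian errors, the ratio of subsample residual sums of squares does not depend on the true regression coefficients:

```python
# Hedged sketch of a Goldfeld-Quandt-type Monte Carlo homoskedasticity test:
# statistic = ratio of residual sums of squares from two subsamples; its
# exact null distribution is simulated with pure Gaussian noise, since OLS
# residuals are invariant to the true coefficients given the design.
import numpy as np

def gq_stat(y, X, split):
    def rss(yy, XX):
        b = np.linalg.lstsq(XX, yy, rcond=None)[0]
        e = yy - XX @ b
        return e @ e
    return rss(y[split:], X[split:]) / rss(y[:split], X[:split])

rng = np.random.default_rng(0)
n, split = 60, 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])
e = rng.normal(size=n) * np.linspace(0.5, 2.0, n)   # variance grows over the sample
y = X @ np.array([1.0, 0.5]) + e

s0 = gq_stat(y, X, split)
null = [gq_stat(rng.normal(size=n), X, split) for _ in range(99)]
p = (1 + sum(s >= s0 for s in null)) / 100
print(f"GQ ratio = {s0:.2f}, MC p-value = {p:.2f}")
```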

Relevance: 30.00%

Abstract:

This paper presents several exact results on the second moments of sample autocorrelations, for Gaussian and non-Gaussian series. We first give general formulas for the mean, variance and covariances of the sample autocorrelations in the case where the variables of the series are exchangeable. From these, we derive bounds on the variances and covariances of the sample autocorrelations. These bounds are used to obtain exact bounds on critical points when testing the randomness of a time series, without any assumption on the form of the underlying distribution. We give exact and explicit formulas for the variances and covariances of the autocorrelations in the case where the series is Gaussian white noise. We show that these results also hold when the distribution of the series is spherically symmetric. We present simulation results which clearly indicate that the distribution of the sample autocorrelations is much better approximated by standardizing them with their exact mean and variance and using the asymptotic N(0,1) law than by employing the approximate second moments commonly in use. We also study the exact variances and covariances of autocorrelations based on the ranks of the observations.
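
The closed-form moments are derived in the paper itself; the following sketch merely illustrates the point by approximating the exact mean and standard deviation of a lag-k sample autocorrelation by brute-force simulation and comparing the latter with the usual 1/sqrt(n) approximation:

```python
# Sketch contrasting the crude Var(r_k) ~ 1/n approximation with the exact
# moments of the lag-k sample autocorrelation under Gaussian white noise;
# here the exact moments are approximated by Monte Carlo for illustration.
import numpy as np

def autocorr(x, k):
    xc = x - x.mean()
    return (xc[:-k] @ xc[k:]) / (xc @ xc)

rng = np.random.default_rng(0)
n, k = 25, 1
sims = np.array([autocorr(rng.normal(size=n), k) for _ in range(20000)])
print(f"mean {sims.mean():+.4f} vs 0, "
      f"sd {sims.std():.4f} vs 1/sqrt(n) = {n ** -0.5:.4f}")
# Standardizing r_k with the exact mean and sd before appealing to N(0,1)
# gives a noticeably better finite-sample approximation in small samples.
```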

Relevance: 30.00%

Abstract:

In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, owing to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach that allows for alternative, possibly asymmetric, heavy-tailed distributions without resorting to large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, and exact confidence sets for the unknown tail area and asymmetry parameters of the stable error distribution are derived from them. Tests for the efficiency of the market portfolio (zero intercepts) that explicitly allow for the presence of unknown nuisance parameters in the stable error distribution are also derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (five-year subperiods). We find that possibly skewed stable distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
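
A heavily simplified sketch of the testing logic, with the stable shape parameters fixed rather than handled through confidence sets as in the paper, and with hypothetical returns: a scale-invariant intercept statistic is pivotal given the design and the shape parameters, so its null distribution can be simulated; scipy's levy_stable supplies the error draws.

```python
# Hedged sketch of a Monte Carlo efficiency test with alpha-stable errors:
# regress each portfolio's excess return on the market, form a scale-invariant
# joint statistic on the intercepts, and simulate it under the null.
import numpy as np
from scipy.stats import levy_stable

def intercept_stat(R, rm):
    T = len(rm)
    X = np.column_stack([np.ones(T), rm])
    B, *_ = np.linalg.lstsq(X, R, rcond=None)   # 2 x M coefficients
    E = R - X @ B
    s = np.mean(np.abs(E), axis=0)              # robust per-equation scale
    return np.sum((B[0] / s) ** 2)              # scale-invariant statistic

rng = np.random.default_rng(0)
T, M, alpha, beta = 120, 5, 1.8, 0.0            # shape parameters assumed known here
rm = rng.normal(0.005, 0.04, size=T)            # hypothetical market factor
R = 0.9 * rm[:, None] + levy_stable.rvs(alpha, beta, scale=0.02,
                                        size=(T, M), random_state=0)

s0 = intercept_stat(R, rm)
null = [intercept_stat(levy_stable.rvs(alpha, beta, size=(T, M),
                                       random_state=i + 1), rm)
        for i in range(99)]
p = (1 + sum(s >= s0 for s in null)) / 100
print(f"MC p-value for zero intercepts: {p:.2f}")
```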

Relevance: 30.00%

Abstract:

In multiple-hypothesis settings, one generally seeks to reject all of the hypotheses or at least one of them. More recently, the need has emerged to answer the question: "Can we reject at least r hypotheses?" However, statistical tools for addressing this question are scarce in the literature. We therefore derive general power formulas for the most widely used procedures, namely those of Bonferroni, Hochberg and Holm. We have developed an R package for sample-size calculation in tests with multiple endpoints, where one wants at least r of the m hypotheses to be significant. We restrict ourselves to the case where all variables are continuous and present four different situations that depend on the structure of the variance-covariance matrix of the data.
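
A simulation sketch of the quantity the power formulas target: the probability of rejecting at least r of m hypotheses under Bonferroni and Holm, for equicorrelated normal test statistics (the effect sizes, correlation and level below are hypothetical; the paper derives analytical formulas instead):

```python
# Monte Carlo estimate of P(reject at least r of m hypotheses) under the
# Bonferroni and Holm procedures, for correlated normal test statistics.
import numpy as np
from scipy.stats import norm

def power_at_least_r(delta, rho, r, n_sim=20000, level=0.05, seed=0):
    rng = np.random.default_rng(seed)
    m = len(delta)
    cov = np.full((m, m), rho) + (1 - rho) * np.eye(m)
    Z = rng.multivariate_normal(delta, cov, size=n_sim)
    p = 2 * norm.sf(np.abs(Z))                      # two-sided p-values
    bonf = (p <= level / m).sum(axis=1) >= r
    # Holm step-down: reject sorted p-values until the first failure.
    ps = np.sort(p, axis=1)
    thresh = level / (m - np.arange(m))             # alpha/m, ..., alpha
    k = np.argmin(ps <= thresh, axis=1)             # index of first failure
    n_rej = np.where((ps <= thresh).all(axis=1), m, k)
    holm = n_rej >= r
    return bonf.mean(), holm.mean()

print(power_at_least_r(delta=np.array([2.5, 2.5, 0.0, 0.0]), rho=0.3, r=2))
```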

Relevance: 30.00%

Abstract:

This doctoral thesis consists of three chapters dealing with large-scale portfolio choice and risk measurement. The first chapter addresses the problem of estimation error in large portfolios within the mean-variance framework. The second chapter explores the importance of currency risk for portfolios of domestic assets and studies the links between the stability of large portfolio weights and currency risk. Finally, under the assumption that the decision maker is pessimistic, the third chapter derives the risk premium as a measure of pessimism and proposes a methodology for estimating the derived measures.

The first chapter improves optimal portfolio choice within Markowitz's (1952) mean-variance framework. This is motivated by the very disappointing results obtained when the mean and variance are replaced by their sample estimates, a problem that is amplified when the number of assets is large and the sample covariance matrix is singular or nearly singular. We examine four regularization techniques for stabilizing the inverse of the covariance matrix: ridge, spectral cut-off, Landweber-Fridman, and LARS Lasso. Each method involves a tuning parameter that must be selected. The main contribution of this part is to derive a purely data-driven method for selecting the regularization parameter optimally, i.e., so as to minimize the expected loss of utility. Specifically, a cross-validation criterion that takes the same form for all four regularization methods is derived. The resulting regularized rules are then compared with the rule that uses the data directly and with the naive 1/N strategy, in terms of expected utility loss and Sharpe ratio. These performances are measured in-sample and out-of-sample for various sample sizes and numbers of assets. The simulations and empirical illustration show that regularizing the covariance matrix significantly improves the data-based Markowitz rule and outperforms the naive portfolio, especially when the estimation-error problem is severe.

In the second chapter, we investigate the extent to which optimal and stable portfolios of domestic assets can reduce or eliminate currency risk, using monthly returns on 48 US industries over the period 1976-2008. To address the instability problems inherent in large portfolios, we adopt the spectral cut-off regularization method. This yields a family of optimal and stable portfolios, allowing investors to choose different percentages of principal components (or degrees of stability). Our empirical tests are based on an international asset pricing model (IAPM), in which currency risk is decomposed into two factors representing the currencies of industrialized countries on the one hand and those of emerging countries on the other. Our results indicate that currency risk is priced and time-varying for stable minimum-risk portfolios. Moreover, these strategies lead to a significant reduction in exposure to exchange-rate risk, while the contribution of the currency risk premium remains unchanged on average. Optimal portfolio weights are an alternative to market-capitalization weights; this chapter therefore complements the literature finding that the currency risk premium matters at the industry and country level in most countries.

In the last chapter, we derive a risk-premium measure for rank-dependent preferences and propose a measure of the degree of pessimism, given a distortion function. The measures introduced generalize the risk premium derived under expected utility theory, which is frequently violated both in experimental and in real-world settings. Within the large family of preferences considered, particular attention is paid to the CVaR (conditional value-at-risk). This risk measure is increasingly used for portfolio construction and is recommended as a complement to the VaR (value-at-risk) used since 1996 by the Basel Committee. In addition, we provide the statistical framework needed for inference on the proposed measures. Finally, the properties of the proposed estimators are assessed through a Monte Carlo study and an empirical illustration using daily US stock market returns over the period 2000-2011.
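
Two of the four regularizations discussed in the first chapter are easy to sketch for global-minimum-variance weights; the returns, ridge penalty and cut-off rank below are hypothetical stand-ins for the data-driven choices the thesis derives:

```python
# Sketch of ridge and spectral cut-off regularization applied to
# global-minimum-variance weights w = S^{-1} 1 / (1' S^{-1} 1):
# ridge adds lambda*I before inverting; spectral cut-off inverts only
# the top-k eigenvalues of the sample covariance matrix.
import numpy as np

def gmv_weights(S_inv):
    ones = np.ones(S_inv.shape[0])
    w = S_inv @ ones
    return w / w.sum()

def ridge_inverse(S, lam):
    return np.linalg.inv(S + lam * np.eye(S.shape[0]))

def spectral_cutoff_inverse(S, k):
    vals, vecs = np.linalg.eigh(S)            # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]    # sort descending
    return (vecs[:, :k] / vals[:k]) @ vecs[:, :k].T

rng = np.random.default_rng(0)
T, N = 120, 48                                 # e.g. monthly returns, 48 industries
returns = rng.normal(0.01, 0.05, size=(T, N))
S = np.cov(returns, rowvar=False)              # nearly singular when N ~ T

w_ridge = gmv_weights(ridge_inverse(S, lam=0.01))
w_cut = gmv_weights(spectral_cutoff_inverse(S, k=10))
print("ridge weight range:", w_ridge.min().round(4), w_ridge.max().round(4))
print("cut-off weight range:", w_cut.min().round(4), w_cut.max().round(4))
```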

Relevance: 30.00%

Abstract:

This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied here successfully, and that it also allows us to identify the sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. The technique is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the Zelterman and Chao estimators suggests a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources also provides a new understanding of how resampling techniques like the bootstrap can be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested to use relative measures, such as the observed-to-hidden ratio or the completeness-of-identification proportion, when approaching the question of sample size choice.
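
For concreteness, here is a sketch of the Chao lower-bound estimator from frequency-of-frequency counts together with a commonly cited variance approximation (Chao 1987); the counts are hypothetical, and the note's conditioning decomposition is aimed at refining exactly this kind of variance estimate:

```python
# Sketch of Chao's lower-bound population-size estimator,
# N_hat = n + f1^2 / (2 f2), with the commonly cited variance
# approximation var(N_hat) = f2 * (r^4/4 + r^3 + r^2/2), r = f1/f2.
import math

def chao_estimate(f1, f2, n):
    """f1, f2: units seen exactly once/twice; n: total distinct units seen."""
    n_hat = n + f1 ** 2 / (2 * f2)
    r = f1 / f2
    var_hat = f2 * (r ** 4 / 4 + r ** 3 + r ** 2 / 2)
    return n_hat, math.sqrt(var_hat)

n_hat, se = chao_estimate(f1=35, f2=12, n=120)   # hypothetical counts
print(f"N_hat = {n_hat:.1f} (SE ~ {se:.1f})")
```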

Relevance: 30.00%

Abstract:

We introduce a procedure for association-based analysis of nuclear families that allows for dichotomous and more general measurements of phenotype and for the inclusion of covariate information. Standard generalized linear models are used to relate the phenotype to its predictors. Our test procedure, based on the likelihood ratio, unifies the estimation of all parameters through the likelihood itself and yields maximum likelihood estimates of the genetic relative risk and interaction parameters. Our method has advantages over recently proposed conditional score tests, which include covariate information via a two-stage modelling approach, in modelling the covariate and gene-covariate interaction terms. We apply our method in a study of human systemic lupus erythematosus and the C-reactive protein that includes sex as a covariate.
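
A stripped-down sketch of the likelihood-ratio comparison, using an ordinary logistic GLM rather than the family-based conditioning of the actual procedure; all variable names and effect sizes are hypothetical:

```python
# Sketch of an LR test in a logistic GLM: the full model includes genotype,
# a covariate (sex) and their interaction; the null drops the genetic terms.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 300
genotype = rng.integers(0, 3, size=n)           # 0/1/2 risk-allele count
sex = rng.integers(0, 2, size=n)
eta = -1.0 + 0.5 * genotype + 0.3 * sex
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X_full = sm.add_constant(np.column_stack([genotype, sex, genotype * sex]).astype(float))
X_null = sm.add_constant(sex.astype(float))
full = sm.GLM(y, X_full, family=sm.families.Binomial()).fit()
null = sm.GLM(y, X_null, family=sm.families.Binomial()).fit()

lr = 2 * (full.llf - null.llf)                  # genotype + interaction: 2 df
df = X_full.shape[1] - X_null.shape[1]
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df):.4f}")
```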

Relevance: 30.00%

Abstract:

This paper considers methods for testing for superiority or non-inferiority in active-control trials with binary data, when the relative treatment effect is expressed as an odds ratio. Three asymptotic tests for the log-odds ratio based on the unconditional binary likelihood are presented, namely the likelihood ratio, Wald and score tests. All three tests, along with the corresponding confidence intervals, can be implemented straightforwardly in standard statistical software packages. Simulations indicate that the three alternatives are similar in terms of Type I error, with values close to the nominal level; however, when the non-inferiority margin becomes large, the score test slightly exceeds the nominal level. In general, the highest power is obtained from the score test, although all three tests are similar and the observed differences in power are not of practical importance.
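
Of the three tests, the Wald version is the simplest to sketch; for 2x2 counts it tests H0: OR <= delta0 against non-inferiority using the usual log-odds-ratio standard error (the counts and margin below are hypothetical, and the score and LR tests, which require constrained estimation, are omitted):

```python
# Sketch of the Wald non-inferiority test for an odds ratio:
# z = (log(OR_hat) - log(delta0)) / SE, SE = sqrt(1/a + 1/b + 1/c + 1/d).
import math
from scipy.stats import norm

def wald_noninferiority(a, b, c, d, delta0):
    """a/b: events/non-events on the test arm; c/d: on the active control."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = (log_or - math.log(delta0)) / se
    return z, norm.sf(z)                      # one-sided p-value

z, p = wald_noninferiority(a=80, b=20, c=75, d=25, delta0=0.5)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```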

Relevance: 30.00%

Abstract:

A greedy technique is proposed to construct parsimonious kernel classifiers using the orthogonal forward selection method and boosting, with the Fisher ratio as the class-separability measure. Unlike most kernel classification methods, which restrict kernel means to the training input data and use a fixed common variance for all kernel terms, the proposed technique can tune both the mean vector and the diagonal covariance matrix of each individual kernel by incrementally maximizing the Fisher ratio. An efficient weighted optimization method based on boosting is developed to append kernels one by one in an orthogonal forward selection procedure. Experimental results obtained using this construction technique demonstrate that it offers a viable alternative to existing state-of-the-art kernel modeling methods for constructing sparse Gaussian radial basis function network classifiers that generalize well.
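
The selection criterion itself is compact; the sketch below computes the Fisher ratio of a candidate Gaussian kernel column and greedily picks the best (center, width) pair, restricting centers to training points for brevity, whereas the paper tunes means and diagonal covariances freely and adds orthogonalization and boosting:

```python
# Sketch of Fisher-ratio-driven greedy kernel selection:
# F = (m1 - m2)^2 / (s1^2 + s2^2) of the kernel responses, maximized over
# candidate (center, width) pairs for a two-class problem.
import numpy as np

def fisher_ratio(z, y):
    z1, z2 = z[y == 1], z[y == 0]
    return (z1.mean() - z2.mean()) ** 2 / (z1.var() + z2.var() + 1e-12)

def gaussian_column(X, center, width):
    return np.exp(-np.sum((X - center) ** 2, axis=1) / (2 * width ** 2))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.repeat([0, 1], 50)

# Greedy pick: best (center, width) among candidates by Fisher ratio.
candidates = [(X[i], w) for i in range(len(X)) for w in (0.5, 1.0, 2.0)]
best = max(candidates, key=lambda cw: fisher_ratio(gaussian_column(X, *cw), y))
print("chosen width:", best[1],
      "F =", round(fisher_ratio(gaussian_column(X, *best), y), 3))
```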

Relevance: 30.00%

Abstract:

The ratio bias (the tendency to prefer bets on probabilities expressed as a ratio of large numbers over normatively equivalent or superior probabilities expressed as a ratio of small numbers) has recently attracted growing attention, with researchers, especially in health economics, emphasizing the policy importance of the phenomenon. Although the bias has been replicated several times, some doubts remain about its economic significance. Our two experiments show that the bias disappears once order effects are excluded and once salient and dominant incentives are provided. This holds true for both choice and valuation tasks, and adding context to the decision problem does not alter the finding. No ratio bias could be found in between-subject tests either, which leads us to conclude that the policy relevance of the phenomenon is doubtful at best.

Relevance: 30.00%

Abstract:

This study determined the sensory shelf life of a commercial brand of chocolate and carrot cupcakes, with the aim of extending the current 120-day shelf life to 180 days. The appearance, texture, flavor and overall quality of cakes stored for six different storage times were evaluated by 102 consumers. The data were analyzed by analysis of variance and linear regression. For both flavors, texture showed the greatest loss of acceptance over the storage period, with a mean acceptance close to the indifference point of the hedonic scale at 120 days. Nevertheless, appearance, flavor and overall quality remained acceptable up to 150 days. The end of shelf life was estimated at about 161 days for the chocolate cakes and 150 days for the carrot cakes. This study showed that the current 120-day shelf life can be extended to 150 days for the carrot cake and to 160 days for the chocolate cake; however, the 180-day shelf life desired by the company was not achievable.

PRACTICAL APPLICATIONS: This research shows the adequacy of sensory acceptance tests for determining the shelf life of two food products (chocolate and carrot cupcakes). This application is useful because the precise determination of the shelf life of a food product is of vital importance for its commercial success. The maximum storage time should always be evaluated when developing or reformulating products or when changing packaging or storage conditions. Once the physicochemical and microbiological stability of a product is guaranteed, the sensory changes that could affect consumer acceptance determine the end of its shelf life. Thus, sensitive and reliable methods for estimating the sensory shelf life of a product are very important. The findings also show the importance of determining the shelf life of each product separately, and of avoiding the use of the shelf life estimated for one product for other, similar products.
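
The shelf-life estimate follows from the regression step described above: fit acceptance against storage time and solve for the time at which the fitted line crosses an acceptability cutoff. A sketch with hypothetical data points, not the study's:

```python
# Sketch of shelf-life estimation by linear regression: regress mean
# acceptance on storage time and solve for the cutoff-crossing time.
import numpy as np

days = np.array([0, 30, 60, 90, 120, 150])
acceptance = np.array([7.8, 7.5, 7.1, 6.7, 6.2, 5.9])   # 9-point hedonic means
slope, intercept = np.polyfit(days, acceptance, 1)

cutoff = 5.8                                             # acceptability limit
shelf_life = (cutoff - intercept) / slope
print(f"estimated shelf life: {shelf_life:.0f} days")
```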