954 results for Sample selection


Relevance:

30.00%

Abstract:

In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points needed to perform the tests have been derived for the location-scale model; hence relying on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of the standard linear regression model, based on the technique of Monte Carlo tests.
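
Because the null distribution of a residual-based statistic is invariant to the regression coefficients and the error variance, it can be simulated exactly. A minimal Python sketch of the idea, using the Jarque-Bera statistic purely as an illustration (the abstract's method applies to any simulable residual-based statistic):

```python
import numpy as np

def jarque_bera(u):
    """Jarque-Bera statistic computed from a residual vector."""
    u = u - u.mean()
    s = u.std()
    skew = np.mean(u**3) / s**3
    kurt = np.mean(u**4) / s**4
    return len(u) / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0)

def mc_normality_pvalue(y, X, stat=jarque_bera, n_rep=999, seed=0):
    """Monte Carlo p-value for normality of regression disturbances.

    Under H0 the statistic computed from OLS residuals does not depend on
    the coefficients or the error variance, so its null distribution can
    be simulated exactly by drawing i.i.d. N(0,1) errors.
    """
    rng = np.random.default_rng(seed)
    H = X @ np.linalg.pinv(X)            # hat matrix; residuals are (I - H) y
    s0 = stat(y - H @ y)                 # statistic on the observed residuals
    sims = np.array([stat(e - H @ e)
                     for e in rng.standard_normal((n_rep, len(y)))])
    return (1 + np.sum(sims >= s0)) / (n_rep + 1)
```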

Relevance:

30.00%

Abstract:

The technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] provides an attractive method of building exact tests from statistics whose finite-sample distribution is intractable but can be simulated (provided it does not involve nuisance parameters). We extend this method in two ways: first, by allowing for MC tests based on exchangeable, possibly discrete test statistics; second, by generalizing the method to statistics whose null distributions involve nuisance parameters (maximized MC tests, MMC). Simplified, asymptotically justified versions of the MMC method are also proposed, and it is shown that they provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics (e.g., unit root asymptotics). Parametric bootstrap tests may be interpreted as a simplified version of the MMC method (without the general validity properties of the latter).
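
For a continuous statistic the MC p-value is simply the rank of the observed statistic among N simulated draws; discreteness is handled by randomized tie-breaking. A small sketch under those assumptions (the MMC extension would additionally maximize this p-value over a grid of nuisance-parameter values):

```python
import numpy as np

def mc_pvalue(s0, sims, rng=None):
    """Monte Carlo p-value with randomized tie-breaking.

    Ties between the observed statistic s0 and the simulated values matter
    when the statistic is discrete; breaking them with auxiliary uniform
    draws is the standard device that keeps the test exact.
    """
    rng = rng or np.random.default_rng()
    n = len(sims)
    u0, u = rng.uniform(), rng.uniform(size=n)
    # count simulations strictly above s0, plus ties that win the uniform draw
    g = np.sum(sims > s0) + np.sum((sims == s0) & (u >= u0))
    return (g + 1) / (n + 1)
```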

Relevance:

30.00%

Abstract:

The last decade has seen growing interest in the econometric literature in the problems posed by weak instrumental variables, that is, situations in which the instruments are only weakly correlated with the variable to be instrumented. Indeed, it is well known that when instruments are weak, the distributions of the Student, Wald, likelihood-ratio and Lagrange-multiplier statistics are no longer standard and often depend on nuisance parameters. Several empirical studies, notably on returns to education [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and on asset pricing (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], in which the instruments are weakly correlated with the variable to be instrumented, have shown that using these statistics often leads to unreliable results. One remedy is to use identification-robust tests [Anderson and Rubin (1949), Moreira (2002), Kleibergen (2003), Dufour and Taamouti (2007)]. However, there is no econometric literature on the quality of identification-robust procedures when the available instruments are endogenous, or both endogenous and weak. This raises the question of what happens to identification-robust inference procedures when some instrumental variables assumed to be exogenous are in fact not. More precisely, what happens if an invalid instrument is added to a set of valid ones? Do these procedures behave differently? And if the endogeneity of instrumental variables poses major difficulties for statistical inference, can we propose test procedures that select instruments when they are both strong and valid? Is it possible to propose instrument-selection procedures that remain valid even under weak identification? This thesis focuses on structural models (simultaneous-equations models) and answers these questions through four essays.

The first essay is published in the Journal of Statistical Planning and Inference 138 (2008) 2649-2661. In this essay, we analyze the effects of instrument endogeneity on two identification-robust test statistics, the Anderson-Rubin statistic (AR, 1949) and the Kleibergen statistic (K, 2003), with or without weak instruments. First, when the parameter controlling instrument endogeneity is fixed (does not depend on the sample size), we show that all these procedures are in general consistent against the presence of invalid instruments (i.e., they detect invalid instruments) regardless of instrument quality (strong or weak). We also describe cases in which this consistency may fail, while the asymptotic distribution is modified in a way that can lead to size distortions even in large samples. This includes, in particular, cases where the two-stage least squares estimator remains consistent but the tests are asymptotically invalid. Second, when the instruments are locally exogenous (i.e., the endogeneity parameter converges to zero as the sample size grows), we show that these tests converge to noncentral chi-square distributions, whether the instruments are strong or weak. We also characterize the situations in which the noncentrality parameter is zero and the asymptotic distribution of the statistics remains the same as with valid instruments (despite the presence of invalid instruments).

The second essay studies the impact of weak instruments on Durbin-Wu-Hausman (DWH) specification tests and on the Revankar-Hartley (1973) test. We provide a finite-sample and large-sample analysis of the distribution of these tests under the null hypothesis (size) and the alternative (power), including cases where identification is deficient or weak (weak instruments). Our finite-sample analysis provides several insights as well as extensions of earlier procedures. In particular, characterizing the finite-sample distribution of these statistics allows the construction of exact Monte Carlo exogeneity tests even with non-Gaussian errors. We show that these tests are typically robust to weak instruments (size is controlled). Moreover, we characterize the power of the tests, which clearly exhibits the factors that determine power. We show that the tests have no power when all instruments are weak [similar to Guggenberger (2008)]. However, power exists as long as at least one instrument is strong. Guggenberger's (2008) conclusion concerns the case where all instruments are weak (a case of minor practical interest). Our asymptotic theory under weakened assumptions confirms the finite-sample theory. Furthermore, a Monte Carlo analysis indicates that: (1) the ordinary least squares estimator is more efficient than two-stage least squares when instruments are weak and endogeneity is moderate [similar to Kiviet and Niemczyk (2007)]; (2) pretest estimators based on exogeneity tests perform very well relative to two-stage least squares. This suggests that the instrumental-variables method should be applied only when strong instruments are clearly available; hence Guggenberger's (2008) conclusions are qualified and could be misleading. We illustrate our theoretical results through simulation experiments and two empirical applications: the relation between trade openness and economic growth, and the well-known problem of returns to education.

The third essay extends the Wald-type exogeneity test proposed by Dufour (1987) to settings where the regression errors are non-normal. We propose a new version of the test that is valid even with non-Gaussian errors. Unlike the usual exogeneity tests (Durbin-Wu-Hausman and Revankar-Hartley), the Wald test can address a problem common in empirical work: testing the partial exogeneity of a subset of variables. We propose two new pretest estimators based on the Wald test that outperform the usual IV estimator (in terms of mean squared error) when the instruments are weak and endogeneity is moderate. We also show that this test can serve as an instrument-selection procedure. We illustrate the theoretical results with two empirical applications: the well-known wage-equation model [Angrist and Krueger (1991, 1999)] and returns to scale [Nerlove (1963)]. Our results suggest that a mother's education helps explain her son's dropping out of school, that output is an endogenous variable in the estimation of the firm's cost function, and that the price of fuel is a valid instrument for output.

The fourth essay addresses two very important problems in the econometric literature. First, although the original or extended Wald test allows the construction of confidence regions and tests of linear restrictions on covariances, it assumes that the model parameters are identified. When identification is weak (instruments weakly correlated with the variable to be instrumented), the test is in general no longer valid. This essay develops an identification-robust (weak-instrument-robust) inference procedure for building confidence regions for the covariance matrix between the regression errors and the (possibly endogenous) explanatory variables. We provide analytic expressions for the confidence regions and characterize necessary and sufficient conditions under which they are bounded. The proposed procedure remains valid even in small samples and is also asymptotically robust to heteroskedasticity and autocorrelation of the errors. Second, these results are used to develop identification-robust partial exogeneity tests. Monte Carlo simulations indicate that these tests control size and have power even when the instruments are weak. This allows us to propose a valid instrument-selection procedure even in the presence of an identification problem. The instrument-selection procedure is based on two new pretest estimators that combine the usual IV estimator with partial IV estimators. Our simulations show that: (1) like the ordinary least squares estimator, the partial IV estimators are more efficient than the usual IV estimator when the instruments are weak and endogeneity is moderate; (2) the pretest estimators overall perform very well compared with the usual IV estimator. We illustrate our theoretical results with two empirical applications: the relation between trade openness and economic growth, and the returns-to-education model. In the first application, earlier studies concluded that the instruments were not too weak [Dufour and Taamouti (2007)], whereas they are very weak in the second [Bound et al. (1995), Doko and Dufour (2009)]. Consistent with our theoretical results, we find unbounded confidence regions for the covariance when the instruments are quite weak.
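
As a concrete reference point for the identification-robust procedures discussed above, here is a minimal numpy sketch of the Anderson-Rubin test for the simple case with no included exogenous regressors (those would need to be partialled out first). Under exogenous instruments and normal errors the statistic follows an exact F distribution, whatever the instrument strength:

```python
import numpy as np
from scipy import stats

def anderson_rubin_test(y, Y, Z, beta0):
    """Anderson-Rubin (1949) test of H0: beta = beta0 in y = Y @ beta + u,
    with instrument matrix Z (n x k). Robust to weak instruments because
    the null distribution does not depend on instrument strength."""
    n, k = Z.shape
    u0 = y - Y @ beta0                       # structural residuals under H0
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)    # projection onto span(Z)
    ssr_fit = u0 @ P @ u0
    ssr_res = u0 @ u0 - ssr_fit
    ar = (ssr_fit / k) / (ssr_res / (n - k))
    return ar, stats.f.sf(ar, k, n - k)      # statistic and p-value
```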

Relevance:

30.00%

Abstract:

This paper describes the basis of citation auctions as a new approach to selecting scientific papers for publication. Our main idea is to use an auction to select papers for publication through bids that, unlike the state of the art, consist of the number of citations a scientist expects to receive if the paper is published. Hence, the citation auction is the selection process itself, and no reviewers are involved. The benefits of the proposed approach are two-fold. First, the cost of refereeing will be either totally eliminated or significantly reduced, because the citation-auction process does not require prior understanding of a paper's content to judge the quality of its contribution. The method also does not prejudge the content of the paper, so it increases the openness of publications to new ideas. Second, scientists will be much more committed to the quality of their papers, paying close attention to distributing and explaining their papers in detail to maximize the number of citations they receive. Sample analyses of citation counts, collected via Google Scholar, are provided for papers published during 1999-2004 in one journal and during 2003-2005 in a series of conferences (in a totally different discipline). Finally, a simple simulation of an auction is given to outline the behaviour of the citation auction approach.
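
The paper's own simulation details are not reproduced here; the toy Python sketch below only illustrates the mechanism described: authors bid the citations they expect, the highest bids win the publication slots, and realized citations later reveal any shortfall. All distributions and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

n_papers, n_slots = 50, 10
true_quality = rng.gamma(shape=2.0, scale=5.0, size=n_papers)  # expected citations

# Each author bids the citations they expect; optimistic noise is allowed.
bids = true_quality * rng.uniform(0.8, 1.3, size=n_papers)

# The auction selects the n_slots highest bids; no reviewers are involved.
winners = np.argsort(bids)[-n_slots:]

# Realized citations are noisy around true quality (Poisson as a toy model).
citations = rng.poisson(true_quality[winners])

shortfall = bids[winners] - citations      # positive = author over-promised
print("mean bid:", bids[winners].mean().round(1),
      "mean citations:", citations.mean().round(1),
      "mean shortfall:", shortfall.mean().round(1))
```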

Relevance:

30.00%

Abstract:

The steadily accumulating literature on technical efficiency in fisheries attests to the importance of efficiency as an indicator of fleet condition and as an object of management concern. In this paper, we extend previous work by presenting a Bayesian hierarchical approach that yields both efficiency estimates and, as a byproduct of the estimation algorithm, probabilistic rankings of the relative technical efficiencies of fishing boats. The estimation algorithm is based on recent advances in Markov Chain Monte Carlo (MCMC) methods—Gibbs sampling, in particular—which have not been widely used in fisheries economics. We apply the method to a sample of 10,865 boat trips in the US Pacific hake (or whiting) fishery during 1987–2003. We uncover systematic differences between efficiency rankings based on sample mean efficiency estimates and those that exploit the full posterior distributions of boat efficiencies to estimate the probability that a given boat has the highest true mean efficiency.
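
The probabilistic ranking described is straightforward once posterior draws are in hand: the probability that a boat has the highest true mean efficiency is estimated by the share of MCMC draws in which it attains the maximum. A sketch, assuming the Gibbs sampler's retained draws of the boats' mean efficiencies are already stored in a matrix (the toy numbers below are invented):

```python
import numpy as np

def prob_highest_efficiency(draws):
    """Probability that each boat has the highest mean technical efficiency.

    `draws` is an (n_draws, n_boats) array of MCMC (e.g., Gibbs) samples of
    the boats' mean efficiencies; each boat's probability is the fraction
    of draws in which it attains the maximum."""
    best = np.argmax(draws, axis=1)
    return np.bincount(best, minlength=draws.shape[1]) / draws.shape[0]

# toy posterior: 3 boats, 5000 retained Gibbs draws
rng = np.random.default_rng(1)
draws = rng.normal(loc=[0.80, 0.82, 0.78], scale=0.02, size=(5000, 3))
print(prob_highest_efficiency(draws))   # boat 2 should win most draws
```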

Relevance:

30.00%

Abstract:

A number of authors have proposed clinical trial designs involving the comparison of several experimental treatments with a control treatment in two or more stages. At the end of the first stage, the most promising experimental treatment is selected, and all other experimental treatments are dropped from the trial. Provided it is good enough, the selected experimental treatment is then compared with the control treatment in one or more subsequent stages. The analysis of data from such a trial is problematic because of the treatment selection and the possibility of stopping at interim analyses. These aspects lead to bias in the maximum-likelihood estimate of the advantage of the selected experimental treatment over the control and to inaccurate coverage for the associated confidence interval. In this paper, we evaluate the bias of the maximum-likelihood estimate and propose a bias-adjusted estimate. We also propose an approach to the construction of a confidence region for the vector of advantages of the experimental treatments over the control based on an ordering of the sample space. These regions are shown to have accurate coverage, although they are also shown to be necessarily unbounded. Confidence intervals for the advantage of the selected treatment are obtained from the confidence regions and are shown to have more accurate coverage than the standard confidence interval based upon the maximum-likelihood estimate and its asymptotic standard error.
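
The selection bias can be seen in a stripped-down simulation: with several truly equivalent experimental arms, the maximum-likelihood estimate for whichever arm looks best at selection is biased upward. This toy normal-means sketch ignores the second stage and the interim stopping that the paper also handles:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, sigma, n_sim = 4, 100, 1.0, 20000   # k experimental arms, n per arm
true_adv = np.zeros(k)                    # all arms truly equal to control

# Stage 1: estimate each arm's advantage over control, keep the best arm.
est = rng.normal(true_adv, sigma * np.sqrt(2.0 / n), size=(n_sim, k))
selected = est.max(axis=1)                # MLE of the selected arm's advantage

print("true advantage:        0.0")
print("mean estimate of best: %.3f" % selected.mean())  # clearly above zero
```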

Relevance:

30.00%

Abstract:

Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling the determination of the sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
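
The operating characteristics of such a select-the-best design are easy to approximate by simulation. A hedged sketch under simplifying assumptions: normal outcomes, selection of the best arm at a single interim look, and a fixed final critical value; note that the naive 1.96 cutoff ignores both selection and repeated looks, which is precisely what the paper's stopping boundaries correct for.

```python
import numpy as np

def power_select_best(delta, n1=50, n2=150, sigma=1.0,
                      crit=1.96, n_sim=20000, seed=0):
    """Monte Carlo power of a two-stage design: the best of several
    experimental arms is selected at the first interim analysis, then the
    selected arm is compared with control on the pooled data."""
    rng = np.random.default_rng(seed)
    se1 = sigma * np.sqrt(2.0 / n1)                 # stage-1 standard error
    se2 = sigma * np.sqrt(2.0 / n2)                 # stage-2 standard error
    se = sigma * np.sqrt(2.0 / (n1 + n2))           # pooled standard error
    rejections = 0
    for _ in range(n_sim):
        est1 = rng.normal(delta, se1)               # arm-vs-control estimates
        best = int(np.argmax(est1))                 # select the best arm
        est2 = rng.normal(delta[best], se2)         # stage-2 estimate, best arm
        pooled = (n1 * est1[best] + n2 * est2) / (n1 + n2)
        rejections += pooled / se > crit
    return rejections / n_sim

# e.g. one effective arm among three
print(power_select_best(delta=np.array([0.0, 0.1, 0.3])))
```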

Relevance:

30.00%

Abstract:

This paper examines the selectivity and market-timing performance of a sample of 21 UK property funds over the period Q3 1977 through Q2 1987. The main finding is that there is evidence of some superior selectivity performance on the part of UK property funds, but that few funds are able to time the market successfully.
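
The abstract does not state which model the paper uses; a standard way to separate selectivity from market timing in this literature is the Henriksson-Merton regression, sketched here with statsmodels (function and variable names are illustrative):

```python
import numpy as np
import statsmodels.api as sm

def henriksson_merton(rp, rm, rf):
    """Selectivity (alpha) and market timing (gamma) via the
    Henriksson-Merton regression:
        rp - rf = alpha + beta*(rm - rf) + gamma*max(0, -(rm - rf)) + e
    rp, rm, rf: fund, market, and risk-free return series. A positive
    alpha suggests selectivity skill; a positive gamma suggests
    successful market timing."""
    ex_p, ex_m = rp - rf, rm - rf
    X = sm.add_constant(np.column_stack([ex_m, np.maximum(0.0, -ex_m)]))
    return sm.OLS(ex_p, X).fit()   # .params gives [alpha, beta, gamma]
```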

Relevance:

30.00%

Abstract:

This study examines differences in net selling price for residential real estate across male and female agents. A sample of 2,020 home sales transactions from Fulton County, Georgia is analyzed in a two-stage least squares, geospatial autoregressive corrected, semi-log hedonic model to test for gender and gender selection effects. Although agent gender seems to play a role in naïve models, its role becomes inconclusive as variables controlling for the buyers' and sellers' possible price and time-on-market expectations are introduced. Clear differences in real estate sales prices, time on market, and agent incomes across genders are unlikely to be due to differences in negotiation performance between genders or to the mix of genders in a two-agent negotiation. The evidence suggests an interesting alternative to agent performance: buyers and sellers with different reservation price and time-on-market expectations, such as those selling foreclosure homes, tend to select agents along gender lines.
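
Leaving aside the geospatial autoregressive correction, the core two-stage least squares step has a compact closed form. A minimal numpy sketch, with y standing for the log of the sale price in the semi-log hedonic specification:

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS: project the regressors X on the instruments Z, then regress
    y on the fitted values. Here y would be log(sale price); X the hedonic
    attributes plus the endogenous agent variables; Z the instruments."""
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto instruments
    Xhat = PZ @ X
    return np.linalg.solve(Xhat.T @ X, Xhat.T @ y)   # beta_2SLS
```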

Relevance:

30.00%

Abstract:

This letter presents an effective approach for selecting appropriate terrain modeling methods when forming a digital elevation model (DEM). The approach balances modeling accuracy against modeling speed. A terrain complexity index is defined to represent a terrain's complexity, and a support vector machine (SVM) classifies terrain surfaces as either complex or moderate based on this index together with the terrain elevation range. The classification result recommends a terrain modeling method for a given data set in accordance with its required modeling accuracy. Sample terrain data from the lunar surface are used to construct an experimental data set. The results show that the terrain complexity index properly reflects terrain complexity, and that the SVM classifier derived from both the terrain complexity index and the terrain elevation range is more effective and generic than one designed from either feature alone. The statistical results show an average SVM classification accuracy of about 84.3% ± 0.9% for the two terrain types (complex or moderate). For various ratios of complex to moderate terrain in a selected data set, the DEM modeling speed increases by up to 19.5% at a given DEM accuracy.
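
A hedged scikit-learn sketch of the classification step: the two features are the terrain complexity index and the elevation range, and the predicted class recommends a modeling method. The training values below are invented; the letter trains on lunar-surface samples.

```python
import numpy as np
from sklearn.svm import SVC

# features: [terrain complexity index, elevation range (m)] per tile
# labels:   1 = complex terrain, 0 = moderate terrain
X_train = np.array([[0.82, 410.0], [0.31, 95.0], [0.67, 300.0], [0.18, 60.0]])
y_train = np.array([1, 0, 1, 0])

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

# recommend a modeling method based on the predicted terrain class
tile = np.array([[0.55, 220.0]])
method = "high-accuracy interpolator" if clf.predict(tile)[0] else "fast interpolator"
print(method)
```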

Relevance:

30.00%

Abstract:

We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We suggest a new two-step model selection procedure, a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. A Monte Carlo study explores the finite-sample performance of this procedure and evaluates the forecasting accuracy of the models it selects. Two empirical applications confirm the usefulness of the proposed model selection procedure for forecasting.
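
The paper's two-step hybrid procedure chooses the lag length, the cointegrating rank and the short-run rank jointly; the sketch below shows only the familiar first ingredient, lag selection by standard information criteria in statsmodels, on invented data:

```python
import numpy as np
from statsmodels.tsa.api import VAR

# toy bivariate system of two I(1) series
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal((200, 2)), axis=0)

res = VAR(y).select_order(maxlags=8)
print(res.selected_orders)   # lag chosen by AIC, BIC, HQ and FPE
```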

Relevance:

30.00%

Abstract:

Choosing a supplier properly and efficiently has challenged practitioners and academics since the 1960s. Since then, countless studies have been performed, and relevant changes in the business scenario have been considered, such as global sourcing, quality orientation, and just-in-time practices. It is almost a consensus that quality should be the selection driver; however, some controversial findings have questioned this agreement. Therefore, one objective of the study was to identify the supplier selection criteria and reopen this discussion. Moreover, since Dickson (1966) suggested the existing business relationship as a selection criterion, we reviewed the importance of the business relationship for the company and noted a set of potential negative effects that could arise from it. Considering these side effects, this research investigated how the relationship can influence supplier selection and how its harmful effects can affect the selection process. The impact of this phenomenon was investigated cross-nationally. The research strategy adopted was a vignette-based controlled experiment combined with discrete choice analysis, and data collection was performed in China and Brazil.

Five major findings emerged from the results. First, when purchasers were asked to declare their supplier selection priorities, quality was stated as the most important criterion, independently of country and relationship. This result is consistent with diverse studies since the 1960s. However, when purchasers were exposed to a multi-criteria trade-off situation, their actual selection priorities deviated from what they had declared. In the actual decision-making without the influence of the buyer-supplier relationship, Brazilian purchasers focused on price, and Chinese buyers prioritized delivery, then price. This observation reinforces some controversial earlier studies by Verma & Pullman (1998) and Hirakubo & Kublin (1998). Second, by introducing the buyer-supplier relationship (operationalized via relational capital) into the supplier selection process, this research extended the existing studies and found that Brazilian buyers still focused on price; the relationship became just another selection criterion, like quality and delivery. In the Chinese sample, however, the results suggested that quality was discarded entirely and the decision was made mainly on price and relationship. The third finding suggested that relational capital can legitimate the quality and sustainability of the supplier, replace these selection criteria, and make the decision task less complex. Additionally, with relational capital, the decisions were associated with several biases, such as availability, commitment, and confirmation biases. Fourth, relational capital induced buyers in both countries to relax their purchasing requirements (quality, delivery and sustainability), leading to potential negative effects; in the Brazilian sample, the willingness to pay a higher price for a lower-quality offer proved to be a potentially counterproductive and suboptimal decision. Finally, the last finding concerns the cultural effect on buyers' decisions: the more relation-oriented a purchaser's cultural background is, the more the purchaser tends to use relational capital as a decision heuristic and, thus, the more susceptible the purchaser is to the relationship's potential side effects.
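
Discrete choice analysis of vignette data of this kind is typically done with a conditional (McFadden) logit; the study's exact specification is not given, so the self-contained sketch below runs on simulated choice tasks, with the attribute names (price, quality, delivery, relational capital) and all parameter values purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def conditional_logit_nll(beta, X, choice):
    """Negative log-likelihood of a conditional (McFadden) logit.
    X: (n_tasks, n_alternatives, n_attributes) supplier profiles, e.g.
    price, quality, delivery, relational capital; choice: chosen index."""
    v = X @ beta                                   # deterministic utilities
    ll = v[np.arange(len(choice)), choice] - np.log(np.exp(v).sum(axis=1))
    return -ll.sum()

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2, 4))                   # 500 tasks, 2 suppliers
true_beta = np.array([-1.0, 0.8, 0.5, 0.6])
u = X @ true_beta + rng.gumbel(size=(500, 2))      # Gumbel errors => logit
choice = u.argmax(axis=1)

fit = minimize(conditional_logit_nll, np.zeros(4), args=(X, choice))
print(fit.x.round(2))                              # roughly recovers true_beta
```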

Relevance:

30.00%

Abstract:

This study aims to contribute to the literature on forecasting stock returns in emerging markets. We use Autometrics to select relevant predictors among macroeconomic, microeconomic and technical variables. We develop predictive models for the Brazilian market premium, measured as the excess return over the Selic interest rate, and for Itaú SA, Itaú-Unibanco and Bradesco stock returns. We find that, for the market premium, an ADL model with error correction is able to outperform the benchmarks in terms of economic performance. For individual stock returns, there is a trade-off between the statistical properties and the out-of-sample performance of the model.
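
Autometrics itself is a general-to-specific selection algorithm (in OxMetrics) and is not reproduced here; the evaluation step, comparing a predictive regression with the historical-mean benchmark by out-of-sample R², can be sketched as follows, assuming an expanding-window scheme on already-lagged predictors:

```python
import numpy as np

def oos_r2(y, X, window=120):
    """Out-of-sample R^2 of a predictive regression against the
    historical-mean benchmark. X holds the (already lagged) predictors,
    aligned so that row t is available when forecasting y[t]."""
    err_model, err_mean = [], []
    for t in range(window, len(y)):
        Xt = np.column_stack([np.ones(t), X[:t]])
        beta = np.linalg.lstsq(Xt, y[:t], rcond=None)[0]
        pred = np.concatenate(([1.0], X[t])) @ beta
        err_model.append((y[t] - pred) ** 2)
        err_mean.append((y[t] - y[:t].mean()) ** 2)
    return 1.0 - np.sum(err_model) / np.sum(err_mean)
```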

Relevance:

30.00%

Abstract:

Workplace health and wellness programs have been adopted by companies as a way to improve employee health, and many studies report positive economic returns on the investments involved. However, more recent studies with better methodology have found smaller returns. The objective of this study was to investigate whether characteristics of health and wellness programs predict hospital admission costs (in current Brazilian reais) and the proportion of employees taking sick leave between April 2014 and May 2015, in a non-random sample of companies in Brazil, through a partnership with a healthcare 'big data' analytics company. A questionnaire on workplace health program characteristics was answered by six large Brazilian companies. Data from these six questionnaires (presence and age of a health program; its features, namely screening activities, health education, links with other company programs, integration of the program into the company's structure, and health-oriented work environments; and the adoption of financial incentives for employee adherence to the program), together with each employee's age, gender and health plan category, were used to build a database of more than 76,000 individuals. In a multiple regression model with stepwise variable selection, employee age was positively associated, and both program age and the employee's 'premium' health plan category were negatively associated, with hospital admission costs (as expected). Unexpectedly, the inclusion of screening programs and health education initiatives in corporate health and wellness programs emerged as significant positive predictors of hospital admission costs. To avoid mistakenly including maternity leave, only sick leave data for male employees were analyzed (available for only two of the companies included, totaling 18,957 male employees). Comparing these data with a Z test for proportions, the company whose health program includes activities aimed at quitting harmful habits (such as smoking and heavy drinking) and at controlling diabetes and hypertension, and which adopts financial incentives for employee adherence, had a lower proportion of employees on sick leave in the period analyzed than the company without these characteristics (also as expected). However, the company with the lower proportion of employees on sick leave was also the one that includes a screening program among its health program activities. Potential threats to the internal and external validity of these results are discussed, as are possible explanations for the association of screening and health education programs with worse health indicators in this sample of companies. New, better-designed studies with larger, random samples are needed to validate these results and potentially improve their internal and external validity.
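
The sick-leave comparison uses a Z test for two proportions; a sketch with statsmodels, where the counts are hypothetical stand-ins (the study's actual counts are not reported in the abstract):

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# employees on sick leave out of all male employees, for the two companies
count = np.array([812, 1045])       # hypothetical counts of men on leave
nobs = np.array([9500, 9457])       # male employees per company (~18,957 total)

stat, pval = proportions_ztest(count, nobs)
print(f"z = {stat:.2f}, p = {pval:.4f}")
```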