10 results for Quasi-likelihood estimator

in Repositório digital da Fundação Getúlio Vargas - FGV


Relevance:

80.00%

Publisher:

Abstract:

This thesis examined the characteristics of stock portfolios optimized under the mean-variance criterion and built from robust estimates of risk and return. The motivation is the typical distribution of financial assets, which exhibits outliers and more kurtosis than the normal distribution. The portfolios were compared on three properties: stability, variability, and the Sharpe ratios they attained. The general result shows that portfolios built from robust estimates of risk and return improve in stability and variability; this improvement, however, is insufficient to distinguish their Sharpe ratios from those of portfolios built with maximum-likelihood estimates of risk and return.
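For readers who want to reproduce the comparison, a minimal sketch in Python: classical (maximum-likelihood) mean and covariance estimates versus simple robust substitutes (coordinate-wise medians and the minimum covariance determinant estimator from scikit-learn), each fed to the same mean-variance rule. The data-generating process, the risk-aversion parameter, and the choice of robust estimators are illustrative assumptions, not the ones used in the thesis.

```python
import numpy as np
from sklearn.covariance import MinCovDet  # robust (MCD) covariance estimator

def mv_weights(mu, Sigma, risk_aversion=5.0):
    # Unconstrained mean-variance weights, rescaled to sum to one.
    w = np.linalg.solve(Sigma, mu) / risk_aversion
    return w / w.sum()

rng = np.random.default_rng(0)
R = rng.standard_t(df=4, size=(500, 6)) * 0.02 + 0.001  # heavy-tailed returns

# Classical (maximum-likelihood) estimates of return and risk
mu_ml, Sigma_ml = R.mean(axis=0), np.cov(R, rowvar=False)

# Robust substitutes: coordinate-wise median and MCD covariance
mu_rob = np.median(R, axis=0)
Sigma_rob = MinCovDet(random_state=0).fit(R).covariance_

for name, (mu, Sig) in {"ML": (mu_ml, Sigma_ml),
                        "robust": (mu_rob, Sigma_rob)}.items():
    w = mv_weights(mu, Sig)
    port = R @ w
    print(f"{name:>6}: Sharpe = {port.mean() / port.std():.3f}, "
          f"weights = {np.round(w, 3)}")
```

Running this over repeated draws lets one compare the two sets of weights for stability and for the resulting Sharpe ratios, mirroring the comparison described in the abstract.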

Relevance:

80.00%

Publisher:

Abstract:

This paper considers two-sided tests for the parameter of an endogenous variable in an instrumental variable (IV) model with heteroskedastic and autocorrelated errors. We develop the finite-sample theory of weighted-average power (WAP) tests with normal errors and a known long-run variance. We introduce two weights which are invariant to orthogonal transformations of the instruments; e.g., changing the order in which the instruments appear. While tests using the MM1 weight can be severely biased, optimal tests based on the MM2 weight are naturally two-sided when errors are homoskedastic. We propose two boundary conditions that yield two-sided tests whether errors are homoskedastic or not. The locally unbiased (LU) condition is related to the power around the null hypothesis and is a weaker requirement than unbiasedness. The strongly unbiased (SU) condition is more restrictive than LU, but the associated WAP tests are easier to implement. Several tests are SU in finite samples or asymptotically, including tests robust to weak IV (such as the Anderson-Rubin, score, conditional quasi-likelihood ratio, and I. Andrews' (2015) PI-CLC tests) and two-sided tests which are optimal when the sample size is large and instruments are strong. We refer to the WAP-SU tests based on our weights as MM1-SU and MM2-SU tests. Dropping the restrictive assumptions of normality and known variance, the theory is shown to remain valid at the cost of asymptotic approximations. The MM2-SU test is optimal under the strong IV asymptotics, and outperforms other existing tests under the weak IV asymptotics.
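Of the weak-IV-robust tests listed above, the Anderson-Rubin test is the simplest to state. A minimal sketch under homoskedastic errors (a simplification; the paper's setting allows heteroskedasticity and autocorrelation): the statistic projects the null-restricted residual y - x*beta0 on the instruments and is F(k, n - k) under the null with normal errors.

```python
import numpy as np
from scipy import stats

def anderson_rubin(y, x, Z, beta0):
    # AR test of H0: beta = beta0 in y = x*beta + u, instruments Z.
    # Homoskedastic version: AR ~ F(k, n - k) under H0.
    n, k = Z.shape
    e = y - x * beta0                                   # null-restricted residual
    Pe = Z @ np.linalg.lstsq(Z, e, rcond=None)[0]       # projection onto col(Z)
    ar = (Pe @ Pe / k) / ((e - Pe) @ (e - Pe) / (n - k))
    return ar, stats.f.sf(ar, k, n - k)                 # statistic, p-value

# Toy data with a weak first stage
rng = np.random.default_rng(1)
n, k, beta = 200, 4, 1.0
Z = rng.standard_normal((n, k))
v = rng.standard_normal(n)
x = Z @ np.full(k, 0.1) + v                             # weak instruments
y = x * beta + 0.8 * v + rng.standard_normal(n)         # endogeneity through v
print(anderson_rubin(y, x, Z, beta0=1.0))
```

Because the statistic never uses the estimated first-stage coefficients, its size is controlled no matter how weak the instruments are.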

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a class of ACD-type models that accommodates overdispersion, intermittent dynamics, multiple regimes, and sign and size asymmetries in financial durations. In particular, our functional coefficient autoregressive conditional duration (FC-ACD) model relies on a smooth-transition autoregressive specification. The motivation lies in the fact that the latter yields a universal approximation if one lets the number of regimes grow without bound. After establishing that the sufficient conditions for strict stationarity do not exclude explosive regimes, we address model identifiability as well as the existence, consistency, and asymptotic normality of the quasi-maximum likelihood (QML) estimator for the FC-ACD model with a fixed number of regimes. In addition, we discuss how to consistently estimate, using a sieve approach, a semiparametric variant of the FC-ACD model that takes the number of regimes to infinity. An empirical illustration indicates that our functional coefficient model is flexible enough to model IBM price durations.
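To fix ideas, the smooth-transition mechanism can be simulated directly. A minimal two-regime sketch, with a logistic transition in the lagged duration and parameter values chosen purely for illustration (the paper's specification and estimates are not reproduced here):

```python
import numpy as np

def simulate_fc_acd(n, omega=(0.10, 0.30), alpha=(0.05, 0.25),
                    beta=(0.90, 0.60), gamma=5.0, c=1.0, seed=0):
    # Two-regime smooth-transition ACD(1,1):
    #   psi_i = w(G) + a(G) * x_{i-1} + b(G) * psi_{i-1},  x_i = psi_i * eps_i,
    # where each coefficient is a G-weighted mix of the two regimes and
    # G = logistic(gamma * (x_{i-1} - c)) depends on the lagged duration.
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    psi, x_prev = 1.0, 1.0
    for i in range(n):
        G = 1.0 / (1.0 + np.exp(-gamma * (x_prev - c)))   # transition weight
        w = (1 - G) * omega[0] + G * omega[1]
        a = (1 - G) * alpha[0] + G * alpha[1]
        b = (1 - G) * beta[0] + G * beta[1]
        psi = w + a * x_prev + b * psi                    # conditional duration
        x[i] = psi * rng.exponential()                    # exponential innovation
        x_prev = x[i]
    return x

durations = simulate_fc_acd(1000)
print(durations[:5], durations.mean())
```

Each coefficient is a G-weighted mix of the two regimes, so the process moves smoothly between a persistent low-activity regime and a more reactive one as durations lengthen.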

Relevance:

30.00%

Publisher:

Abstract:

This paper develops a general method for constructing similar tests based on the conditional distribution of nonpivotal statistics in a simultaneous equations model with normal errors and known reduced-form covariance matrix. The test based on the likelihood ratio statistic is particularly simple and has good power properties. When identification is strong, the power curve of this conditional likelihood ratio test is essentially equal to the power envelope for similar tests. Monte Carlo simulations also suggest that this test dominates the Anderson-Rubin test and the score test. Dropping the restrictive assumption of disturbances normally distributed with known covariance matrix, approximate conditional tests are found that behave well in small samples even when identification is weak.
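For the one-endogenous-regressor case, the conditional likelihood ratio test can be sketched compactly. Assuming the statistic is expressed through the standard quadratic forms Q_S = S'S, Q_T = T'T and Q_ST = S'T (with S and T the usual sufficient statistics under normal errors and known reduced-form covariance), the conditional p-value can be simulated given Q_T; the observed inputs below are hypothetical values for illustration.

```python
import numpy as np

def clr_stat(qs, qt, qst):
    # Likelihood-ratio statistic as a function of the quadratic forms
    # Q_S = S'S, Q_T = T'T, Q_ST = S'T (one endogenous regressor).
    return 0.5 * (qs - qt + np.sqrt((qs + qt) ** 2 - 4 * (qs * qt - qst ** 2)))

def clr_conditional_pvalue(qs, qt, qst, k, n_sim=100_000, seed=0):
    # Under H0, S ~ N(0, I_k) independently of T, and the null distribution
    # of LR given Q_T depends on T only through qt, so we may fix
    # t = (sqrt(qt), 0, ..., 0) and simulate draws of S.
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((n_sim, k))
    qs_sim = (S ** 2).sum(axis=1)
    qst_sim = S[:, 0] * np.sqrt(qt)
    lr_sim = clr_stat(qs_sim, qt, qst_sim)
    return (lr_sim >= clr_stat(qs, qt, qst)).mean()

# Hypothetical observed values with k = 5 instruments
print(clr_conditional_pvalue(qs=9.0, qt=3.0, qst=2.0, k=5))
```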

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose a two-step estimator for panel data models in which a binary covariate is endogenous. In the first stage, a random-effects probit model is estimated with the endogenous binary variable as the dependent variable. Correction terms are then constructed and included in the main regression.
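A simplified cross-sectional analogue illustrates the idea. The paper's setting is a panel with a random-effects probit; a pooled probit and the standard generalized residual are used here as stand-ins, so this is a sketch of the control-function logic rather than the paper's exact correction terms.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 2_000
z = rng.standard_normal(n)                      # exclusion restriction
u = rng.standard_normal(n)
d = (0.8 * z + u > 0).astype(float)             # endogenous binary covariate
y = 1.0 + 0.5 * d + 0.7 * u + rng.standard_normal(n)

# Step 1: probit for the binary endogenous variable (pooled, not random-effects)
Zc = sm.add_constant(z)
probit = sm.Probit(d, Zc).fit(disp=0)
xb = Zc @ probit.params                         # estimated probit index

# Step 2: generalized residual as the correction term in the main regression
corr = d * norm.pdf(xb) / norm.cdf(xb) - (1 - d) * norm.pdf(xb) / norm.sf(xb)
X = sm.add_constant(np.column_stack([d, corr]))
ols = sm.OLS(y, X).fit()
print(ols.params)                               # coefficient on d should be ~0.5
```

Including the correction term absorbs the part of the error that is correlated with the binary covariate, so the second-step coefficient on d is consistent under the probit first-stage assumptions.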

Relevance:

20.00%

Publisher:

Abstract:

This dissertation deals with the problem of making inference when there is weak identification in models of instrumental variables regression. More specifically, we are interested in one-sided hypothesis testing for the coefficient of the endogenous variable when the instruments are weak. The focus is on conditional tests based on the likelihood ratio, score, and Wald statistics. Theoretical and numerical work shows that the conditional t-test based on the two-stage least squares (2SLS) estimator performs well even when the instruments are weakly correlated with the endogenous variable. The conditional approach corrects size uniformly, and when the population F-statistic is as small as two, its power is near the power envelopes for similar and non-similar tests. This finding is surprising given the poor performance of the two-sided conditional t-tests found in Andrews, Moreira and Stock (2007). Given this counterintuitive result, we propose novel two-sided t-tests which are approximately unbiased and can perform as well as the conditional likelihood ratio (CLR) test of Moreira (2003).
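The building block of the conditional t-test is the ordinary 2SLS t-statistic; a minimal sketch of that statistic follows. The conditioning step, which replaces the usual normal critical values with critical values that depend on a statistic measuring instrument strength, is omitted here, so this is only the unconditional ingredient.

```python
import numpy as np

def tsls_t(y, x, Z, beta0=0.0):
    # 2SLS estimate of beta in y = x*beta + u and the t-statistic for
    # H0: beta = beta0 (homoskedastic standard errors).
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first-stage fitted values
    beta = (xhat @ y) / (xhat @ x)
    u = y - x * beta
    se = np.sqrt((u @ u / (len(y) - 1)) / (xhat @ xhat))
    return beta, (beta - beta0) / se

rng = np.random.default_rng(3)
n, k = 500, 3
Z = rng.standard_normal((n, k))
v = rng.standard_normal(n)
x = Z @ np.full(k, 0.15) + v                          # modestly weak first stage
y = 0.6 * v + rng.standard_normal(n)                  # true beta = 0
print(tsls_t(y, x, Z, beta0=0.0))
```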

Relevance:

20.00%

Publisher:

Abstract:

Brazil has been growing at around 1% per capita a year since 1981, which, for a country that is supposed to be catching up, means quasi-stagnation. Four new historical facts explain why growth has been so low since the Real Plan: the reduction of public savings, and three facts that reduce private investment, namely the end of the unlimited supply of labor, a very high interest rate, and the 1990 dismantling of the mechanism that neutralized the Dutch disease, which represented a major competitive disadvantage for the manufacturing industry. New-developmental theory offers an explanation and two solutions for the problem, but does not underestimate the political economy problems involved.

Relevance:

20.00%

Publisher:

Abstract:

We use a Regression Discontinuity Design (RDD) to estimate the causal effect of the Fundo de Participação dos Municípios (FPM) transfer received by a municipality on the characteristics of its neighboring municipalities, covering a variety of areas: public finances, education, health, and electoral outcomes. We exploit the rule that generates exogenous variation in the transfer for municipalities close to the discontinuities in the fund's allocation across population brackets. Our main contribution is to estimate, separately and jointly, the spillover effect and the direct effect of the FPM, considering cases in which both neighboring municipalities, or only one of them, are close to a bracket change. This allows us to better understand the interaction between neighboring municipalities when the probabilities of receiving a federal transfer are correlated. We show that the estimated direct effect of the FPM on local spending falls by about 20% once we control for the neighbor's spillover, which is generally positive, with the exception of spending on health and sanitation. We estimate a positive effect of the transfer on Prova Brasil test scores and school approval rates in neighboring municipalities and in the state-run primary education network. On the other hand, the receipt of FPM by small neighboring municipalities reduces the provision of health goods and services in nearby, larger cities, which may occur because of reduced demand for health services. The worsening of some aggregate health indicators suggests, however, that mayors may face coordination problems in withholding health spending. Indeed, when we control for the margin of victory in municipal elections and consider only neighboring cities with mayors from different parties, the spillover effect is larger in magnitude, indicating that political incentives are important for explaining the underprovision of health services on the one hand, and the increased provision of educational goods on the other. We also find a positive effect of the FPM on votes for the federal government's party in municipal and national elections, and much of this effect is explained by the FPM spillover from neighboring cities, showing that cities that are economically dependent on the federal government become its base of political support. Finally, we find an ambiguous effect of the FPM-driven revenue increase on electoral competition in municipal elections, with a fall in the winner's margin of victory and a reduction in the number of candidates, which may be explained by an increase in the fixed cost of local campaigns.
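The local estimation strategy can be sketched as a standard sharp RDD around one population bracket. A minimal local-linear version follows; the cutoff of 10,188 inhabitants is one of the FPM population brackets, but the data, bandwidth, and jump size below are simulated purely for illustration, and the paper's design with neighbor spillovers is more involved.

```python
import numpy as np
import statsmodels.api as sm

def rdd_local_linear(y, running, cutoff, bandwidth):
    # Sharp RDD: local-linear fit within a bandwidth of the cutoff,
    # with separate slopes on each side; returns the jump and its s.e.
    r = running - cutoff
    keep = np.abs(r) <= bandwidth
    treat = (r >= 0).astype(float)
    X = sm.add_constant(np.column_stack([treat, r, treat * r])[keep])
    fit = sm.OLS(y[keep], X).fit(cov_type="HC1")
    return fit.params[1], fit.bse[1]

# Simulated spending with a jump of 25 at the 10,188-inhabitant bracket
rng = np.random.default_rng(4)
pop = rng.uniform(6_000, 14_000, 3_000)
spend = 100 + 0.01 * pop + 25 * (pop >= 10_188) + rng.normal(0, 10, 3_000)
print(rdd_local_linear(spend, pop, cutoff=10_188, bandwidth=2_000))
```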

Relevance:

20.00%

Publisher:

Abstract:

The synthetic control (SC) method has been recently proposed as an alternative to estimate treatment effects in comparative case studies. The SC relies on the assumption that there is a weighted average of the control units that reconstructs the potential outcome of the treated unit in the absence of treatment. If these weights were known, then one could estimate the counterfactual for the treated unit using this weighted average. With these weights, the SC would provide an unbiased estimator for the treatment effect even if selection into treatment is correlated with the unobserved heterogeneity. In this paper, we revisit the SC method in a linear factor model where the SC weights are considered nuisance parameters that are estimated to construct the SC estimator. We show that, when the number of control units is fixed, the estimated SC weights will generally not converge to the weights that reconstruct the factor loadings of the treated unit, even when the number of pre-intervention periods goes to infinity. As a consequence, the SC estimator will be asymptotically biased if treatment assignment is correlated with the unobserved heterogeneity. The asymptotic bias only vanishes when the variance of the idiosyncratic error goes to zero. We suggest a slight modification to the SC method that guarantees that the SC estimator is asymptotically unbiased and has a lower asymptotic variance than the difference-in-differences (DID) estimator when the DID identification assumption is satisfied. If the DID assumption is not satisfied, then both estimators would be asymptotically biased, and it would not be possible to rank them in terms of their asymptotic bias.
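A minimal sketch of the SC weight estimation the paper analyzes: weights constrained to be nonnegative and to sum to one, chosen to minimize pre-treatment mean squared fit, here via scipy's SLSQP solver on a toy one-factor panel. The paper's proposed modification is not implemented; this only shows the baseline estimator whose asymptotic bias is discussed above.

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(y_treated_pre, Y_controls_pre):
    # Weights on the control units: nonnegative, summing to one, chosen to
    # minimize the pre-treatment mean squared fit to the treated unit.
    T0, J = Y_controls_pre.shape
    loss = lambda w: np.mean((y_treated_pre - Y_controls_pre @ w) ** 2)
    res = minimize(loss, np.full(J, 1.0 / J), method="SLSQP",
                   bounds=[(0.0, 1.0)] * J,
                   constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
    return res.x

# Toy one-factor panel: 10 controls, 40 pre-treatment periods
rng = np.random.default_rng(5)
T0, J = 40, 10
factor = rng.standard_normal(T0)
loadings = rng.uniform(0.5, 1.5, J)
Y0 = np.outer(factor, loadings) + 0.3 * rng.standard_normal((T0, J))
y1 = factor + 0.3 * rng.standard_normal(T0)   # treated unit, factor loading = 1
w = sc_weights(y1, Y0)
print(np.round(w, 3), "pre-fit RMSE:", np.sqrt(np.mean((y1 - Y0 @ w) ** 2)))
```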