9 results for robust maximum likelihood estimation

in the Digital Repository of Fundação Getúlio Vargas (FGV)


Relevance: 100.00%

Publisher:

Abstract:

This dissertation studies inference using generalized method of moments (GMM) estimation based on instruments. The motivation is that, under weak identification of the parameters, traditional inference can be misleading. We therefore review the most common tests designed to overcome this problem and present the frameworks proposed by Moreira (2002), Moreira & Moreira (2013), and Kleibergen (2005). The work then reconciles the statistics these authors use for inference, rewrites the score test proposed in Kleibergen (2005) using the statistics of Moreira & Moreira (2013), and derives the optimal score test statistic using the asymptotic theory in Newey & McFadden (1984). In addition, we show the equivalence between the GMM approach and the one based on a system of equations and likelihood for addressing weak identification.

Relevance: 100.00%

Publisher:

Abstract:

We study the joint determination of the lag length, the dimension of the cointegrating space, and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria that have data-dependent penalties for a lack of parsimony, as well as the traditional ones. We suggest a new procedure that is a hybrid of traditional criteria and criteria with data-dependent penalties. In order to compute the fit of each model, we propose an iterative procedure to compute the maximum likelihood estimates of the parameters of a VAR model with short-run and long-run restrictions. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank, relative to the commonly used procedure of selecting the lag length only and then testing for cointegration.
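As a rough illustration of criterion-based lag selection (the unrestricted case only — the paper's iterative estimator with short-run and long-run restrictions is not reproduced here), a minimal sketch in Python/NumPy; the function names, the BIC variant, and the simulated VAR(1) data are illustrative choices, not the paper's:

```python
import numpy as np

def fit_var_ols(y, p):
    """OLS fit of an unrestricted VAR(p) with intercept.
    Returns the residual covariance and the number of estimated parameters."""
    T, k = y.shape
    # lagged regressor matrix: [1, y_{t-1}, ..., y_{t-p}]
    X = np.hstack([np.ones((T - p, 1))] +
                  [y[p - i - 1:T - i - 1] for i in range(p)])
    Y = y[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    sigma = resid.T @ resid / (T - p)
    return sigma, B.size

def select_lag(y, max_p=4):
    """Choose the lag length by the multivariate BIC (Schwarz criterion)."""
    best_p, best_bic = None, np.inf
    for p in range(1, max_p + 1):
        sigma, n_params = fit_var_ols(y, p)
        T_eff = y.shape[0] - p
        bic = np.log(np.linalg.det(sigma)) + n_params * np.log(T_eff) / T_eff
        if bic < best_bic:
            best_p, best_bic = p, bic
    return best_p

# simulate a bivariate VAR(1) as a sanity check
rng = np.random.default_rng(0)
T, k = 400, 2
y = np.zeros((T, k))
A = np.array([[0.5, 0.1], [0.0, 0.4]])  # true VAR(1) dynamics
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(size=k)
print(select_lag(y))  # a VAR(1) DGP should typically yield p = 1
```

With data-dependent penalties, the `np.log(T_eff)` factor above would be replaced by a sample-driven term; the comparison loop itself is unchanged.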

Relevance: 100.00%

Publisher:

Abstract:

In this work we investigate the small-sample properties and the robustness of parameter estimates in DSGE models. Taking the Smets and Wouters (2007) model as a baseline, we evaluate the performance of two estimation procedures: the Simulated Method of Moments (SMM) and Maximum Likelihood (ML). We examine the empirical distribution of the parameter estimates and its implications for impulse-response and variance-decomposition analyses under both correct specification and misspecification. Our results point to poor performance of SMM and to some bias patterns in the impulse-response and variance-decomposition analyses based on ML estimates in the misspecification cases considered.

Relevance: 100.00%

Publisher:

Abstract:

This work examined the characteristics of stock portfolios optimized under the mean-variance criterion and built from robust estimates of risk and return. The motivation is the typical distribution of financial assets, which exhibits outliers and more kurtosis than the normal distribution. The portfolios were compared in terms of their stability, variability, and Sharpe ratios. The overall result shows that portfolios built from robust estimates of risk and return improve in stability and variability; this improvement, however, is insufficient to distinguish their Sharpe ratios from those of portfolios built with maximum likelihood estimates of risk and return.
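A minimal sketch of the kind of comparison described above, assuming a simple MAD/winsorization-based robust covariance (an illustrative choice — the dissertation's robust estimators are not specified here) plugged into the closed-form minimum-variance portfolio:

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum-variance portfolio: w = S^{-1} 1 / (1' S^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def robust_cov(returns, c=1.4826):
    """Illustrative robust covariance: MAD-based scales (consistent at the
    normal via c) times the correlation of winsorized returns."""
    med = np.median(returns, axis=0)
    mad = c * np.median(np.abs(returns - med), axis=0)
    z = np.clip((returns - med) / mad, -2.5, 2.5)  # damp outliers
    corr = np.corrcoef(z, rowvar=False)
    return corr * np.outer(mad, mad)

rng = np.random.default_rng(1)
r = rng.normal(0.001, 0.02, size=(500, 3))  # fat-free toy returns
r[::50] += 0.15                             # inject outliers every 50th day
w_classic = min_variance_weights(np.cov(r, rowvar=False))
w_robust = min_variance_weights(robust_cov(r))
print(w_classic.round(3), w_robust.round(3))
```

The point of the comparison is that the outlier rows inflate the sample covariance but barely move the MAD-based one, so the two weight vectors can differ even on otherwise identical data.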

Relevance: 100.00%

Publisher:

Abstract:

This paper uses an output-oriented Data Envelopment Analysis (DEA) measure of technical efficiency to assess the technical efficiency of the Brazilian banking system. Four estimation approaches are compared in order to assess the significance of factors affecting inefficiency: nonparametric Analysis of Covariance, maximum likelihood using a family of exponential distributions, maximum likelihood using a family of truncated normal distributions, and the normal Tobit model. The sole focus of the paper is a combined measure of output, and the data analyzed refer to the year 2001. The factors of interest, likely to affect efficiency, are bank nature (multiple and commercial), bank type (credit, business, bursary and retail), bank size (large, medium, small and micro), bank control (private and public), bank origin (domestic and foreign), and non-performing loans; the latter is a measure of bank risk. All quantitative variables, including non-performing loans, are measured on a per-employee basis. The best fits to the data are provided by the exponential family and the nonparametric Analysis of Covariance. The significance of a factor, however, varies according to the model fitted, although there is some agreement between the best models. A highly significant association in all fitted models is observed only for non-performing loans. The nonparametric Analysis of Covariance is more consistent with the inefficiency median responses observed for the qualitative factors. The findings reinforce the significant association between the level of bank inefficiency, measured by DEA residuals, and the risk of bank failure.
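The output-oriented DEA score underlying such an analysis can be sketched as a linear program. The implementation below is a textbook constant-returns (CCR) formulation, not necessarily the paper's exact model, and the two-bank example in the usage note is purely illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y, j0):
    """Output-oriented CCR efficiency of unit j0:
    maximize phi s.t. sum_j lam_j x_j <= x_{j0},
                      sum_j lam_j y_j >= phi * y_{j0}, lam >= 0.
    X: (n units, m inputs); Y: (n units, s outputs). Returns phi >= 1,
    where phi = 1 means the unit is technically efficient."""
    n, m = X.shape
    s = Y.shape[1]
    # decision vector: [phi, lam_1..lam_n]; linprog minimizes, so use -phi
    c = np.zeros(n + 1)
    c[0] = -1.0
    A_in = np.hstack([np.zeros((m, 1)), X.T])          # inputs:  X' lam <= x_{j0}
    A_out = np.hstack([Y[j0].reshape(-1, 1), -Y.T])    # outputs: phi*y_{j0} - Y' lam <= 0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([X[j0], np.zeros(s)]),
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]
```

For example, with two banks using one unit of input each, producing 2 and 1 units of output respectively, the first gets phi = 1 (efficient) and the second phi = 2 (its output could double at the efficient frontier).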

Relevance: 100.00%

Publisher:

Abstract:

The most widely used updating rule for non-additive probabilities is the Dempster-Shafer rule. Schmeidler and Gilboa have developed a model of decision making under uncertainty based on non-additive probabilities, and in their paper “Updating Ambiguous Beliefs” they justify the Dempster-Shafer rule based on a maximum likelihood procedure. This note shows, in the context of Schmeidler-Gilboa preferences under uncertainty, that the Dempster-Shafer rule is in general not ex-ante optimal. This contrasts with Brown’s result that Bayes’ rule is ex-ante optimal for standard Savage preferences with additive probabilities.

Relevance: 100.00%

Publisher:

Abstract:

Local provision of public services has the positive effect of increasing efficiency, because each locality has idiosyncrasies that determine a particular demand for public services. This dissertation addresses different aspects of the local demand for public goods and services and their relationship with political incentives. The text is divided into three essays. The first essay aims to test the existence of yardstick competition in education spending using panel data from Brazilian municipalities. The essay estimates two-regime spatial Durbin models with time and spatial fixed effects by maximum likelihood, where the regimes represent different electoral and educational accountability institutional settings. First, it investigates whether lame-duck incumbents tend to engage in less strategic interaction as a result of the impossibility of reelection, which lowers their incentive to signal their type (good or bad) to voters by mimicking their neighbors’ expenditures. Additionally, it evaluates whether the lack of electoral support faced by minority governments causes incumbents to mimic their neighbors’ spending to a greater extent to increase their odds of reelection. Next, the essay estimates the effects of the institutional change introduced by the disclosure in April 2007 of the Basic Education Development Index (known as IDEB) and its goals on strategic interaction at the municipality level. This institutional change potentially increased the incentives for incumbents to follow national best practices in an attempt to signal their type to voters, thus reducing the importance of local information spillover. The same model is also tested using school inputs believed to improve students’ performance in place of education spending. The results show evidence of yardstick competition in education spending.
Spatial autocorrelation is lower among lame ducks and higher among incumbents with minority support (a smaller vote margin). In addition, the institutional change introduced by the IDEB reduced the spatial interaction in education spending and input-setting, thus diminishing the importance of local information spillover. The second essay investigates the role played by the geographic distance between the poor and non-poor in the local demand for income redistribution. In particular, the study provides an empirical test of the geographically limited altruism model proposed in Pauly (1973), incorporating the possibility of participation costs associated with the provision of transfers (van de Walle, 1998). First, the discussion is motivated by allowing for an “iceberg cost” of participation in the programs for poor individuals in Pauly’s original model. Next, using data from the 2000 Brazilian Census and a panel of municipalities based on the National Household Sample Survey (PNAD) from 2001 to 2007, all the distance-related explanatory variables indicate that increased proximity between poor and non-poor is associated with better targeting of the programs (demand for redistribution). For instance, a 1-hour increase in the time the poor spend commuting reduces targeting by 3.158 percentage points. This result is similar to that of Ashworth, Heyndels and Smolders (2002) but is definitely not due to program leakages. To empirically disentangle participation costs from spatially restricted altruism effects, an additional test is conducted using unique panel data based on the 2004 and 2006 PNAD, which record the number of benefits and the average benefit value received by beneficiaries. The estimates suggest that both cost and altruism play important roles in determining targeting in Brazil and, thus, the demand for redistribution.
Lastly, the results indicate that ‘size matters’; i.e., the budget for redistribution has a positive impact on targeting. The third essay aims to empirically test the validity of the median voter model for the Brazilian case. Information on municipalities is obtained from the Population Census and the Brazilian Supreme Electoral Court for the year 2000. First, the median voter’s demand for local public services is estimated. The bundles of services offered by reelection candidates are identified with the expenditures realized during incumbents’ first term in office. The assumption that candidates have perfect information about the median demand is relaxed, and a weaker hypothesis of rational expectations is imposed. Incumbents thus make mistakes about the median demand, referred to as misperception errors. At a given point in time, an incumbent can therefore provide a bundle (given by the amount of expenditure per capita) that differs from the median voter’s demand for public services by a multiplicative error term, which is included in the residuals of the demand equation. Next, the impact of the absolute value of this misperception error on incumbents’ electoral performance is estimated using a selection model. The results suggest that the median voter model is valid for the case of Brazilian municipalities.

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we propose a class of ACD-type models that accommodates overdispersion, intermittent dynamics, multiple regimes, and sign and size asymmetries in financial durations. In particular, our functional coefficient autoregressive conditional duration (FC-ACD) model relies on a smooth-transition autoregressive specification. The motivation lies in the fact that the latter yields a universal approximation if one lets the number of regimes grow without bound. After establishing that the sufficient conditions for strict stationarity do not exclude explosive regimes, we address model identifiability as well as the existence, consistency, and asymptotic normality of the quasi-maximum likelihood (QML) estimator for the FC-ACD model with a fixed number of regimes. In addition, we discuss how to consistently estimate, using a sieve approach, a semiparametric variant of the FC-ACD model that takes the number of regimes to infinity. An empirical illustration indicates that our functional coefficient model is flexible enough to model IBM price durations.
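A hedged sketch of the smooth-transition mechanism behind a two-regime duration recursion of this type; the function names, parameter values, and the specific recursion are illustrative, not the paper's specification:

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Smooth-transition weight G(s; gamma, c) in (0, 1); gamma controls
    how sharply the model switches between regimes at location c."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def fc_acd_psi(durations, omega, alpha, beta, gamma, c):
    """Conditional expected durations for a two-regime smooth-transition
    ACD sketch: the intercept and lag coefficient are convex combinations
    of regime pairs, weighted by G evaluated at the previous duration."""
    psi = np.empty_like(durations, dtype=float)
    psi[0] = durations.mean()  # simple initialization
    for i in range(1, len(durations)):
        g = logistic_transition(durations[i - 1], gamma, c)
        om = (1 - g) * omega[0] + g * omega[1]
        al = (1 - g) * alpha[0] + g * alpha[1]
        psi[i] = om + al * durations[i - 1] + beta * psi[i - 1]
    return psi

durs = np.array([0.5, 1.5, 0.8, 2.0, 1.0])
psi = fc_acd_psi(durs, omega=(0.1, 0.3), alpha=(0.2, 0.6),
                 beta=0.3, gamma=2.0, c=1.0)
```

As gamma grows, G approaches a step function and the model collapses to a threshold ACD; adding regimes means adding more such transition terms, which is what drives the universal-approximation motivation above.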

Relevance: 100.00%

Publisher:

Abstract:

Two classes of models seek to explain firms’ price-adjustment patterns: time-dependent and state-dependent models. The goal of this work is to gather empirical evidence to distinguish between the models, i.e., to identify how firms actually set prices. To this end, the large exchange-rate devaluation of 1999 was chosen as the main tool and setting for the analysis. The key hypothesis is that the exchange-rate shock significantly affects the costs of some industries, in some cases inducing them to change their prices after the shock. From a large micro-level database of prices that make up the CPI, some important estimates were obtained, such as the probability and the average magnitude of price changes. The magnitude is given by a simple average, while the probability is estimated by maximum likelihood. The results indicate pricing behavior similar to that proposed by state-dependent models.
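The probability estimate mentioned above can be illustrated with the simplest case: under an i.i.d. Bernoulli model for “the price changed this period”, the maximum likelihood estimate is the sample frequency. The data and names below are illustrative, not from the study:

```python
import numpy as np

def loglik(p, y):
    """Bernoulli log-likelihood of change indicators y given probability p."""
    y = np.asarray(y, dtype=float)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def price_change_mle(y):
    """MLE of the per-period probability of a price change: the sample
    frequency maximizes sum(y*log(p) + (1-y)*log(1-p))."""
    return float(np.mean(y))

# illustrative data: 1 = the item's price changed that period
obs = [0, 0, 1, 0, 0, 0, 1, 0, 0, 1]
p_hat = price_change_mle(obs)  # 0.3
# the frequency beats any nearby candidate probability
assert all(loglik(p_hat, obs) >= loglik(p, obs)
           for p in (0.1, 0.2, 0.4, 0.5))
```

In practice the likelihood is richer (heterogeneous items, censored spells), but the frequency-as-MLE logic is the base case the estimation builds on.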