13 results for sparse Bayesian regression
in the Digital Repository of Fundação Getúlio Vargas (FGV)
Abstract:
We transform a non-cooperative game into a Bayesian decision problem for each player, where the uncertainty faced by a player is the strategy choices of the other players, the priors of the other players on the choice of other players, the priors over priors, and so on. We provide a complete characterization between the extent of knowledge about the rationality of players and their ability to successfully eliminate strategies which are not best responses. This paper therefore provides the informational foundations of iteratively undominated strategies and rationalizable strategic behavior (Bernheim (1984) and Pearce (1984)). Moreover, sufficient conditions are also found for Nash equilibrium behavior. We also provide Aumann's (1985) results on correlated equilibria.
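The iterative elimination the abstract refers to can be sketched directly. The payoff matrices below (a prisoner's dilemma) and the helper `eliminate_dominated` are illustrative assumptions, not taken from the paper.

```python
# Hypothetical illustration: iterated elimination of strictly dominated
# pure strategies in a two-player finite game. Payoff matrices are made up.

def eliminate_dominated(payoffs_a, payoffs_b):
    """Return the strategy indices for each player that survive iterated
    elimination. payoffs_a[i][j] is player A's payoff when A plays i and
    B plays j; payoffs_b[i][j] is player B's payoff at the same profile."""
    rows = list(range(len(payoffs_a)))
    cols = list(range(len(payoffs_a[0])))
    changed = True
    while changed:
        changed = False
        # A strategy is strictly dominated if some other surviving strategy
        # pays strictly more against every surviving opponent strategy.
        for i in list(rows):
            if any(all(payoffs_a[k][j] > payoffs_a[i][j] for j in cols)
                   for k in rows if k != i):
                rows.remove(i)
                changed = True
        for j in list(cols):
            if any(all(payoffs_b[i][k] > payoffs_b[i][j] for i in rows)
                   for k in cols if k != j):
                cols.remove(j)
                changed = True
    return rows, cols

# Prisoner's dilemma: strategy 1 (defect) strictly dominates 0 (cooperate).
a = [[3, 0], [5, 1]]
b = [[3, 5], [0, 1]]
print(eliminate_dominated(a, b))  # ([1], [1]) — only mutual defection survives
```

Here a single round of elimination suffices; in general the loop iterates until no strategy for either player is dominated, which is the rationalizability benchmark the paper characterizes informationally.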
Abstract:
We exploit a discontinuity in Brazilian municipal election rules to investigate whether political competition has a causal impact on policy choices. In municipalities with fewer than 200,000 voters, mayors are elected by plurality; in municipalities with more than 200,000 voters, a run-off election takes place between the top two candidates if neither achieves a majority of the votes. In a first stage, we show that the possibility of a runoff increases political competition. In a second stage, we use the discontinuity as a source of exogenous variation to infer causality from political competition to fiscal policy. Our second-stage results suggest that political competition induces more investment and less current spending, particularly personnel expenses. Furthermore, the impact of political competition is larger when incumbents can run for reelection, suggesting that incentives matter insofar as incumbents can themselves remain in office.
Abstract:
Emerging economies face substantial credit constraints compared with developed economies, yet dynamic stochastic general equilibrium (DSGE) models designed for emerging economies have yet to advance this discussion. We propose a DSGE model intended to represent an emerging economy with a banking sector, based on Gerali et al. (2010). Our contribution is to treat a share of expected income as collateral for household loans. We estimate the proposed model for Brazil using Bayesian estimation and find that economies in which households face this collateral constraint tend to feel the impact of monetary shocks more quickly, owing to the banking sector's exposure to changes in expected wages.
Abstract:
This dissertation deals with the problem of making inference when there is weak identification in instrumental variables regression models. More specifically, we are interested in one-sided hypothesis testing for the coefficient of the endogenous variable when the instruments are weak. The focus is on conditional tests based on the likelihood-ratio, score, and Wald statistics. Theoretical and numerical work shows that the conditional t-test based on the two-stage least squares (2SLS) estimator performs well even when instruments are weakly correlated with the endogenous variable. The conditional approach corrects its size uniformly, and when the population F-statistic is as small as two, its power is near the power envelopes for similar and non-similar tests. This finding is surprising considering the poor performance of the two-sided conditional t-tests found in Andrews, Moreira and Stock (2007). Given this counterintuitive result, we propose novel two-sided t-tests which are approximately unbiased and can perform as well as the conditional likelihood ratio (CLR) test of Moreira (2003).
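As a minimal sketch of the estimator underlying the conditional t-test (not the conditional testing procedure itself), the just-identified 2SLS slope can be computed on a tiny made-up dataset where the regressor shares its error with the outcome; all data and function names below are hypothetical.

```python
# Illustrative sketch: OLS vs. 2SLS on a made-up endogenous-regressor dataset.

def demean(v):
    m = sum(v) / len(v)
    return [vi - m for vi in v]

def ols_slope(x, y):
    xd, yd = demean(x), demean(y)
    return sum(a * b for a, b in zip(xd, yd)) / sum(a * a for a in xd)

def tsls_slope(z, x, y):
    # Just-identified 2SLS: beta = cov(z, y) / cov(z, x).
    zd, xd, yd = demean(z), demean(x), demean(y)
    return sum(a * b for a, b in zip(zd, yd)) / sum(a * b for a, b in zip(zd, xd))

z = [0, 1, 2, 3]    # instrument, orthogonal to u by construction
u = [1, -1, -1, 1]  # structural error
x = [zi + ui for zi, ui in zip(z, u)]  # endogenous regressor
y = [xi + ui for xi, ui in zip(x, u)]  # true slope is 1

print(ols_slope(x, y))      # biased upward: 13/9
print(tsls_slope(z, x, y))  # recovers 1.0
```

When the correlation between z and x is weak, the denominator of the 2SLS slope is near zero and conventional t-based inference breaks down; the conditional approach the dissertation studies is designed for exactly that regime.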
Abstract:
This paper presents new methodology for making Bayesian inference about dynamic models for exponential family observations. The approach is simulation-based and makes use of Markov chain Monte Carlo techniques. A Metropolis-Hastings algorithm is combined with the Gibbs sampler in repeated use of an adjusted version of normal dynamic linear models. Different alternative schemes are derived and compared. The approach is fully Bayesian in obtaining posterior samples for state parameters and unknown hyperparameters. Illustrations to real data sets with sparse counts and missing values are presented. Extensions to accommodate general distributions for observations and disturbances, intervention, non-linear models and multivariate time series are outlined.
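A hedged sketch of the generic Metropolis-Hastings step that such schemes combine with Gibbs sampling, here targeting a standard normal density rather than the paper's adjusted dynamic linear models; the target, step size, and function names are assumptions for illustration.

```python
# Random-walk Metropolis-Hastings on a toy target (standard normal).
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Draw n_samples from log_target (up to a constant) via a symmetric
    random-walk proposal; returns the full chain including rejections."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Standard normal target: log density up to an additive constant.
draws = metropolis_hastings(lambda v: -0.5 * v * v, x0=0.0, n_samples=20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(round(mean, 2), round(var, 2))  # close to 0 and 1
```

In a Metropolis-within-Gibbs scheme of the kind the paper develops, a step like this replaces the exact conditional draw for any block (e.g., the state parameters) whose full conditional cannot be sampled directly.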
Abstract:
This paper provides a systematic and unified treatment of the developments in the area of kernel estimation in econometrics and statistics. Both the estimation and hypothesis testing issues are discussed for the nonparametric and semiparametric regression models. A discussion on the choice of window width is also presented.
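The basic object surveyed can be illustrated with the Nadaraya-Watson kernel regression estimator, where the window width h governs the smoothing trade-off the paper discusses; the data and bandwidth values below are made up.

```python
# Nadaraya-Watson kernel regression with a Gaussian kernel.
import math

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def nadaraya_watson(x0, xs, ys, h):
    """Locally weighted average of ys with weights K((x0 - x_i) / h)."""
    weights = [gaussian_kernel((x0 - xi) / h) for xi in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 2.0, 3.0, 4.0]  # exactly linear, m(x) = x

# Small h: the fit tracks the local data (here exactly 2.0 by symmetry).
print(nadaraya_watson(2.0, xs, ys, h=0.1))
# Large h: oversmoothing pulls the fit at x = 1 toward the overall mean 2.
print(nadaraya_watson(1.0, xs, ys, h=5.0))
```

The two prints illustrate the bias-variance trade-off behind window-width choice: small h gives low bias but noisy fits, large h smooths toward the sample mean.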
Abstract:
The aim of this paper is to analyze extremal events using Generalized Pareto Distributions (GPD), considering explicitly the uncertainty about the threshold. Current practice determines this quantity empirically and proceeds by estimating the GPD parameters based on data beyond it, discarding all the information available below the threshold. We introduce a mixture model that combines a parametric form for the center and a GPD for the tail of the distribution, and uses all observations for inference about the unknown parameters of both components, the threshold included. Prior distributions for the parameters are indirectly obtained through elicitation of expert quantiles. Posterior inference is available through Markov chain Monte Carlo (MCMC) methods. Simulations are carried out in order to analyze the performance of our proposed model under a wide range of scenarios. These scenarios approximate realistic situations found in the literature. We also apply the proposed model to a real dataset, the Nasdaq 100, a financial market index that presents many extreme events. Important issues such as predictive analysis and model selection are considered, along with possible modeling extensions.
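A small sketch of the GPD tail machinery, assuming shape ξ ≠ 0: the survival function of an excess over the threshold and its inverse (the excess quantile). The threshold and parameter values are invented for illustration, not estimates from the paper.

```python
# GPD tail beyond a threshold u: for an excess y = x - u, shape xi != 0
# and scale sigma, the survival function is (1 + xi * y / sigma) ** (-1 / xi).

def gpd_survival(y, xi, sigma):
    return (1.0 + xi * y / sigma) ** (-1.0 / xi)

def gpd_quantile(p, xi, sigma):
    # Inverse of the survival function: excess exceeded with probability p.
    return (sigma / xi) * (p ** (-xi) - 1.0)

threshold = 100.0    # in practice fixed empirically, or inferred as in the paper
xi, sigma = 0.2, 5.0  # made-up shape and scale

# Level exceeded with probability 0.01 among threshold exceedances:
level = threshold + gpd_quantile(0.01, xi, sigma)
print(round(level, 2))
```

The paper's point is that `threshold` should not be fixed by eye: treating it as an unknown parameter of the center-plus-tail mixture lets the data below it inform the fit as well.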
Abstract:
Decision-making models need to reflect aspects of human psychology. With this aim, this work builds on the Sparse Distributed Memory (SDM), a psychologically and neuroscientifically plausible model of human memory published by Pentti Kanerva in 1988. Kanerva's model has a critical point: a memory item nearer than this point is quickly found, while items beyond the critical point are not. Kanerva calculated this point for a special case with a select set of (fixed) parameters. In this work we extend the knowledge of this critical point through computational simulations, analyzing the behavior of this "Critical Distance" under different scenarios: different dimensions; different numbers of items stored in memory; and different numbers of times the item is stored. We also derive a function that, when minimized, determines the value of the "Critical Distance" according to the state of the memory. A secondary goal of this work is to present the SDM in a simple and intuitive way, so that researchers from other fields can imagine how it might help them understand and solve their own problems.
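A simplified, hypothetical sketch of an SDM in the spirit described above: random hard locations, activation of locations within a Hamming radius of the cue, counter-based writes, and majority-rule reads. The dimension, location count, and radius below are arbitrary choices, not the parameter set the dissertation analyzes.

```python
# Toy Sparse Distributed Memory with binary addresses and counter storage.
import random

class SDM:
    def __init__(self, dim=256, n_locations=2000, radius=112, seed=0):
        rng = random.Random(seed)
        self.dim, self.radius = dim, radius
        self.addresses = [[rng.randint(0, 1) for _ in range(dim)]
                          for _ in range(n_locations)]
        self.counters = [[0] * dim for _ in range(n_locations)]

    def _active(self, address):
        # Activate every hard location within the Hamming radius of the cue.
        return [i for i, a in enumerate(self.addresses)
                if sum(x != y for x, y in zip(a, address)) <= self.radius]

    def write(self, address, data):
        for i in self._active(address):
            for j, bit in enumerate(data):
                self.counters[i][j] += 1 if bit else -1

    def read(self, address):
        # Majority vote over the counters of all activated locations.
        active = self._active(address)
        return [int(sum(self.counters[i][j] for i in active) > 0)
                for j in range(self.dim)]

rng = random.Random(1)
mem = SDM()
items = [[rng.randint(0, 1) for _ in range(256)] for _ in range(10)]
for item in items:
    mem.write(item, item)  # autoassociative storage
recalled = mem.read(items[0])
agreement = sum(a == b for a, b in zip(recalled, items[0])) / 256
print(agreement)  # near 1.0 for a cue within the critical distance
```

The critical-distance experiments the work describes amount to repeating this read with increasingly noisy cues (bits of `items[0]` flipped) and locating the cue distance at which iterated reads stop converging to the stored item.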
Abstract:
This paper considers two-sided tests for the parameter of an endogenous variable in an instrumental variable (IV) model with heteroskedastic and autocorrelated errors. We develop the finite-sample theory of weighted-average power (WAP) tests with normal errors and a known long-run variance. We introduce two weights which are invariant to orthogonal transformations of the instruments; e.g., changing the order in which the instruments appear. While tests using the MM1 weight can be severely biased, optimal tests based on the MM2 weight are naturally two-sided when errors are homoskedastic. We propose two boundary conditions that yield two-sided tests whether errors are homoskedastic or not. The locally unbiased (LU) condition is related to the power around the null hypothesis and is a weaker requirement than unbiasedness. The strongly unbiased (SU) condition is more restrictive than LU, but the associated WAP tests are easier to implement. Several tests are SU in finite samples or asymptotically, including tests robust to weak IV (such as the Anderson-Rubin, score, conditional quasi-likelihood ratio, and I. Andrews' (2015) PI-CLC tests) and two-sided tests which are optimal when the sample size is large and instruments are strong. We refer to the WAP-SU tests based on our weights as MM1-SU and MM2-SU tests. Dropping the restrictive assumptions of normality and known variance, the theory is shown to remain valid at the cost of asymptotic approximations. The MM2-SU test is optimal under the strong IV asymptotics, and outperforms other existing tests under the weak IV asymptotics.
Abstract:
This work mines the information collected in the 2009-2012 admission exams (vestibular) for the undergraduate business administration program at FGV-EAESP to estimate classifiers capable of computing the probability that a new student will perform well. The KDD (Knowledge Discovery in Databases) process developed by Fayyad et al. (1996a) is the basis of the adopted methodology, and the classifiers are estimated using two mathematical tools. The first is logistic regression, widely used by financial institutions to assess whether a client will be able to honor their payments; the second is the Bayesian network, from the field of artificial intelligence. This study shows that the two models have the same discriminatory power, producing similar results. Moreover, the variables that influence the probability of a student performing well are their age in the year of admission, the number of times they took the FGV-EAESP admission exam before being approved, the region of Brazil they come from, and the scores on the mathematics (phase 1 and phase 2), English, humanities, and essay exams. The parents' level of education and the student's degree of determination to study at FGV-EAESP apparently do not influence this probability.
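The first of the two tools can be sketched as a logistic regression fitted by gradient descent; the single feature, outcomes, and hyperparameters below are made-up stand-ins for the admission-exam variables, not the study's data.

```python
# Logistic regression by gradient descent on a made-up one-feature dataset.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit slope w and intercept b by gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # gradient of log-loss wrt the logit
            gw += err * x
            gb += err
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

# One made-up feature (say, a standardized math score) and a binary outcome.
xs = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_logistic(xs, ys)
print(round(sigmoid(w * 2.0 + b), 2))   # high predicted probability
print(round(sigmoid(w * -2.0 + b), 2))  # low predicted probability
```

The fitted model outputs a probability of good performance for each applicant, which is the quantity the study's classifiers are built to estimate; the Bayesian-network alternative would produce the same kind of probabilistic score from a learned dependency graph.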
Abstract:
Economic performance increasingly relies on the global economic environment due to the growing importance of trade and financial links among countries. The literature on growth spillovers shows various gains obtained from this interaction. This work aims at analyzing the possible effects of a potential economic growth downturn in China, Germany, and the United States on the growth of other economies. We use a global autoregressive regression approach to assess interdependence among countries. Two types of phenomena are simulated. The first is a one-time shock that hits these economies. Our simulations use a large shock of -2.5 standard deviations, a figure very similar to what we saw back in the 2008 crisis. The second experiment simulates the effect of a hypothetical downturn of the aforementioned economies. Our results suggest that the United States plays the role of a global economy, affecting countries across the globe, whereas Germany and China play a regional role.
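A stylized stand-in for the first simulation exercise (not the paper's global model): a two-economy VAR(1) in which a one-time -2.5 standard deviation shock to economy A spills over to economy B through a made-up coefficient matrix.

```python
# Impulse response of a two-economy VAR(1) to a one-time -2.5 sd shock.

A = [[0.5, 0.1],
     [0.3, 0.4]]  # hypothetical dynamics: B loads on A's lagged growth

def step(state):
    """One period of the VAR(1): next state = A @ state."""
    return [sum(a * s for a, s in zip(row, state)) for row in A]

state = [-2.5, 0.0]  # shock of -2.5 standard deviations to economy A only
path = [state]
for _ in range(8):
    state = step(state)
    path.append(state)

for t, (ga, gb) in enumerate(path):
    print(t, round(ga, 3), round(gb, 3))  # shock decays and spills over to B
```

Because the coefficient matrix is stable (its eigenvalues lie inside the unit circle), the shock dies out over time while transmitting to the second economy, which is the qualitative pattern the spillover simulations trace out across many countries.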