145 results for bayesian learning
Abstract:
In many areas of economics there is a growing interest in how expertise and preferences drive individual and group decision making under uncertainty. Increasingly, we wish to estimate such models to quantify which of these drive decision making. In this paper we propose a new channel through which we can empirically identify expertise and preference parameters by using variation in decisions over heterogeneous priors. Relative to existing estimation approaches, our "Prior-Based Identification" extends the possible environments which can be estimated, and also substantially improves the accuracy and precision of estimates in those environments which can be estimated using existing methods.
Abstract:
This paper fills a gap in the existing literature on least squares learning in linear rational expectations models by studying a setup in which agents learn by fitting ARMA models to a subset of the state variables. This is a natural specification in models with private information because, in the presence of hidden state variables, agents have an incentive to condition forecasts on the infinite past record of observables. We study a particular setting in which it suffices for agents to fit a first-order ARMA process, which preserves the tractability of a finite-dimensional parameterization while permitting conditioning on the infinite past record. We describe how previous results (Marcet and Sargent [1989a, 1989b]) can be adapted to handle the convergence of estimators of an ARMA process in our self-referential environment. We also study "rates" of convergence analytically and via computer simulation.
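A minimal Python sketch of this kind of setup is given below; the data-generating process, the feedback parameter lam, and the decreasing-gain recursive update are illustrative assumptions of mine, not the paper's specification. Agents fit an ARMA(1,1) perceived law of motion to a single observable, and their one-step-ahead forecast feeds back into the series they are learning about (a self-referential environment).

import numpy as np

rng = np.random.default_rng(0)
T, lam, rho, sigma = 20_000, 0.5, 0.3, 1.0   # horizon, feedback strength, MA coefficient, shock s.d.

phi, theta = 0.0, 0.0            # current ARMA(1,1) coefficient estimates
R = np.eye(2)                    # second-moment matrix used by recursive least squares
y_prev, e_prev, u_prev = 0.0, 0.0, 0.0

for t in range(1, T):
    x = np.array([y_prev, e_prev])             # regressors: lagged y and lagged ARMA residual
    forecast = phi * y_prev + theta * e_prev   # forecast of y_t under the perceived ARMA law
    u = sigma * rng.standard_normal()
    y = lam * forecast + u + rho * u_prev      # actual law of motion depends on the forecast
    e = y - forecast                           # forecast error becomes the new ARMA residual
    gain = 1.0 / (t + 10)                      # decreasing gain, as in least squares learning
    R = R + gain * (np.outer(x, x) - R)
    phi, theta = np.array([phi, theta]) + gain * np.linalg.solve(R, x) * e
    y_prev, e_prev, u_prev = y, e, u

print(f"learned ARMA(1,1) coefficients: phi = {phi:.3f}, theta = {theta:.3f}")

The decreasing gain makes the update a stochastic-approximation scheme of the Marcet-Sargent type, which is the sort of recursion whose convergence the paper analyzes.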
Abstract:
In this work I study the stability of the dynamics generated by adaptive learning processes in intertemporal economies with lagged variables. I prove that determinacy of the steady state is a necessary condition for the convergence of the learning dynamics, and I show that the converse is not true, characterizing the economies where convergence holds. In the case of existence of cycles, I show that there is not, in general, a relationship between determinacy and convergence of the learning process to the cycle. I also analyze the expectational stability of these equilibria.
Abstract:
Utilizing the well-known Ultimatum Game, this note presents the following phenomenon. If we start with simple stimulus-response agents, learning through naive reinforcement, and then grant them some introspective capabilities, we get outcomes that are not closer but farther away from the fully introspective game-theoretic approach. The cause of this is the following: there is an asymmetry in the information that agents can deduce from their experience, and this leads to a bias in their learning process.
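As a concrete, hedged illustration of the naive reinforcement baseline only (the introspective extension is omitted, and the payoff and reinforcement rules are my own assumptions, not the note's exact specification), here is a minimal Roth-Erev-style simulation of the Ultimatum Game in Python:

import numpy as np

rng = np.random.default_rng(1)
PIE = 10                                    # size of the pie to be divided
prop_offer = np.ones(PIE + 1)               # proposer propensities over offers 0..10
prop_thresh = np.ones(PIE + 1)              # responder propensities over acceptance thresholds 0..10

for _ in range(50_000):
    offer = rng.choice(PIE + 1, p=prop_offer / prop_offer.sum())
    thresh = rng.choice(PIE + 1, p=prop_thresh / prop_thresh.sum())
    if offer >= thresh:                     # responder accepts the split
        prop_offer[offer] += PIE - offer    # each side reinforces its action by its realized payoff
        prop_thresh[thresh] += offer
    # a rejection pays both sides zero, so naive reinforcement leaves propensities unchanged

print("most reinforced offer:", int(np.argmax(prop_offer)),
      "most reinforced threshold:", int(np.argmax(prop_thresh)))

Runs of this kind typically settle on offers well above the minimal amount predicted by the subgame-perfect benchmark, which is the sort of gap the note then examines when introspection is added.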
Abstract:
We formulate an evolutionary learning process in the spirit of Young (1993a) for games of incomplete information. The process involves trembles. For many games, if the amount of trembling is small, play will be in accordance with the games' (semi-strict) Bayesian equilibria most of the time. This supports the notion of Bayesian equilibrium. Further, play will often be in accordance with exactly one Bayesian equilibrium most of the time. This gives a selection among the Bayesian equilibria. For two specific games of economic interest we characterize this selection. The first is an extension to incomplete information of the prototype strategic conflict known as Chicken. The second is an incomplete-information bilateral monopoly, which is also an extension to incomplete information of Nash's demand game, or a simple version of the so-called sealed-bid double auction. For both games, selection by evolutionary learning is in favor of Bayesian equilibria where some types of players fail to coordinate, such that the outcome is inefficient.
Abstract:
We provide methods for forecasting variables and predicting turning points in panel Bayesian VARs. We specify a flexible model which accounts for both interdependencies in the cross section and time variations in the parameters. Posterior distributions for the parameters are obtained for a particular type of diffuse prior, for Minnesota-type priors, and for hierarchical priors. Formulas for multistep, multiunit point and average forecasts are provided. An application to the problem of forecasting the growth rate of output and of predicting turning points in the G-7 illustrates the approach. A comparison with alternative forecasting methods is also provided.
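For orientation, the Python sketch below covers only the Minnesota-type ingredient: it builds shrinkage prior moments for a single VAR and returns the equation-by-equation posterior mean under a fixed residual scale. The lag-decay rule, the tightness values lam and theta, and the intercept prior are illustrative assumptions of mine; the paper's panel structure, time variation, and hierarchical layers are not reproduced.

import numpy as np

def minnesota_posterior_mean(Y, p=2, lam=0.2, theta=0.5):
    """Equation-by-equation posterior mean of a VAR(p) under a Minnesota-type prior."""
    T, n = Y.shape
    # regressor matrix: p lags of every variable plus an intercept
    X = np.hstack([Y[p - l - 1:T - l - 1] for l in range(p)] + [np.ones((T - p, 1))])
    Yp = Y[p:]
    scale = np.array([np.var(np.diff(Y[:, i])) for i in range(n)])   # rough residual scales
    B = np.zeros((n * p + 1, n))
    for i in range(n):
        prior_mean = np.zeros(n * p + 1)
        prior_mean[i] = 1.0                      # random-walk prior mean on the own first lag
        prior_var = np.empty(n * p + 1)
        for l in range(p):
            for j in range(n):
                s = lam / (l + 1) ** 2           # overall tightness, decaying with the lag
                if j != i:
                    s *= theta * np.sqrt(scale[i] / scale[j])   # extra shrinkage on other variables
                prior_var[l * n + j] = s ** 2
        prior_var[-1] = 100.0                    # essentially flat prior on the intercept
        V0inv = np.diag(1.0 / prior_var)
        Vpost = np.linalg.inv(V0inv + X.T @ X / scale[i])
        B[:, i] = Vpost @ (V0inv @ prior_mean + X.T @ Yp[:, i] / scale[i])
    return B

Y = np.cumsum(np.random.default_rng(2).standard_normal((200, 3)), axis=0)   # three random walks
print(minnesota_posterior_mean(Y).round(2))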
Abstract:
We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self-information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted-average algorithm.
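As a small, hedged illustration of the object being bounded (a toy finite class of my own, not the paper's general setting), the Python sketch below runs the Bayesian weighted-average predictor over constant Bernoulli experts, accumulates its self-information loss, and reports its regret against the best expert in hindsight.

import numpy as np

rng = np.random.default_rng(3)
grid = np.linspace(0.05, 0.95, 19)            # finite class of constant Bernoulli predictors
w = np.ones_like(grid) / grid.size            # uniform prior weights over the class
x = (rng.random(5_000) < 0.7).astype(float)   # binary sequence to be predicted on-line

mixture_loss = 0.0
for xt in x:
    p = w @ grid                                          # mixture probability assigned to a 1
    mixture_loss += -np.log(p) if xt == 1 else -np.log(1 - p)
    like = grid if xt == 1 else 1 - grid
    w = w * like / (w @ like)                             # Bayesian posterior reweighting

best_loss = min(-(x * np.log(q) + (1 - x) * np.log(1 - q)).sum() for q in grid)
print(f"regret of the Bayesian mixture over the best expert: {mixture_loss - best_loss:.3f}")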
Abstract:
We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and provide their asymptotic distribution. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimates in samples which have the typical length of actual experiments. Better small-sample properties are obtained if unobserved heterogeneity is introduced. That is, rather than estimating the parameters for each individual, the individual parameters are treated as random variables, and the distribution of those random variables is estimated.
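The mechanical point about heterogeneity bias can be seen in a deliberately simplified Python sketch (a random-intercept binary-choice model of my own, not the learning model studied in the paper): pooled maximum likelihood that ignores the individual effect attenuates the slope, while a likelihood that integrates the effect out by Gauss-Hermite quadrature, treating the individual parameter as a random variable, recovers it.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(4)
N, T, beta_true, sigma_a = 200, 20, 1.0, 1.5      # subjects, rounds, true slope, heterogeneity s.d.
alpha = sigma_a * rng.standard_normal(N)          # unobserved individual parameters
x = rng.standard_normal((N, T))
y = (rng.random((N, T)) < expit(alpha[:, None] + beta_true * x)).astype(float)

def loglik(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return y * np.log(p) + (1 - y) * np.log(1 - p)

def nll_pooled(b):                                # ignores heterogeneity entirely
    return -loglik(expit(b[0] + b[1] * x)).sum()

nodes, weights = np.polynomial.hermite.hermgauss(15)   # Gauss-Hermite quadrature rule
def nll_random_effects(b):                        # integrates the individual effect out
    c, beta, s = b[0], b[1], np.exp(b[2])
    lik = np.zeros(N)
    for wk, ak in zip(weights, np.sqrt(2.0) * s * nodes):
        lik += wk / np.sqrt(np.pi) * np.exp(loglik(expit(c + ak + beta * x)).sum(axis=1))
    return -np.log(lik).sum()

b_pooled = minimize(nll_pooled, np.zeros(2), method="Nelder-Mead").x
b_re = minimize(nll_random_effects, np.array([0.0, 0.5, 0.0]), method="Nelder-Mead").x
print(f"pooled slope {b_pooled[1]:.2f}, random-effects slope {b_re[1]:.2f}, truth {beta_true}")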
Abstract:
We incorporate the process of enforcement learning by assuming that the agency's current marginal cost is a decreasing function of its past experience of detecting and convicting. The agency accumulates data and information (on criminals, on opportunities of crime), enhancing its ability to apprehend in the future at a lower marginal cost. We focus on the impact of enforcement learning on optimal stationary compliance rules. In particular, we show that the optimal stationary fine could be less than maximal and the optimal stationary probability of detection could be higher than otherwise.
Abstract:
This paper uses a model of boundedly rational learning to account for the observations of recurrent hyperinflations in the last decade. We study a standard monetary model where the fully rational expectations assumption is replaced by a formal definition of quasi-rational learning. The model under learning is able to match remarkably well some crucial stylized facts observed during the recurrent hyperinflations experienced by several countries in the 1980s. We argue that, despite being a small departure from rational expectations, quasi-rational learning does not preclude falsifiability of the model and it does not violate reasonable rationality requirements.
Abstract:
This paper proposes a common and tractable framework for analyzing different definitions of fixed and random effects in a constant-slope, variable-intercept model. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or (iii) are allowed to be correlated with the regressors, when the same information on effects is introduced into all estimation methods, the resulting slope estimator is also the same across methods. If different methods produce different results, it is ultimately because different information is being used for each method.
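The equivalence claim for the simplest case, treating the effects as parameters versus sweeping them out by demeaning, can be checked numerically with a short Python sketch; the data-generating process is an assumption of mine for illustration only.

import numpy as np

rng = np.random.default_rng(5)
N, T, beta = 50, 8, 2.0
alpha = rng.standard_normal(N)                       # individual intercepts
x = rng.standard_normal((N, T)) + alpha[:, None]     # regressor correlated with the effects
y = alpha[:, None] + beta * x + 0.5 * rng.standard_normal((N, T))

# LSDV: effects treated as parameters (one dummy per individual)
D = np.kron(np.eye(N), np.ones((T, 1)))              # NT x N matrix of individual dummies
X_lsdv = np.hstack([x.reshape(-1, 1), D])
b_lsdv = np.linalg.lstsq(X_lsdv, y.reshape(-1), rcond=None)[0][0]

# Within estimator: effects removed by demeaning each individual's data
xd = (x - x.mean(axis=1, keepdims=True)).reshape(-1)
yd = (y - y.mean(axis=1, keepdims=True)).reshape(-1)
b_within = (xd @ yd) / (xd @ xd)

print(f"LSDV slope {b_lsdv:.6f}, within slope {b_within:.6f}")

The two printed slopes agree up to floating-point rounding, which is the Frisch-Waugh-Lovell logic behind the paper's point: both methods condition on the same information about the effects.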