156 results for Rational approximations
Abstract:
This paper formalizes, in a fully rational model, the popular idea that politicians perceive an electoral cost in adopting costly reforms with future benefits, and reconciles it with the evidence that reformist governments are not punished by voters. To do so, it proposes a model of elections in which political ability is ex ante unknown and investment in reforms is unobservable. On the one hand, elections improve accountability and allow voters to retain well-performing incumbents. On the other, politicians undertake too few reforms in an attempt to signal high ability and increase their reappointment probability. Although in a rational expectations equilibrium voters cannot be fooled, and hence reelection does not depend on reforms, the strategy of underinvesting in reforms is nonetheless sustained by out-of-equilibrium beliefs. Contrary to conventional wisdom, uncertainty makes reforms more politically viable and may, under some conditions, increase social welfare. The model is then used to study how political rewards can be set so as to maximize social welfare, and the desirability of imposing a one-term limit on governments. The predictions of this theory are consistent with a number of empirical regularities on the determinants of reforms and reelection. They are also consistent with a new stylized fact documented in this paper: economic uncertainty is associated with more reforms in a panel of 20 OECD countries.
Abstract:
We extend Aumann's theorem [Aumann 1987], which derives correlated equilibria as a consequence of common priors and common knowledge of rationality, by explicitly allowing for non-rational behavior. We replace the assumption of common knowledge of rationality with a substantially weaker one, joint p-belief of rationality, under which agents believe the other agents are rational with probability p or more. We show that behavior in this case constitutes a kind of correlated equilibrium satisfying certain p-belief constraints, that it varies continuously in the parameters p, and that, for p sufficiently close to one, it is with high probability supported on strategies that survive the iterated elimination of strictly dominated strategies. Finally, we extend the analysis to characterize the rational expectations of interim types, to games of incomplete information, and to the case of non-common priors.
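As a concrete baseline for the constraints being relaxed here, the following minimal sketch (independent of the paper) checks the standard p = 1 correlated equilibrium incentive constraints for a two-player game in Python; the helper name is_correlated_equilibrium and the chicken-game payoffs are illustrative assumptions, and the joint p-belief relaxation itself is not implemented.

# Minimal sketch (not from the paper): verify the standard correlated
# equilibrium incentive constraints for a two-player bimatrix game.
# The joint p-belief relaxation discussed in the abstract is NOT modeled here.
import numpy as np

def is_correlated_equilibrium(payoff1, payoff2, dist, tol=1e-9):
    """payoff1, payoff2: (n1, n2) payoff matrices; dist: (n1, n2) joint
    distribution over action profiles (the mediator's recommendations)."""
    n1, n2 = dist.shape
    # Player 1: obeying recommendation a must beat any deviation b,
    # weighting the opponent's actions by the joint probabilities in row a.
    for a in range(n1):
        for b in range(n1):
            if np.dot(dist[a], payoff1[b] - payoff1[a]) > tol:
                return False
    # Player 2: the same check over columns.
    for a in range(n2):
        for b in range(n2):
            if np.dot(dist[:, a], payoff2[:, b] - payoff2[:, a]) > tol:
                return False
    return True

# Game of chicken: the distribution below is the classic correlated
# equilibrium that is not a Nash equilibrium.
u1 = np.array([[6.0, 2.0], [7.0, 0.0]])
u2 = u1.T
dist = np.array([[1/3, 1/3], [1/3, 0.0]])
print(is_correlated_equilibrium(u1, u2, dist))  # True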
Abstract:
This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data generating process. We assume that all agents employ the data that they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and alter the behavior of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium, and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
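The two-sided learning mechanic can be sketched in a few lines. The model below is a stylized stand-in, not the paper's New Keynesian economy: the coefficients mu, lam_p, lam_c and the regressor choices are assumptions made for illustration. Each side updates its own forecasting regression by recursive least squares, and both forecasts feed back into the realized outcome.

# Stylized sketch of two-sided learning (not the paper's New Keynesian model):
# private agents and the central bank each update their own perceived law of
# motion for inflation by recursive least squares (RLS), and their forecasts
# feed back into the inflation that is actually realized.
import numpy as np

rng = np.random.default_rng(0)
T = 5000
mu, lam_p, lam_c, sigma = 1.0, 0.4, 0.3, 0.5   # assumed actual-law coefficients

def rls_update(theta, R, x, y, gain):
    # One recursive least squares step for the regression y ~ theta'x.
    R = R + gain * (np.outer(x, x) - R)
    theta = theta + gain * np.linalg.solve(R, x) * (y - theta @ x)
    return theta, R

# Asymmetric information: private agents regress inflation on a constant and
# lagged inflation; the central bank uses only a constant.
theta_p, R_p = np.zeros(2), np.eye(2)
theta_c, R_c = np.zeros(1), np.eye(1)

pi_prev = 0.0
for t in range(1, T + 1):
    x_p = np.array([1.0, pi_prev])
    x_c = np.array([1.0])
    f_p = theta_p @ x_p                    # private-sector forecast
    f_c = theta_c @ x_c                    # central-bank forecast
    # Actual law of motion: realized inflation depends on both forecasts.
    pi = mu + lam_p * f_p + lam_c * f_c + sigma * rng.standard_normal()
    gain = 1.0 / (t + 10)                  # decreasing gain; offset keeps R well-conditioned
    theta_p, R_p = rls_update(theta_p, R_p, x_p, pi, gain)
    theta_c, R_c = rls_update(theta_c, R_c, x_c, pi, gain)
    pi_prev = pi

print("private-sector beliefs:", theta_p, " central-bank belief:", theta_c)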
Abstract:
We propose a rule of decision-making, the sequential procedure guided by routes, and show that three influential boundedly rational choice models can be equivalently understood as special cases of this rule. In addition, the sequential procedure guided by routes is instrumental in showing that the three models are intimately related. We show that choice with a status-quo bias is a refinement of rationalizability by game trees, which, in turn, is also a refinement of sequential rationalizability. Thus, we provide a sharp taxonomy of these choice models, and show that they all can be understood as choice by sequential procedures.
Abstract:
Researchers have used stylized facts on asset prices and trading volume in stock markets (in particular, the mean reversion of asset returns and the correlations between trading volume, price changes and price levels) to support theories where agents are not rational expected utility maximizers. This paper shows that this empirical evidence is in fact consistent with a standard infinite horizon, perfect information, expected utility economy where some agents face leverage constraints similar to those found in today's financial markets. In addition, and in sharp contrast to the theories above, we explain some qualitative differences that are observed in the price-volume relation on stock and on futures markets. We consider a continuous-time economy where agents maximize the integral of their discounted utility from consumption under both budget and leverage constraints. Building on the work by Vila and Zariphopoulou (1997), we find a closed form solution, up to a negative constant, for the equilibrium prices and demands in the region of the state space where the constraint is non-binding. We show that, at the equilibrium, the volatility of stock holdings, as well as its ratio to stock price volatility, is an increasing function of the stock price, and we interpret this finding in terms of the price-volume relation.
Abstract:
This paper analyzes the choice between limit and market orders in an imperfectly competitive noisy rational expectations economy. There is a unique insider, who takes into account the effect her trading has on prices. If the insider behaves as a price taker, she will choose market orders if her private information is very precise and limit orders otherwise. On the contrary, if the insider recognizes and exploits her ability to affect the market price, her optimal choice is to place limit orders whatever the precision of her private information.
Abstract:
A new algorithm, the parameterized expectations approach (PEA), for solving dynamic stochastic models under rational expectations is developed and its advantages and disadvantages are discussed. This algorithm can, in principle, approximate the true equilibrium arbitrarily well. Also, this algorithm works from the Euler equations, so the equilibrium does not have to be cast in the form of a planner's problem. Monte Carlo integration and the absence of grids on the state variables keep computation costs from growing exponentially as the number of state variables or exogenous shocks in the economy increases. As an application we analyze an asset pricing model with endogenous production. We analyze its implications for the time dependence of the volatility of stock returns and for the term structure of interest rates. We argue that this model can generate hump-shaped term structures.
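To make the algorithm concrete, here is a minimal PEA sketch for a textbook stochastic growth model rather than the asset pricing application above; all parameter values and the exponential-in-logs form of the approximated expectation are assumptions for illustration. The conditional expectation in the Euler equation is parameterized, the economy is simulated under the current coefficients, and the coefficients are updated by regressing the realized term on the states until a fixed point is reached.

# Minimal PEA sketch on a stochastic growth model (stylized parameters; the
# abstract's asset pricing application is richer).  The Euler equation is
#   c_t^(-gamma) = beta * E_t[ c_{t+1}^(-gamma) * (alpha*theta_{t+1}*k_{t+1}^(alpha-1) + 1 - delta) ]
# and the conditional expectation is approximated by exp(b0 + b1*log k_t + b2*log theta_t).
import numpy as np

alpha, beta, gamma, delta = 0.36, 0.95, 1.0, 0.10
rho, sigma = 0.95, 0.01
T, n_iter, damp = 5000, 200, 0.5
rng = np.random.default_rng(1)

# Exogenous productivity: log(theta_t) follows an AR(1).
log_th = np.zeros(T)
for t in range(1, T):
    log_th[t] = rho * log_th[t - 1] + sigma * rng.standard_normal()
theta = np.exp(log_th)

# Deterministic steady state, used to centre the initial guess for b.
k_ss = (alpha / (1 / beta - 1 + delta)) ** (1 / (1 - alpha))
c_ss = k_ss ** alpha - delta * k_ss
b = np.array([np.log(c_ss ** (-gamma) / beta), 0.0, 0.0])

for it in range(n_iter):
    # 1) Simulate the economy under the current expectation function.
    k = np.empty(T); c = np.empty(T); k[0] = k_ss
    for t in range(T - 1):
        psi = np.exp(b @ np.array([1.0, np.log(k[t]), log_th[t]]))
        c[t] = (beta * psi) ** (-1.0 / gamma)              # consumption from the Euler equation
        resources = theta[t] * k[t] ** alpha + (1 - delta) * k[t]
        c[t] = min(c[t], 0.95 * resources)                 # crude bound keeps capital positive
        k[t + 1] = resources - c[t]
    psi_T = np.exp(b @ np.array([1.0, np.log(k[T - 1]), log_th[T - 1]]))
    c[T - 1] = (beta * psi_T) ** (-1.0 / gamma)

    # 2) Realized value of the term inside the conditional expectation, dated t.
    e = c[1:] ** (-gamma) * (alpha * theta[1:] * k[1:] ** (alpha - 1) + 1 - delta)

    # 3) Regress log(e_t) on (1, log k_t, log theta_t) and damp the update.
    X = np.column_stack([np.ones(T - 1), np.log(k[:-1]), log_th[:-1]])
    b_new, *_ = np.linalg.lstsq(X, np.log(e), rcond=None)
    if np.max(np.abs(b_new - b)) < 1e-6:
        break
    b = (1 - damp) * b + damp * b_new

print("coefficients of the approximated expectation:", b)

A burn-in period and formal accuracy checks are omitted to keep the sketch short.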
Abstract:
This paper extends multivariate Granger causality to take into account the subspaces along which Granger causality occurs, as well as long run Granger causality. The properties of these new notions of Granger causality, along with the requisite restrictions, are derived and extensively studied for a wide variety of time series processes, including linear invertible processes and VARMA. Using the proposed extensions, the paper demonstrates that: (i) mean reversion in L2 is an instance of long run Granger non-causality, (ii) cointegration is a special case of long run Granger non-causality along a subspace, (iii) controllability is a special case of Granger causality, and finally (iv) linear rational expectations entail (possibly testable) Granger causality restrictions along subspaces.
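The subspace and long run extensions proposed here are not part of standard software, but the ordinary pairwise test they generalize is; the following sketch (simulated VAR(1) data with assumed coefficients) runs it with statsmodels.

# Minimal sketch: an ordinary pairwise Granger causality test on simulated
# VAR(1) data using statsmodels.  The subspace and long run notions developed
# in the paper are not implemented here.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
T = 500
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()                     # x evolves on its own
    y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()    # lagged x feeds into y

# grangercausalitytests tests whether the SECOND column Granger-causes the first.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
f_stat, p_value, _, _ = res[1][0]['ssr_ftest']
print(f"H0: x does not Granger-cause y -> F = {f_stat:.2f}, p = {p_value:.4f}")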
Abstract:
This paper fills a gap in the existing literature on least squares learning in linear rational expectations models by studying a setup in which agents learn by fitting ARMA models to a subset of the state variables. This is a natural specification in models with private information because, in the presence of hidden state variables, agents have an incentive to condition forecasts on the infinite past record of observables. We study a particular setting in which it suffices for agents to fit a first order ARMA process, which preserves the tractability of a finite dimensional parameterization while permitting conditioning on the infinite past record. We describe how previous results (Marcet and Sargent [1989a, 1989b]) can be adapted to handle the convergence of estimators of an ARMA process in our self-referential environment. We also study "rates" of convergence analytically and via computer simulation.
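The estimation half of this setup is easy to illustrate; the sketch below (assumed ARMA(1,1) coefficients, no feedback from beliefs to the data generating process, so the self-referential aspect studied in the paper is absent) refits an ARMA(1,1) with statsmodels on an expanding sample and shows the estimates settling down.

# Minimal sketch of the estimation step: fit an ARMA(1,1) to an expanding
# sample of a simulated series and track how the estimates settle.  The
# feedback from beliefs to the actual law of motion is not modeled here.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
T = 2000
phi, theta = 0.7, 0.4                     # assumed true ARMA(1,1) coefficients
eps = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t] + theta * eps[t - 1]

# Re-estimate on an expanding window, as an agent accumulating data would.
for n in (200, 500, 1000, 2000):
    fit = ARIMA(y[:n], order=(1, 0, 1), trend="n").fit()
    ar, ma = fit.arparams[0], fit.maparams[0]
    print(f"n = {n:4d}:  AR ~ {ar:.3f}  MA ~ {ma:.3f}")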
Abstract:
We study a novel class of noisy rational expectations equilibria in markets with a large number of agents. We show that, as long as noise increases with the number of agents in the economy, the limiting competitive equilibrium is well-defined and leads to non-trivial information acquisition, perfect information aggregation, and partially revealing prices, even if per capita noise tends to zero. We find that in such an equilibrium risk sharing and price revelation play different roles than in the standard limiting economy in which per capita noise is not negligible. We apply our model to study information sales by a monopolist, information acquisition in multi-asset markets, and derivatives trading. The limiting equilibria are shown to be perfectly competitive, even when a strategic solution concept is used.
Abstract:
This paper presents a case study of a well-informed investor in the South Sea bubble. We argue that Hoare's Bank, a fledgling West End London banker, knew that a bubble was in progress and nonetheless invested in the stock; it was profitable to "ride the bubble." Using a unique dataset on daily trades, we show that this sophisticated investor was not constrained by institutional factors such as restrictions on short sales or agency problems. Instead, this study demonstrates that predictable investor sentiment can prevent attacks on a bubble; rational investors may only attack when some coordinating event promotes joint action.
Abstract:
This paper presents a dynamic choice model in the attribute space considering rational consumers that discount the future. In light of the evidence of several state-dependence patterns, the model is further extended by considering a utility function that allows for the different types of behavior described in the literature: pure inertia, pure variety seeking, and hybrid. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or variety-seeking, where the consumer buys several products simultaneously. Under the inverted-U marginal utility assumption, the consumer behaves inertially among the existing brands for several periods and, eventually, once the stationary levels are approached, turns to variety-seeking behavior. An empirical analysis is run using a scanner database for fabric softener, and significant evidence of hybrid behavior is found for most attributes, which supports the functional form considered in the theory.
Abstract:
We test in the laboratory the potential of evolutionary dynamics as a predictor of actual behavior. To this end, we propose an asymmetric game, which we interpret as a borrower-lender relation, study its evolutionary dynamics in a random matching set-up, and test its predictions. The model provides conditions for the existence of credit markets and credit cycles. The theoretical predictions seem to be good approximations of the experimental results.
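The evolutionary dynamics themselves can be sketched generically; the borrower-lender payoffs below are hypothetical placeholders (the abstract does not report the experimental game), and the code runs standard two-population replicator dynamics under random matching.

# Sketch of two-population replicator dynamics for an asymmetric 2x2 game
# under random matching (hypothetical payoffs, not the paper's game):
# row players ("lenders") choose Trust/Refuse, column players ("borrowers")
# choose Repay/Default.
import numpy as np

# Row payoffs A[i, j] and column payoffs B[i, j] at (lender action i, borrower action j).
A = np.array([[2.0, -1.0],   # Trust:  repaid / defaulted on
              [0.0,  0.0]])  # Refuse
B = np.array([[1.0,  3.0],   # faced with Trust: repay / default
              [0.0,  0.0]])  # faced with Refuse

x = np.array([0.6, 0.4])     # population share of lenders playing Trust / Refuse
y = np.array([0.7, 0.3])     # population share of borrowers playing Repay / Default
dt = 0.01

for step in range(20000):
    fx = A @ y                       # expected payoff of each lender strategy
    fy = B.T @ x                     # expected payoff of each borrower strategy
    x = x + dt * x * (fx - x @ fx)   # replicator update, lender population
    y = y + dt * y * (fy - y @ fy)   # replicator update, borrower population

print("lenders (Trust, Refuse):   ", np.round(x, 3))
print("borrowers (Repay, Default):", np.round(y, 3))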
Abstract:
This paper presents and estimates a dynamic choice model in the attribute space considering rational consumers. In light of the evidence of several state-dependence patterns, the standard attribute-based model is extended by considering a general utility function in which pure inertia and pure variety-seeking behaviors can be explained as particular linear cases. The dynamics of the model are fully characterized by standard dynamic programming techniques. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or variety-seeking, where the consumer shifts among varied products. We run some simulations to analyze the consumption paths out of the steady state. Under the hybrid utility assumption, the consumer behaves inertially among the unfamiliar brands for several periods, eventually switching to variety-seeking behavior when the stationary levels are approached. An empirical analysis is run using scanner databases for three different product categories: fabric softener, saltine cracker, and catsup. Non-linear specifications provide the best fit of the data, as hybrid functional forms are found in all the product categories for most attributes and segments. These results reveal the statistical superiority of the non-linear structure and confirm the gradual trend to seek variety as the level of familiarity with the purchased items increases.
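A stylized value-iteration sketch can illustrate the mechanism described above; the two-brand setup, the familiarity grid, and the inverted-U utility u(f) = f(6 - f) below are assumptions for illustration, not the paper's estimated specification. The point is only that a hybrid (inverted-U) utility can produce an initial inertial run followed by variety seeking.

# Stylized value-iteration sketch (not the paper's estimated model): two brands,
# a familiarity state for each, an inverted-U per-period utility in the
# familiarity of the purchased brand, and geometric discounting.
import numpy as np

N = 6                          # familiarity grid 0..N for each brand
beta = 0.9                     # discount factor

def u(f):
    return f * (6 - f)         # inverted-U utility, peaks at f = 3

def step(f_own, f_other):
    # Familiarity transition: the purchased brand gains, the other decays.
    return min(f_own + 1, N), max(f_other - 1, 0)

V = np.zeros((N + 1, N + 1))   # V[f1, f2]
for _ in range(500):           # value iteration
    V_new = np.empty_like(V)
    for f1 in range(N + 1):
        for f2 in range(N + 1):
            g1, d2 = step(f1, f2)          # buy brand 1
            g2, d1 = step(f2, f1)          # buy brand 2
            V_new[f1, f2] = max(u(g1) + beta * V[g1, d2],
                                u(g2) + beta * V[d1, g2])
    diff = np.max(np.abs(V_new - V))
    V = V_new
    if diff < 1e-10:
        break

# Simulate the optimal purchase sequence starting with no familiarity.
f1, f2, path = 0, 0, []
for _ in range(15):
    g1, d2 = step(f1, f2)
    g2, d1 = step(f2, f1)
    buy1 = u(g1) + beta * V[g1, d2] >= u(g2) + beta * V[d1, g2]
    path.append(1 if buy1 else 2)
    f1, f2 = (g1, d2) if buy1 else (d1, g2)
print("purchases under the optimal policy:", path)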
Abstract:
This paper is concerned with the realism of mechanisms that implement social choice functions in the traditional sense. Will agents actually play the equilibrium assumed by the analysis? As an example, we study the convergence and stability properties of Sjöström's (1994) mechanism, on the assumption that boundedly rational players find their way to equilibrium using monotonic learning dynamics and also with fictitious play. This mechanism implements most social choice functions in economic environments using the iterated elimination of weakly dominated strategies as a solution concept (only one round of deletion of weakly dominated strategies is needed). There are, however, many sets of Nash equilibria whose payoffs may be very different from those desired by the social choice function. With monotonic dynamics we show that many equilibria in all the sets of equilibria we describe are the limit points of trajectories with completely mixed initial conditions. The initial conditions that lead to these equilibria need not be very close to the limiting point. Furthermore, even if the dynamics converge to the "right" set of equilibria, they can still converge to quite a poor outcome in welfare terms. With fictitious play, if the agents have completely mixed prior beliefs, beliefs and play converge to the outcome the planner wants to implement.
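Sjöström's mechanism itself is not reconstructed here, but the fictitious play dynamic referred to above is standard; the sketch below runs it on matching pennies (an assumed, illustrative game), where the empirical frequencies approach the mixed equilibrium (1/2, 1/2).

# Minimal sketch of two-player fictitious play on matching pennies (the
# mechanism and equilibria studied in the abstract are not reconstructed).
# Each player best-responds to the empirical frequency of the opponent's
# past play, starting from completely mixed fictitious priors.
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs
B = -A                                      # zero-sum: column player's payoffs

counts_row = np.ones(2)   # fictitious prior over the row player's actions
counts_col = np.ones(2)   # fictitious prior over the column player's actions

for t in range(10000):
    belief_about_col = counts_col / counts_col.sum()
    belief_about_row = counts_row / counts_row.sum()
    a_row = np.argmax(A @ belief_about_col)       # row best response to belief
    a_col = np.argmax(B.T @ belief_about_row)     # column best response to belief
    counts_row[a_row] += 1
    counts_col[a_col] += 1

print("empirical row frequencies:   ", np.round(counts_row / counts_row.sum(), 3))
print("empirical column frequencies:", np.round(counts_col / counts_col.sum(), 3))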