51 results for nonlinear rational expectations models
Abstract:
Given a sample from a fully specified parametric model, let Zn be a given finite-dimensional statistic - for example, an initial estimator or a set of sample moments. We propose to (re-)estimate the parameters of the model by maximizing the likelihood of Zn. We call this the maximum indirect likelihood (MIL) estimator. We also propose a computationally tractable Bayesian version of the estimator which we refer to as a Bayesian Indirect Likelihood (BIL) estimator. In most cases, the density of the statistic will be of unknown form, and we develop simulated versions of the MIL and BIL estimators. We show that the indirect likelihood estimators are consistent and asymptotically normally distributed, with the same asymptotic variance as that of the corresponding efficient two-step GMM estimator based on the same statistic. However, our likelihood-based estimators, by taking into account the full finite-sample distribution of the statistic, are higher order efficient relative to GMM-type estimators. Furthermore, in many cases they enjoy a bias reduction property similar to that of the indirect inference estimator. Monte Carlo results for a number of applications including dynamic and nonlinear panel data models, a structural auction model and two DSGE models show that the proposed estimators indeed have attractive finite sample properties.
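As a rough sketch of the idea in symbols (our notation; the abstract does not display the estimator), let f_n(·; θ) denote the sampling density of the statistic Z_n under parameter θ. The MIL estimator maximizes the indirect likelihood,

\[
\hat{\theta}_{\mathrm{MIL}} = \arg\max_{\theta \in \Theta} \log f_n(Z_n;\theta),
\]

while the BIL estimator replaces the maximization with a Bayesian computation, for example a posterior mean \( \int \theta\,\pi(\theta) f_n(Z_n;\theta)\,d\theta \big/ \int \pi(\theta) f_n(Z_n;\theta)\,d\theta \) for a prior \(\pi\); when f_n is not available in closed form, the simulated versions mentioned in the abstract replace it with an estimate obtained from model simulations.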
Abstract:
We present a model of shadow banking in which financial intermediaries originate and trade loans, assemble these loans into diversified portfolios, and then finance these portfolios externally with riskless debt. In this model: i) outside investor wealth drives the demand for riskless debt and indirectly for securitization, ii) intermediary assets and leverage move together as in Adrian and Shin (2010), and iii) intermediaries increase their exposure to systematic risk as they reduce their idiosyncratic risk through diversification, as in Acharya, Schnabl, and Suarez (2010). Under rational expectations, the shadow banking system is stable and improves welfare. When investors and intermediaries neglect tail risks, however, the expansion of risky lending and the concentration of risks in the intermediaries create financial fragility and fluctuations in liquidity over time.
Abstract:
We extend Aumann's theorem [Aumann 1987], deriving correlated equilibria as a consequence of common priors and common knowledge of rationality, by explicitly allowing for non-rational behavior. We replace the assumption of common knowledge of rationality with a substantially weaker one, joint p-belief of rationality, where agents believe the other agents are rational with probability p or more. We show that behavior in this case constitutes a kind of correlated equilibrium satisfying certain p-belief constraints, and that it varies continuously in the parameter p and, for p sufficiently close to one, is supported with high probability on strategies that survive the iterated elimination of strictly dominated strategies. Finally, we extend the analysis to characterizing rational expectations of interim types, to games of incomplete information, as well as to the case of non-common priors.
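One natural way to formalize the weakened assumption, in the spirit of the standard p-belief apparatus (our notation; the abstract itself does not display it): writing R_j for the event that agent j is rational and t_i(ω) for agent i's posterior at state ω, agent i p-believes an event E at ω if t_i(ω)(E) ≥ p, and joint p-belief of rationality holds at ω when

\[
\omega \in \bigcap_i B_i^p\Big(\bigcap_{j \neq i} R_j\Big), \qquad B_i^p(E) := \{\omega : t_i(\omega)(E) \ge p\}.
\]

This is one possible reading; the paper's own definition may differ in details such as whether the belief requirement is iterated.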
Abstract:
We propose a rule of decision-making, the sequential procedure guided by routes, and show that three influential boundedly rational choice models can be equivalently understood as special cases of this rule. In addition, the sequential procedure guided by routes is instrumental in showing that the three models are intimately related. We show that choice with a status-quo bias is a refinement of rationalizability by game trees, which, in turn, is also a refinement of sequential rationalizability. Thus, we provide a sharp taxonomy of these choice models, and show that they all can be understood as choice by sequential procedures.
Abstract:
This paper analyzes the choice between limit and market orders in an imperfectly competitive noisy rational expectations economy. There is a unique insider, who takes into account the effect her trading has on prices. If the insider behaves as a price taker, she will choose market orders if her private information is very precise and limit orders otherwise. On the contrary, if the insider recognizes and exploits her ability to affect the market price, her optimal choice is to place limit orders whatever the precision of her private information.
Abstract:
This paper extends multivariate Granger causality to take into account the subspaces along which Granger causality occurs, as well as long-run Granger causality. The properties of these new notions of Granger causality, along with the requisite restrictions, are derived and extensively studied for a wide variety of time series processes, including linear invertible processes and VARMA models. Using the proposed extensions, the paper demonstrates that: (i) mean reversion in L2 is an instance of long-run Granger non-causality, (ii) cointegration is a special case of long-run Granger non-causality along a subspace, (iii) controllability is a special case of Granger causality, and finally (iv) linear rational expectations entail (possibly testable) Granger causality restrictions along subspaces.
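For reference, the baseline full-space notion that these extensions refine can be stated as follows (our notation; the subspace and long-run versions are the paper's contribution and are not reproduced here): a process x does not Granger-cause a process y if, for all t,

\[
\mathbb{E}\big[\,y_{t+1} \mid y_t, y_{t-1}, \ldots, x_t, x_{t-1}, \ldots\,\big]
= \mathbb{E}\big[\,y_{t+1} \mid y_t, y_{t-1}, \ldots\,\big] \quad \text{a.s.},
\]

and non-causality along a subspace can then be read, informally, as the same restriction imposed only on the linear combinations v'y_{t+1} for v in a given subspace.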
Abstract:
We study a novel class of noisy rational expectations equilibria in markets with a large number of agents. We show that, as long as noise increases with the number of agents in the economy, the limiting competitive equilibrium is well-defined and leads to non-trivial information acquisition, perfect information aggregation, and partially revealing prices, even if per capita noise tends to zero. We find that in such an equilibrium risk sharing and price revelation play different roles than in the standard limiting economy in which per capita noise is not negligible. We apply our model to study information sales by a monopolist, information acquisition in multi-asset markets, and derivatives trading. The limiting equilibria are shown to be perfectly competitive, even when a strategic solution concept is used.
Abstract:
This paper documents that, at the individual stock level, insider sales peak many months before a large drop in the stock price, while insider purchases peak only in the month before a large jump. We provide a theoretical explanation for this phenomenon based on trading constraints and asymmetric information. We test our hypothesis against competing stories, such as patterns of insider trading driven by earnings announcement dates, or insiders timing their trades to evade prosecution. Finally, we provide new evidence regarding crashes and the degree of information asymmetry.
Abstract:
We study financial markets in which both rational and overconfident agents coexist and make endogenous information acquisition decisions. We demonstrate the following irrelevance result: when a positive fraction of rational agents (endogenously) decides to become informed in equilibrium, prices are set as if all investors were rational, and as a consequence the overconfidence bias does not affect informational efficiency, price volatility, rational traders' expected profits or their welfare. Intuitively, as overconfidence goes up, so does price informativeness, which makes rational agents cut their information acquisition activities, effectively undoing the standard effect of more aggressive trading by the overconfident.
Abstract:
This paper uses a model of boundedly rational learning to account for the observations of recurrent hyperinflations in the last decade. We study a standard monetary model where the fully rational expectations assumption is replaced by a formal definition of quasi-rational learning. The model under learning is able to match remarkably well some crucial stylized facts observed during the recurrent hyperinflations experienced by several countries in the 1980s. We argue that, despite being a small departure from rational expectations, quasi-rational learning does not preclude falsifiability of the model and it does not violate reasonable rationality requirements.
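As a rough indication of what quasi-rational learning schemes in this literature typically look like (our notation; the paper's own definition is not reproduced in the abstract), agents might update a forecast parameter β_t, for example expected inflation, from realized inflation π_{t-1} by a recursion such as

\[
\beta_t = \beta_{t-1} + \frac{1}{\alpha_t}\big(\pi_{t-1} - \beta_{t-1}\big),
\]

where the gain sequence 1/α_t controls how quickly past forecast errors are discounted.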
Abstract:
In this paper we examine whether the expectations theory with constant liquidity premia can explain the term structure of short-maturity interest rates in the Spanish interbank deposit market, using monthly data from 1977 to 1995. We apply the Campbell and Shiller (1987) test based on a cointegrated VAR model. From consistent estimates of this model we obtain the magnitude and persistence of shocks via impulse-response simulation, and efficient parameter estimates by modelling the time-varying conditional variance. To this end, several volatility schemes are proposed that allow different approximations of uncertainty within a multi-equation GARCH framework and that are based on the proposed expectations model. The empirical evidence shows that the expectations theory is rejected, that interest rates and the spread share a joint short-run dynamics described by a VAR(4)-GARCH(1,1)-BEKK model (which is close to being integrated in variance), and that different risk factors affect the premia at the maturities studied.
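For context, the Campbell and Shiller (1987) test compares the observed spread with a "theoretical" spread built from VAR forecasts. In a standard two-rate formulation (our notation, not reproduced from the abstract), with S_t the spread between an n-period rate and the one-period rate r_t, the expectations hypothesis with a constant premium implies

\[
S_t = \sum_{i=1}^{n-1}\Big(1 - \frac{i}{n}\Big)\,\mathbb{E}_t\,\Delta r_{t+i} + \text{constant},
\]

and replacing the conditional expectations by forecasts from the cointegrated VAR yields cross-equation restrictions on the VAR coefficients that can be tested.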
Abstract:
A new algorithm called the parameterized expectations approach (PEA) for solving dynamic stochastic models under rational expectations is developed, and its advantages and disadvantages are discussed. This algorithm can, in principle, approximate the true equilibrium arbitrarily well. Also, the algorithm works from the Euler equations, so that the equilibrium does not have to be cast in the form of a planner's problem. Monte Carlo integration and the absence of grids on the state variables keep computation costs from growing exponentially when the number of state variables or exogenous shocks in the economy increases. As an application, we analyze an asset pricing model with endogenous production. We analyze its implications for the time dependence of the volatility of stock returns and for the term structure of interest rates. We argue that this model can generate hump-shaped term structures.
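A minimal sketch of the PEA fixed-point iteration for a textbook stochastic growth model with log utility is given below. Everything model-specific here is an illustrative assumption rather than the paper's own application or code: the parameter values, the log-linear parameterization of the conditional expectation, the feasibility guard, and the OLS-in-logs shortcut used in place of a full nonlinear least squares fit.

```python
import numpy as np

# Hedged sketch of the parameterized expectations approach (PEA) for a
# stochastic growth model with log utility. All model details below are
# illustrative assumptions, not taken from the paper.

alpha, beta, delta = 0.36, 0.95, 0.025   # capital share, discount factor, depreciation
rho, sigma = 0.95, 0.01                  # AR(1) productivity process
T = 20_000                               # length of the simulated path
rng = np.random.default_rng(0)

# Simulate log productivity as an AR(1).
ln_th = np.zeros(T)
for t in range(1, T):
    ln_th[t] = rho * ln_th[t - 1] + sigma * rng.standard_normal()
theta = np.exp(ln_th)

# Parameterize E_t[ c_{t+1}^{-1} (alpha*theta_{t+1}*k_{t+1}^{alpha-1} + 1 - delta) ]
# as exp(b0 + b1*ln k_t + b2*ln theta_t). Initial guess: consume a fixed share of output.
k_ss = (alpha / (1 / beta - 1 + delta)) ** (1 / (1 - alpha))
save = delta * k_ss ** (1 - alpha)                      # steady-state saving rate
b = np.array([-np.log(beta * (1 - save)), -alpha, -1.0])

for it in range(500):
    k, c = np.empty(T + 1), np.empty(T)
    k[0] = k_ss
    for t in range(T):
        e_approx = np.exp(b[0] + b[1] * np.log(k[t]) + b[2] * ln_th[t])
        resources = theta[t] * k[t] ** alpha + (1 - delta) * k[t]
        c[t] = min((beta * e_approx) ** (-1.0), 0.99 * resources)  # Euler eq. + feasibility guard
        k[t + 1] = resources - c[t]

    # Realized value whose conditional expectation appears in the Euler equation.
    y = c[1:] ** (-1.0) * (alpha * theta[1:] * k[1:T] ** (alpha - 1) + 1 - delta)

    # Fitting step, simplified here to OLS of ln(y) on (1, ln k_t, ln theta_t).
    X = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), ln_th[:T - 1]])
    b_new, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)

    if np.max(np.abs(b_new - b)) < 1e-6:
        b = b_new
        break
    b = 0.5 * b + 0.5 * b_new                            # damped update toward the new fit

print("PEA coefficients:", b, "iterations:", it + 1)
```

The simulation and the fitting step alternate until the coefficients of the approximated expectation reproduce themselves, which is the fixed point the algorithm searches for.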
Abstract:
The parameterized expectations algorithm (PEA) involves a long simulation and a nonlinear least squares (NLS) fit, both embedded in a loop. Both steps are natural candidates for parallelization. This note shows that parallelization can lead to important speedups for the PEA. I provide example code for a simple model that can serve as a template for parallelization of more interesting models, as well as a download link for an image of a bootable CD that allows creation of a cluster and execution of the example code in minutes, with no need to install any software.
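The note's actual example code and bootable CD image are not reproduced here. Purely to illustrate the parallelization pattern being described, the hypothetical sketch below splits the long simulation step into independent blocks run in worker processes, whose output would then be pooled for the NLS fit; `simulate_block` is a stand-in, not the note's model.

```python
import numpy as np
from multiprocessing import Pool

def simulate_block(seed, T=5_000, rho=0.95, sigma=0.01):
    """Simulate one independent block of an AR(1) driving process.

    Stand-in for the model simulation inside the PEA loop; each block
    uses its own seed so blocks are independent across workers.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + sigma * rng.standard_normal()
    return x

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        blocks = pool.map(simulate_block, range(4))   # one block per worker
    data = np.concatenate(blocks)                     # pooled sample for the fitting step
    print(data.shape)
```

Because the blocks are statistically independent, the pooled sample can be used in the fitting step exactly as a single long simulation would be, which is what makes this step a natural candidate for parallelization.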
Abstract:
Expectations are central to behaviour. Despite the existence of subjective expectations data, the standard approach is to ignore these, to posit a model of behaviour and to infer expectations from realisations. In the context of income models, we reveal the informational gain obtained from using both a canonical model and subjective expectations data. We propose a test for this informational gain, and illustrate our approach with an application to the problem of measuring income risk.
Abstract:
Nonlinear Noisy Leaky Integrate-and-Fire (NNLIF) models for networks of neurons can be written as Fokker-Planck-Kolmogorov equations for the probability density of neurons over membrane potential, the main parameters in the model being the connectivity of the network and the noise. We analyse several aspects of the NNLIF model: the number of steady states, a priori estimates, blow-up issues, and convergence toward equilibrium in the linear case. In particular, for excitatory networks, blow-up always occurs for initial data concentrated close to the firing potential. These results show how critical the balance between noise and excitatory/inhibitory interactions, governed by the connectivity parameter, is to the behaviour of the network.
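For orientation, a commonly used form of this Fokker-Planck-Kolmogorov equation is the following (a standard formulation assumed here for illustration, not quoted from the abstract; the diffusion coefficient may itself depend on the firing rate in the general model). The density p(v,t) of neurons at membrane potential v ≤ V_F solves

\[
\partial_t p(v,t) + \partial_v\big[(-v + b\,N(t))\,p(v,t)\big] - a\,\partial_{vv} p(v,t) = \delta(v - V_R)\,N(t),
\]

with p(V_F,t) = 0 and the firing rate defined self-consistently by N(t) = -a\,\partial_v p(V_F,t); neurons that reach the firing potential V_F are reset to V_R, the connectivity parameter b is positive for excitatory networks and negative for inhibitory ones, and the nonlinearity arises because both the drift and the reset source depend on N(t).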