124 results for parametric implicit vector equilibrium problems
Abstract:
We propose a simple adaptive procedure for playing a game. In this procedure, players depart from their current play with probabilities that are proportional to measures of regret for not having used other strategies (these measures are updated every period). It is shown that our adaptive procedure guarantees that, with probability one, the sample distributions of play converge to the set of correlated equilibria of the game. To compute these regret measures, a player needs to know his payoff function and the history of play. We also offer a variation where every player knows only his own realized payoff history (but not his payoff function).
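A minimal sketch of the procedure for a two-player game follows; the 2x2 payoff matrices, the horizon T and the inertia constant mu are illustrative choices, not values from the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    # Payoff tables of an illustrative 2x2 "chicken" game, indexed by
    # (row action, column action); U[i] is player i's payoffs.
    U = [np.array([[6.0, 2.0], [7.0, 0.0]]),
         np.array([[6.0, 7.0], [2.0, 0.0]])]

    T, mu = 20000, 10.0            # horizon and inertia constant (assumed)
    n = [2, 2]                     # actions per player
    cumdiff = [np.zeros((n[i], n[i])) for i in range(2)]  # summed payoff gaps
    play = [int(rng.integers(n[i])) for i in range(2)]
    counts = np.zeros((2, 2))      # empirical distribution of joint play

    for t in range(1, T + 1):
        a = tuple(play)
        counts[a] += 1.0
        for i in range(2):
            j = a[i]
            for k in range(n[i]):
                alt = list(a); alt[i] = k
                # payoff difference from having played k instead of j
                cumdiff[i][j, k] += U[i][tuple(alt)] - U[i][a]
            # switch probabilities proportional to positive average regret
            p = np.maximum(cumdiff[i][j] / t, 0.0) / mu
            p[j] = 0.0
            p[j] = max(1.0 - p.sum(), 0.0)   # inertia: stay put otherwise
            play[i] = int(rng.choice(n[i], p=p / p.sum()))

    print(counts / T)

With mu chosen large enough that the switching probabilities always sum to less than one, the empirical joint distribution counts/T approaches the set of correlated equilibria, as the abstract states.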
Abstract:
There is a large and growing literature that studies the effects of weak enforcement institutions on economic performance. This literature has focused almost exclusively on primary markets, in which assets are issued and traded to improve the allocation of investment and consumption. The general conclusion is that weak enforcement institutions impair the workings of these markets, giving rise to various inefficiencies. But weak enforcement institutions also create incentives to develop secondary markets, in which the assets issued in primary markets are retraded. This paper shows that trading in secondary markets counteracts the effects of weak enforcement institutions and, in the absence of further frictions, restores efficiency.
Abstract:
We present a polyhedral framework for establishing general structural properties of optimal solutions of stochastic scheduling problems, where multiple job classes vie for service resources: the existence of an optimal priority policy in a given family, characterized by a greedoid (whose feasible class subsets may receive higher priority), where optimal priorities are determined by class-ranking indices, under restricted linear performance objectives (partial indexability). This framework extends that of Bertsimas and Niño-Mora (1996), which explained the optimality of priority-index policies under all linear objectives (general indexability). We show that, if performance measures satisfy partial conservation laws (with respect to the greedoid), which extend previous generalized conservation laws, then the problem admits a strong LP relaxation over a so-called extended greedoid polytope, which has strong structural and algorithmic properties. We present an adaptive-greedy algorithm (which extends Klimov's) taking as input the linear objective coefficients, which (1) determines whether the optimal LP solution is achievable by a policy in the given family; and (2) if so, computes a set of class-ranking indices that characterize optimal priority policies in the family. In the special case of project scheduling, we show that, under additional conditions, the optimal indices can be computed separately for each project (index decomposition). We further apply the framework to the important restless bandit model (two-action Markov decision chains), obtaining new index policies that extend Whittle's (1988), and simple sufficient conditions for their validity. These results highlight the power of polyhedral methods (the so-called achievable region approach) in dynamic and stochastic optimization.
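For the restless bandit application mentioned above, a standard way to compute a Whittle-type index (not the paper's more general algorithm) is bisection on the passive subsidy. The sketch below does this for a hypothetical two-state project; the transition matrices, rewards and discount factor are assumed, and indexability is taken for granted:

    import numpy as np

    # Transitions and rewards of a two-state project (illustrative numbers).
    P = {1: np.array([[0.7, 0.3], [0.4, 0.6]]),   # active action
         0: np.array([[0.9, 0.1], [0.2, 0.8]])}   # passive action
    r = {1: np.array([1.0, 2.0]), 0: np.array([0.0, 0.0])}
    beta = 0.95                                   # discount factor (assumed)

    def q_values(subsidy, iters=3000):
        # Value iteration for the single-project MDP in which the passive
        # action earns an extra `subsidy` per period.
        V = np.zeros(2)
        for _ in range(iters):
            q1 = r[1] + beta * (P[1] @ V)
            q0 = r[0] + subsidy + beta * (P[0] @ V)
            V = np.maximum(q1, q0)
        return q1, q0

    def whittle_index(state, lo=-50.0, hi=50.0, tol=1e-6):
        # Bisect on the subsidy that makes `state` indifferent between
        # the active and the passive action.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            q1, q0 = q_values(mid)
            if q1[state] > q0[state]:
                lo = mid          # subsidy too small: still prefers active
            else:
                hi = mid
        return 0.5 * (lo + hi)

    print([round(whittle_index(s), 4) for s in (0, 1)])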
Abstract:
I study the effects of the heterogeneity of traders' horizon in the context of a 2-period NREE model where all traders are risk averse. Owing to inventory effects, myopic trading behavior generates multiplicity of equilibria. In particular, two distinct patterns arise. Along the first equilibrium, short-term traders anticipate a higher second-period price reaction to information arrival and, owing to risk aversion, scale back their trading intensity. This, in turn, reduces both risk sharing and information impounding into prices, enforcing a high-volatility, low-price-informativeness equilibrium. In the second one, the opposite happens and a low-volatility, high-price-informativeness equilibrium arises.
Abstract:
This paper combines multivariate density forecasts of output growth, inflation and interest rates from a suite of models. An out-of-sample weighting scheme based on the predictive likelihood, as proposed by Eklund and Karlsson (2005) and Andersson and Karlsson (2007), is used to combine the models. Three classes of models are considered: a Bayesian vector autoregression (BVAR), a factor-augmented vector autoregression (FAVAR) and a medium-scale dynamic stochastic general equilibrium (DSGE) model. Using Australian data, we find that, at short forecast horizons, the Bayesian VAR model is assigned the most weight, while at intermediate and longer horizons the factor model is preferred. The DSGE model is assigned little weight at all horizons, a result that can be attributed to the DSGE model producing density forecasts that are very wide when compared with the actual distribution of observations. While a density forecast evaluation exercise reveals little formal evidence that the optimally combined densities are superior to those from the best-performing individual model, or a simple equal-weighting scheme, this may be a result of the short sample available.
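A minimal sketch of the predictive-likelihood weighting idea, using Gaussian one-step-ahead densities and placeholder numbers (none of these values come from the paper):

    import numpy as np
    from scipy.stats import norm

    # Realized out-of-sample observations (placeholder data).
    y = np.array([0.5, 0.7, 0.2, 0.9])

    # One-step-ahead predictive means and standard deviations per model
    # over the same window (placeholders, not the paper's estimates).
    pred = {"BVAR":  (np.array([0.5, 0.6, 0.3, 0.8]), 0.3),
            "FAVAR": (np.array([0.4, 0.8, 0.1, 1.0]), 0.4),
            "DSGE":  (np.array([0.0, 0.1, 0.0, 0.2]), 1.5)}

    # Log predictive likelihood of each model over the hold-out window.
    logpl = {m: norm.logpdf(y, mean, sd).sum() for m, (mean, sd) in pred.items()}

    # Weights proportional to the predictive likelihood (shift by the max
    # before exponentiating for numerical stability).
    top = max(logpl.values())
    w = {m: np.exp(v - top) for m, v in logpl.items()}
    tot = sum(w.values())
    w = {m: v / tot for m, v in w.items()}
    print(w)

    # Combined next-period density: a weighted mixture of model densities.
    next_pred = {"BVAR": (0.6, 0.3), "FAVAR": (0.7, 0.4), "DSGE": (0.1, 1.5)}
    def combined_pdf(x):
        return sum(w[m] * norm.pdf(x, mean, sd)
                   for m, (mean, sd) in next_pred.items())

    print(combined_pdf(0.6))

Because the weights multiply the predictive likelihood over the hold-out window, a model that issues very wide densities, like the DSGE model in the paper, mechanically receives little weight.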
Abstract:
We analyze a standard environment of adverse selection in credit markets. In our environment, entrepreneurs who are privately informed about the quality of their projects need to borrow in order to invest. Conventional wisdom says that, in this class of economies, the competitive equilibrium is typically inefficient. We show that this conventional wisdom rests on one implicit assumption: entrepreneurs can only access monitored lending. If a new set of markets is added to provide entrepreneurs with additional funds, efficiency can be attained in equilibrium. An important characteristic of these additional markets is that lending in them must be unmonitored, in the sense that it does not condition total borrowing or investment by entrepreneurs. This makes it possible to attain efficiency by pooling all entrepreneurs in the new markets while separating them in the markets for monitored loans.
Abstract:
The paper develops a method to solve higher-dimensional stochastic control problems in continuous time. A finite difference type approximation scheme is used on a coarse grid of low discrepancy points, while the value function at intermediate points is obtained by regression. The stability properties of the method are discussed, and applications are given to test problems of up to 10 dimensions. Accurate solutions to these problems can be obtained on a personal computer.
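A minimal sketch of the regression step only, assuming a Sobol low-discrepancy grid and a quadratic monomial basis; the grid values below stand in for those a finite-difference scheme would produce, and the dimension, grid size and basis degree are illustrative:

    import numpy as np
    from itertools import combinations_with_replacement
    from scipy.stats import qmc

    d, m = 4, 256                              # dimension, grid size (assumed)
    grid = qmc.Sobol(d, seed=0).random(m)      # coarse low-discrepancy grid
    # Stand-in for the values the finite-difference scheme would compute:
    v_grid = np.exp(-np.sum(grid ** 2, axis=1))

    def features(X, degree=2):
        # Monomial regression basis up to total degree `degree`.
        cols = [np.ones(len(X))]
        for k in range(1, degree + 1):
            for idx in combinations_with_replacement(range(X.shape[1]), k):
                cols.append(np.prod(X[:, list(idx)], axis=1))
        return np.column_stack(cols)

    # Least-squares fit of the basis coefficients on the grid values.
    coef, *_ = np.linalg.lstsq(features(grid), v_grid, rcond=None)

    def value(x):
        # Regression estimate of the value function between grid points.
        return (features(np.atleast_2d(np.asarray(x, float))) @ coef)[0]

    print(value([0.5, 0.5, 0.5, 0.5]))   # query a point off the grid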
Abstract:
This paper illustrates the philosophy which forms the basis of calibration exercises in general equilibrium macroeconomic models and the details of the procedure, the advantages and the disadvantages of the approach, with particular reference to the issue of testing ``false'' economic models. We provide an overview of the most recent simulation-based approaches to the testing problem and compare them to standard econometric methods used to test the fit of non-linear dynamic general equilibrium models. We illustrate how simulation-based techniques can be used to formally evaluate the fit of a calibrated model to the data and obtain ideas on how to improve the model design using a standard problem in the international real business cycle literature, i.e. whether a model with complete financial markets and no restrictions on capital mobility is able to reproduce the second-order properties of aggregate saving and aggregate investment in an open economy.
Abstract:
We study a general equilibrium model in which entrepreneurs finance investment with optimal financial contracts. Because of enforceability problems, contracts are constrained efficient. We show that limited enforceability amplifies the impact of technological innovations on aggregate output. More generally, we show that lower enforceability of contracts will be associated with greater aggregate volatility. A key assumption for this result is that defaulting entrepreneurs are not excluded from the market.
Abstract:
We perform an experiment on a pure coordination game with uncertainty about the payoffs. Our game is closely related to models that have been used in many macroeconomic and financial applications to solve problems of equilibrium indeterminacy. In our experiment each subject receives a noisy signal about the true payoffs. This game has a unique strategy profile that survives the iterative deletion of strictly dominated strategies (thus a unique Nash equilibrium). The equilibrium outcome coincides, on average, with the risk-dominant equilibrium outcome of the underlying coordination game. The behavior of the subjects converges to the theoretical prediction after enough experience has been gained. The data (and the comments) suggest that subjects do not apply the iterated deletion of dominated strategies through "a priori" reasoning. Instead, they adapt to the responses of other players. Thus, the length of the learning phase clearly varies for the different signals. We also test behavior in a game without uncertainty as a benchmark case. The game with uncertainty is inspired by the "global" games of Carlsson and Van Damme (1993).
Abstract:
There are two fundamental puzzles about trade credit: why does it appear to be so expensive, and why do input suppliers engage in the business of lending money? This paper addresses and answers both questions by analysing the interaction between the financial and the industrial aspects of the supplier-customer relationship. It examines how, in a context of limited enforceability of contracts, suppliers may have a comparative advantage over banks in lending to their customers because they hold the extra threat of stopping the supply of intermediate goods. Suppliers may also act as lenders of last resort, providing insurance against liquidity shocks that may endanger the survival of their customers. The relatively high implicit interest rates of trade credit result from the existence of default and insurance premia. The implications of the model are examined empirically using parametric and nonparametric techniques on a panel of UK firms.
Abstract:
Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
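Schematically (our notation, with an assumed penalty form): split the sample into $D_1$ and $D_2$, the latter of size $n_2$. Use $D_1$ to form an empirical cover $\hat{\cal F}_k \subset {\cal F}_k$ of each class, let $\hat f_k = \arg\min_{f \in \hat{\cal F}_k} \hat L_{D_2}(f)$ be the empirical-risk minimizer over the cover, and select
$$\hat k = \arg\min_k \Big\{ \hat L_{D_2}(\hat f_k) + C \sqrt{\log |\hat{\cal F}_k| / n_2} \Big\},$$
where the constant $C$ and the exact complexity term are stand-ins; the paper optimizes the covering radii and derives the precise penalty from its estimation-error bound. The key point is that the penalty grows with the log cardinality of the empirical cover, a data-driven measure of class complexity.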
Abstract:
In this paper, we incorporate a positive theory of unemployment insurance into a dynamic overlapping generations model with search-matching frictions and on-the-job learning-by-doing. The model shows that societies populated by identical rational agents, but differing in the initial distribution of human capital across agents, may choose very different unemployment insurance levels in a politico-economic equilibrium. The interaction between the political decision about the level of the unemployment insurance and the optimal search behavior of the unemployed gives rise to a self-reinforcing mechanism which may generate multiple steady-state equilibria. In particular, a European-type steady state with high unemployment, low employment turnover and high insurance can co-exist with an American-type steady state with low unemployment, high employment turnover and low unemployment insurance. A calibrated version of the model features two distinct steady-state equilibria with unemployment levels and duration rates resembling those of the U.S. and Europe, respectively.
Abstract:
The paper proposes a numerical solution method for general equilibrium models with a continuum of heterogeneous agents, which combines elements of projection and of perturbation methods. The basic idea is to solve first for the stationary solution of the model, without aggregate shocks but with fully specified idiosyncratic shocks. Afterwards one computes a first-order perturbation of the solution in the aggregate shocks. This approach makes it possible to include a high-dimensional representation of the cross-sectional distribution in the state vector. The method is applied to a model of household saving with uninsurable income risk and liquidity constraints. The model includes not only productivity shocks, but also shocks to redistributive taxation, which cause substantial short-run variation in the cross-sectional distribution of wealth. If those shocks are operative, it is shown that a solution method based on very few statistics of the distribution is not suitable, while the proposed method can solve the model with high accuracy, at least for the case of small aggregate shocks. Techniques are discussed to reduce the dimension of the state space such that higher-order perturbations are feasible. Matlab programs to solve the model can be downloaded.
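In schematic notation (ours, not the paper's): stack the discretized cross-sectional distribution and the individual policy-function coefficients into a state vector $x_t$, so that the discretized equilibrium conditions read $E_t\, F(x_{t+1}, x_t, \varepsilon_{t+1}) = 0$. Step one solves $F(\bar x, \bar x, 0) = 0$ for the stationary solution $\bar x$; step two linearizes $F$ around $\bar x$ and solves the resulting linear rational-expectations system for a law of motion of the form $x_t - \bar x = A (x_{t-1} - \bar x) + B \varepsilon_t$, which is feasible even when $x_t$ is high-dimensional because only first-order terms in the aggregate shocks are kept.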
Abstract:
It is shown that in any affine space of payoff matrices the equilibrium payoffs of bimatrix games are generically finite.