39 results for Non-Stationary Monetary Allocations
Abstract:
In this paper we study a model where non-cooperative agents may exchange knowledge in a competitive environment. As a potential factor that could induce knowledge disclosure between humans, we consider the timing of the moves of players. We develop a simple model of a multistage game in which there are only three players and competition takes place only within two stages. Players can share their private knowledge with their opponents, and knowledge is modelled as influencing their marginal cost of effort. We identify two main mechanisms that work towards knowledge disclosure. The first is that, before the actual competition starts, the stronger player of the first stage of a game may wish to share his knowledge with the "observer", because this reduces the weaker first-stage player's valuation of the prize and, as a result, his effort level and probability of winning in a fight. The second is that the "observer" may sometimes wish to share knowledge with the weaker player of the first stage, because in this way, by increasing that player's probability of winning in that stage, he decreases the probability of winning of the stronger player. As a result, in the second stage the "observer" may have greater chances of meeting the weaker player rather than the stronger one.
Keywords: knowledge sharing, strategic knowledge disclosure, multistage contest game, non-cooperative games
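The abstract does not state the paper's exact contest success function; a minimal Tullock-contest sketch (an assumption, not the paper's model) illustrates both mechanisms: lowering a player's marginal cost of effort, or a rival's prize valuation, shifts equilibrium win probabilities.

```python
# Hypothetical two-player Tullock contest sketch; the paper's actual contest
# form is not given in the abstract. Player i exerts effort x_i at marginal
# cost c_i for a prize worth v_i, winning with probability x_i / (x_1 + x_2).
# Writing each player's "strength" as w_i = v_i / c_i, the unique interior
# equilibrium gives player 1 a win probability of w_1 / (w_1 + w_2).

def win_probability(v1, c1, v2, c2):
    """Equilibrium win probability of player 1 in a lottery contest."""
    w1, w2 = v1 / c1, v2 / c2
    return w1 / (w1 + w2)

# Mechanism sketch: sharing knowledge with the weaker player lowers his
# marginal cost, raising his strength and hence his chance of beating the
# stronger player in the first stage.
p_before = win_probability(1.0, 1.0, 1.0, 2.0)  # weak rival: high cost
p_after = win_probability(1.0, 1.0, 1.0, 1.0)   # rival's cost reduced
assert p_after < p_before
```

With symmetric strengths the contest is a coin flip; any cost reduction for one side tilts the probability in his favour, which is all the two disclosure mechanisms in the abstract rely on.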
Abstract:
This paper uses a micro-founded DSGE model to compare second-best optimal environmental policy and the resulting allocation to first-best allocation. The focus is on the source and size of uncertainty, and how this affects optimal choices and the inferiority of second best vis-à-vis first best.
Abstract:
This study utilizes a macro-based VAR framework to investigate whether stock portfolios formed on the basis of their value, size and past performance characteristics are affected in a differential manner by unexpected US monetary policy actions during the period 1967-2007. Full sample results show that value, small capitalization and past loser stocks are more exposed to monetary policy shocks in comparison to growth, big capitalization and past winner stocks. Subsample analysis, motivated by variation in the realized premia and parameter instability, reveals that monetary policy shocks’ impact on these portfolios is significant and pronounced only during the pre-1983 period.
Abstract:
These notes try to clarify some discussions on the formulation of individual intertemporal behavior under adaptive learning in representative agent models. First, we discuss two suggested approaches and related issues in the context of a simple consumption-saving model. Second, we show that the analysis of learning in the New Keynesian monetary policy model based on “Euler equations” provides a consistent and valid approach.
Abstract:
Cecchetti et al. (2006) develop a method for allocating macroeconomic performance changes among the structure of the economy, the variability of supply shocks and monetary policy. We propose a dual approach to their method by borrowing well-known tools from production theory, namely the Farrell measure and the Malmquist index. Following Färe et al. (1994) we propose a decomposition of the efficiency of monetary policy. It is shown that global efficiency changes can be rewritten as the product of the changes in macroeconomic performance, minimum quadratic loss, and the efficiency frontier.
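In the abstract's own terms, the claimed multiplicative decomposition can be written schematically as follows (the notation is illustrative, not taken from the paper):

```latex
% Schematic Malmquist-style decomposition of the change in the global
% efficiency of monetary policy between periods t and t+1:
\Delta E_{t,t+1} \;=\;
\underbrace{\Delta \mathrm{MP}_{t,t+1}}_{\text{macroeconomic performance}}
\;\times\;
\underbrace{\Delta L^{\min}_{t,t+1}}_{\text{minimum quadratic loss}}
\;\times\;
\underbrace{\Delta F_{t,t+1}}_{\text{efficiency frontier}}
```

The multiplicative form mirrors the Malmquist index tradition of Färe et al. (1994), where productivity change factors into efficiency change and frontier shift.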
Abstract:
The unconditional expectation of social welfare is often used to assess alternative macroeconomic policy rules in applied quantitative research. It is shown that it is generally possible to derive a linear-quadratic problem that approximates the exact non-linear problem in which the unconditional expectation of the objective is maximised and the steady state is distorted. Thus, the measure of policy performance is a linear combination of second moments of economic variables, which is relatively easy to compute numerically and can be used to rank alternative policy rules. The approach is applied to a simple Calvo-type model under various monetary policy rules.
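A welfare criterion of the kind the abstract describes can be sketched as follows (the symbols and weights are assumed for illustration, not drawn from the paper):

```latex
% Illustrative second-moment form of the approximate welfare criterion:
% unconditional expected welfare is a constant minus a weighted sum of
% unconditional second moments of the economic variables x_{i,t},
\mathbb{E}[W] \;\approx\; \bar{W}
  \;-\; \tfrac{1}{2}\sum_{i} \lambda_i \,\mathbb{E}\!\left[x_{i,t}^{2}\right],
% so ranking alternative policy rules only requires computing the second
% moments E[x_{i,t}^2] implied by each rule.
```

This is why the measure is "relatively easy to compute numerically": second moments of a linear model's variables are available in closed form from its state-space representation.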
Abstract:
This paper investigates underlying changes in the UK economy over the past thirty-five years using a small open economy DSGE model. Using Bayesian analysis, we find that UK monetary policy, nominal price rigidity and exogenous shocks are all subject to regime shifting. A model incorporating these changes is used to estimate the realised monetary policy and derive the optimal monetary policy for the UK. This allows us to assess the effectiveness of the realised policy in terms of stabilising economic fluctuations, and, in turn, provide an indication of whether there is room for monetary authorities to further improve their policies.
Abstract:
We propose a nonlinear heterogeneous panel unit root test of the null hypothesis that all units follow unit root processes against the alternative that allows a proportion of units to be generated by globally stationary ESTAR processes and a remaining non-zero proportion to be generated by unit root processes. The proposed test is simple to implement and accommodates cross-sectional dependence. We show that the distribution of the test statistic is free of nuisance parameters as (N, T) → ∞. Monte Carlo simulation shows that our test has correct size and, when the data are generated by globally stationary ESTAR processes, has better power than the recent test proposed in Pesaran (2007). Various applications are provided.
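The two hypotheses can be illustrated with a simulation sketch of a globally stationary ESTAR process (the parameterisation below is illustrative and not taken from the paper): the nonlinear term vanishes when the transition parameter is zero, nesting the unit root null.

```python
import numpy as np

# Hypothetical ESTAR sketch:
#   y_t = y_{t-1} + gamma * y_{t-1} * (exp(-theta * y_{t-1}**2) - 1) + eps_t
# With theta = 0 the nonlinear term is identically zero and y_t is a pure
# random walk (the unit root null); with theta > 0 the process behaves like
# a unit root near zero but mean-reverts once |y_{t-1}| grows large, which
# is the "globally stationary ESTAR" alternative.

def simulate_estar(eps, gamma=0.5, theta=1.0, y0=0.0):
    """Simulate an ESTAR path driven by the innovation sequence eps."""
    y = np.empty(len(eps))
    prev = y0
    for t, e in enumerate(eps):
        prev = prev + gamma * prev * (np.exp(-theta * prev**2) - 1.0) + e
        y[t] = prev
    return y

rng = np.random.default_rng(0)
eps = rng.standard_normal(500)
rw = simulate_estar(eps, theta=0.0)     # null: identical to cumsum(eps)
estar = simulate_estar(eps, theta=1.0)  # alternative: globally stationary
```

Because the process is observationally a unit root in the middle regime, linear tests lose power against this alternative, which motivates the nonlinear test the abstract proposes.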
Abstract:
This paper presents a DSGE model in which long run inflation risk matters for social welfare. Aggregate and welfare effects of long run inflation risk are assessed under two monetary regimes: inflation targeting (IT) and price-level targeting (PT). These effects differ because IT implies base-level drift in the price level, while PT makes the price level stationary around a target price path. Under IT, the welfare cost of long run inflation risk is equal to 0.35 percent of aggregate consumption. Under PT, where long run inflation risk is largely eliminated, it is lowered to only 0.01 percent. There are welfare gains from PT because it raises average consumption for the young and lowers consumption risk substantially for the old. These results are strongly robust to changes in the PT target horizon and fairly robust to imperfect credibility, fiscal policy, and model calibration. While the distributional effects of an unexpected transition to PT are sizeable, they are short-lived and not welfare-reducing.
Abstract:
This paper examines the performance of monetary policy under the new framework established in 1997 up to the end of the Labour government in May 2010. Performance was relatively good in the years before the crisis, but much weaker from 2008. The new framework largely neglected open economy issues, while the Treasury’s EMU assessment in 2003 can be interpreted in different ways. Inflation targeting in the UK and elsewhere may have contributed in some way to the eruption and depth of the financial crisis from 2008, but UK monetary policy responded in a bold and innovative way. Overall, the design and operation of monetary policy were much better than in earlier periods, but there remains scope for significant further evolution.
Abstract:
This paper proposes a new methodology, the Domination Index, to evaluate non-income inequalities between social groups such as inequalities of educational attainment, occupational status, health or subjective well-being. The Domination Index does not require specific cardinalisation assumptions, but only uses the ordinal structure of these non-income variables. We take an axiomatic approach and show that a set of desirable properties for a group inequality measure, when the variable of interest is ordinal, characterizes the Domination Index up to a positive scalar transformation. Moreover, we use the Domination Index to explore the relation between inequality and segregation and show how these two concepts are related theoretically.
Abstract:
Most of the literature estimating DSGE models for monetary policy analysis assumes that policy follows a simple rule. In this paper we allow policy to be described by various forms of optimal policy: commitment, discretion and quasi-commitment. We find that, even after allowing for Markov switching in shock variances, the inflation target and/or rule parameters, the data-preferred description of policy is that the US Fed operates under discretion, with a marked increase in conservatism after the 1970s. Parameter estimates are similar to those obtained under simple rules, except that the degree of habits is significantly lower and the prevalence of cost-push shocks greater. Moreover, we find that the greatest welfare gains from the ‘Great Moderation’ arose from the reduction in the variances of shocks hitting the economy, rather than from increased inflation aversion. However, much of the high inflation of the 1970s could have been avoided had policy makers been able to commit, even without adopting stronger anti-inflation objectives. More recently, the Fed appears to have temporarily relaxed policy following the 1987 stock market crash, and to have lost, without regaining, its post-Volcker conservatism following the bursting of the dot-com bubble in 2000.
Abstract:
This paper revisits the argument that the stabilisation bias that arises under discretionary monetary policy can be reduced if policy is delegated to a policymaker with redesigned objectives. We study four delegation schemes: price level targeting, interest rate smoothing, speed limits and straight conservatism. These can all increase social welfare in models with a unique discretionary equilibrium. We investigate how these schemes perform in a model with capital accumulation, where uniqueness does not necessarily apply. We discuss how multiplicity arises and demonstrate that no delegation scheme is able to eliminate all potential bad equilibria. Price level targeting has two interesting features. It can create a new equilibrium that is welfare-dominated, but it can also alter equilibrium stability properties and make coordination on the best equilibrium more likely.
Abstract:
This paper studies the behavior of a central bank that seeks to conduct policy optimally while having imperfect credibility and harboring doubts about its model. Taking the Smets-Wouters model as the central bank's approximating model, the paper's main findings are as follows. First, a central bank's credibility can have large consequences for how policy responds to shocks. Second, central banks that have low credibility can benefit from a desire for robustness, because this desire motivates the central bank to follow through on policy announcements that would otherwise not be time-consistent. Third, even relatively small departures from perfect credibility can produce important declines in policy performance. Finally, as a technical contribution, the paper develops a numerical procedure to solve the decision problem facing an imperfectly credible policymaker that seeks robustness.
Abstract:
We define a solution concept, perfectly contracted equilibrium, for an intertemporal exchange economy where agents are simultaneously price takers in spot commodity markets while engaging in non-Walrasian contracting over future prices. In a setting with subjective uncertainty over future prices, we show that perfectly contracted equilibrium outcomes are a subset of Pareto optimal allocations. It is a robust possibility for perfectly contracted equilibrium outcomes to differ from Arrow-Debreu equilibrium outcomes. We show that both centralized banking and retrading with bilateral contracting can lead to perfectly contracted equilibria.