900 results for new keynesian models


Relevance: 100.00%

Abstract:

Small-scale dynamic stochastic general equilibrium (DSGE) models have been treated as the benchmark in much of the monetary policy literature, given their ability to explain the impact of monetary policy on output, inflation and financial markets. The empirical failure of New Keynesian models is partly due to the Rational Expectations (RE) paradigm, which imposes a tight structure on the dynamics of the system. Under this hypothesis, agents are assumed to know the data generating process. In this paper, we propose the econometric analysis of New Keynesian DSGE models under an alternative expectations-generating paradigm, which can be regarded as an intermediate position between rational expectations and learning, namely an adapted version of the "Quasi-Rational" Expectations (QRE) hypothesis. Given the agents' statistical model, we build a pseudo-structural form from the baseline system of Euler equations, imposing that the lag length of the reduced form is the same as in the `best' statistical model.
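
To fix ideas, the pseudo-structural form can be sketched as follows (the notation is ours, not the paper's). Write the baseline system of Euler equations as

\[
A_0 y_t = A_1 \hat{E}_t\, y_{t+1} + A_2 y_{t-1} + B u_t ,
\]

where under RE the operator \hat{E}_t is the mathematical expectation conditional on the true data generating process. Under the QRE variant, \hat{E}_t\, y_{t+1} is instead the one-step-ahead forecast implied by the agents' statistical model, say a VAR(k),

\[
y_t = \Phi_1 y_{t-1} + \cdots + \Phi_k y_{t-k} + \varepsilon_t ,
\qquad
\hat{E}_t\, y_{t+1} = \Phi_1 y_t + \cdots + \Phi_k y_{t-k+1} ,
\]

so that substituting the forecast into the Euler equations yields a reduced form whose lag length is, by construction, that of the `best' statistical model.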

Relevance: 100.00%

Abstract:

Regional development can follow different strategies:
• Relocation of industry clusters
• Foreign Direct Investment attraction
• Innovation based on new business models

The Regional Government of Madrid (3rd largest GDP in the EU) selected strategic industries in which to compete & innovate:
• Travel & Transportation
• Aerospace
• Nanotech & Biotech
• ICTs
• Energy

Relevance: 100.00%

Abstract:

In this paper, we empirically examine how professional service firms are adapting their promotion and career models to new market and institutional pressures, without losing the benefits of the traditional up-or-out tournament. Based on an in-depth qualitative study of 10 large UK-based law firms, we find that most of these firms do not have a formal up-or-out policy, but that the up-or-out rule operates in practice. We also find that most firms have introduced alternative roles and a novel career policy that offers a holistic learning and development deal to associates, without any expectation that unsuccessful candidates for promotion to partner should quit the firm. While this policy and the new roles formally contradict the principle of up-or-out by creating permanent non-partner positions, in practice they coexist. We conclude that the motivational power of the up-or-out tournament remains intact, notwithstanding the changes to the internal labour market structure of these professional service firms.

Relevance: 100.00%

Abstract:

New Keynesian models rely heavily on two workhorse models of nominal inertia - price contracts of random duration (Calvo, 1983) and price adjustment costs (Rotemberg, 1982) - to generate a meaningful role for monetary policy. These alternative descriptions of price stickiness are often used interchangeably since, to a first-order approximation, they imply an isomorphic Phillips curve and, if the steady state is efficient, identical objectives for the policy maker and, as a result, the same policy conclusions in an LQ framework. In this paper we compute time-consistent optimal monetary policy in benchmark New Keynesian models containing each form of price stickiness. Using global solution techniques, we find that the inflation bias problem under Calvo contracts is significantly greater than under Rotemberg pricing, despite the fact that the former typically exhibits far greater welfare costs of inflation. The rates of inflation observed under this policy are non-trivial and suggest that the model can comfortably generate the rates of inflation at which the problematic issues highlighted in the trend inflation literature emerge, as well as the movements in trend inflation emphasized in empirical studies of the evolution of inflation. Finally, we consider the response to cost-push shocks across both models and find that these, too, can be significantly different. The choice of which form of nominal inertia to adopt is not innocuous.
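
The first-order equivalence invoked here is a standard result; as a sketch (textbook notation and parameterization, not the paper's), both pricing schemes linearize to the same Phillips curve

\[
\pi_t = \beta E_t \pi_{t+1} + \kappa\, \widehat{mc}_t ,
\qquad
\kappa^{\text{Calvo}} = \frac{(1-\alpha)(1-\alpha\beta)}{\alpha},
\qquad
\kappa^{\text{Rotemberg}} = \frac{\varepsilon - 1}{\phi},
\]

where \alpha is the probability that a firm cannot reprice, \phi the Rotemberg adjustment-cost parameter, and \varepsilon the elasticity of substitution. Calibrating \phi so the two slopes coincide makes the models indistinguishable up to first order, which is precisely why the global (nonlinear) solutions computed in the paper are needed to tell them apart.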

Relevance: 100.00%

Abstract:

We describe some of the main features of the recent vintage of macroeconomic models used for monetary policy evaluation. We point to some of the key differences with respect to the earlier generation of macro models, and highlight the insights for policy that these new frameworks have to offer. Our discussion emphasizes two key aspects of the new models: the significant role of expectations of future policy actions in the monetary transmission mechanism, and the importance for the central bank of tracking the flexible-price equilibrium values of the natural levels of output and the real interest rate. We argue that both features have important implications for the conduct of monetary policy.
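
Both points can be read off the benchmark two-equation core of these models (a textbook sketch, not the full detail of the models surveyed):

\[
\tilde{y}_t = E_t \tilde{y}_{t+1} - \frac{1}{\sigma}\left(i_t - E_t \pi_{t+1} - r_t^{n}\right),
\qquad
\pi_t = \beta E_t \pi_{t+1} + \kappa\, \tilde{y}_t .
\]

Iterating the first equation forward (assuming the gap eventually closes) gives \tilde{y}_t = -\frac{1}{\sigma}\sum_{k \ge 0} E_t\left(i_{t+k} - \pi_{t+k+1} - r^{n}_{t+k}\right): the output gap depends on the entire expected path of policy rates relative to the natural rate r_t^{n}, which is why managing expectations of future policy and tracking the natural levels of output and the real interest rate are central to the policy problem.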

Relevance: 100.00%

Abstract:

We develop and estimate a structural model of inflation that allows for a fraction of firms that use a backward-looking rule to set prices. The model nests the purely forward-looking New Keynesian Phillips curve as a particular case. We use measures of marginal costs as the relevant determinant of inflation, as the theory suggests, instead of an ad hoc output gap. Real marginal costs are a significant and quantitatively important determinant of inflation. Backward-looking price setting, while statistically significant, is not quantitatively important. Thus, we conclude that the New Keynesian Phillips curve provides a good first approximation to the dynamics of inflation.
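
In reduced form, the hybrid specification described here reads (standard hybrid-NKPC notation):

\[
\pi_t = \lambda\, \widehat{mc}_t + \gamma_f\, E_t \pi_{t+1} + \gamma_b\, \pi_{t-1},
\]

where \widehat{mc}_t is real marginal cost and the coefficients \lambda, \gamma_f, \gamma_b are functions of the underlying frequency of price adjustment and of the fraction of backward-looking price setters; setting that fraction to zero recovers the purely forward-looking case \gamma_b = 0. The empirical finding above is that \lambda is significant and quantitatively important, while \gamma_b, though statistically significant, is small.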

Relevance: 100.00%

Abstract:

This paper studies the behavior of fiscal multipliers in two different economic environments: complete markets and incomplete markets. Based on steady-state analysis, output multipliers are found to lie between 0.49 and 0.66 when markets are complete, and between 0.75 and 0.94 when markets are incomplete. These results indicate that the market structure, which reflects the degree of risk sharing and the intensity of the precautionary motive faced by individuals, plays a key role in determining fiscal multipliers. In the second part of the paper, we perform an exercise to analyze the dynamic response of macroeconomic aggregates to an exogenous and unexpected rise in government spending financed by lump-sum taxes. In this case, impact output multipliers vary between 0.64 and 0.68 under complete markets, and between 1.05 and 1.20 when markets are incomplete. The results found under incomplete markets are very close to those found in the related literature, which usually relies on an econometric approach or on calibrated/estimated New Keynesian models. These results show that taking into account deficiencies in insurance mechanisms can be an interesting way to reconcile theoretical models with the results found in the related literature, without the need for ad hoc assumptions about price stickiness.
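
For clarity, the two objects being compared are (our notation):

\[
m_{ss} = \frac{\Delta \bar{Y}}{\Delta \bar{G}},
\qquad
m_{0} = \frac{\mathrm{d} Y_0}{\mathrm{d} G_0},
\]

i.e. the steady-state multiplier m_{ss} compares output across steady states with different levels of government spending, while the impact multiplier m_{0} measures the period-0 output response to the unexpected, lump-sum-financed spending shock.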

Relevance: 100.00%

Abstract:

Many studies of the Brazilian aggregate demand curve, the IS curve, have been produced since the implementation of the Plano Real and, especially, after the adoption of the floating exchange rate regime. This paper aims to estimate several specifications of the Brazilian IS curve for the period after the adoption of the floating exchange rate, the inflation-targeting regime and the Fiscal Responsibility Law (Lei de Responsabilidade Fiscal), i.e. after the year 2000. The specifications of the estimated curves are based on the New Keynesian model, with some explanatory variables added in order to capture the effect on aggregate demand of greater financial intermediation on the potency of monetary policy, as well as the effect of the fiscal effort made by the Brazilian government. The paper uses the Generalized Method of Moments (GMM) to estimate the forward-looking specification of the IS curve and Ordinary Least Squares (OLS) to estimate its backward-looking version. The results show strong significance for the output gap in all specifications. The forward-looking specifications yield significant coefficients, but with signs opposite to those expected for the interest rate and the primary surplus. In the backward-looking regressions the coefficients have the expected signs but are not significant.
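
A generic forward-looking specification of the kind estimated here reads (our notation; the paper's exact regressors differ as described above):

\[
\tilde{y}_t = \beta_1 E_t\, \tilde{y}_{t+1} - \beta_2\left(i_t - E_t \pi_{t+1}\right) + \beta_3 f_t + \epsilon_t ,
\]

where \tilde{y}_t is the output gap and f_t stands in for the additional demand shifters (financial intermediation, primary surplus). Because E_t \pi_{t+1} and E_t \tilde{y}_{t+1} are replaced by realized values in estimation, the error term is correlated with the regressors, which is why GMM with lagged instruments is used for this version, while OLS suffices for the backward-looking one.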

Relevance: 100.00%

Abstract:

In recent years there has been increasing concern about the identification of parameters in dynamic stochastic general equilibrium (DSGE) models. Given the structure of DSGE models, it may be difficult to determine whether a parameter is identified. For the researcher using Bayesian methods, a lack of identification may not be evident, since the posterior of a parameter of interest may differ from its prior even if the parameter is unidentified. We show that this can be the case even if the priors assumed on the structural parameters are independent. We suggest two Bayesian identification indicators that do not suffer from this difficulty and are relatively easy to compute. The first applies to DSGE models where the parameters can be partitioned into those that are known to be identified and the rest, where it is not known whether they are identified. In such cases, the marginal posterior of an unidentified parameter will equal the posterior expectation of the prior for that parameter conditional on the identified parameters. The second indicator is more generally applicable and considers the rate at which the posterior precision is updated as the sample size (T) is increased. For identified parameters the posterior precision rises with T, whilst for an unidentified parameter its posterior precision may be updated, but its rate of update will be slower than T. This result assumes that the identified parameters are √T-consistent, but similar differential rates of update for identified and unidentified parameters can be established in the case of super-consistent estimators. These results are illustrated by means of simple DSGE models.
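
The second indicator can be illustrated with a toy conjugate example (our construction, not the authors' code): with y_t = a + b + e_t, only the sum a + b is identified, and in the linear-Gaussian setting the posterior precisions are available in closed form.

```python
import numpy as np

# Toy model: y_t = a + b + e_t, e_t ~ N(0, 1); only a + b is identified.
# Independent priors a, b ~ N(0, 1). The posterior covariance is available
# in closed form and does not depend on the realized data y.
sigma2 = 1.0
for T in [10, 100, 1_000, 10_000]:
    X = np.ones((T, 2))                                  # y = X @ (a, b) + e
    post_cov = np.linalg.inv(np.eye(2) + X.T @ X / sigma2)
    prec_a = 1.0 / post_cov[0, 0]                        # precision of a alone
    w = np.array([1.0, 1.0])
    prec_sum = 1.0 / (w @ post_cov @ w)                  # precision of a + b
    print(f"T={T:6d}  prec(a)={prec_a:7.2f}  prec(a+b)={prec_sum:10.2f}")
```

The precision of the identified combination a + b grows linearly with T, while the precision of the unidentified a plateaus at 2 (twice its prior precision): exactly the differential updating rate the indicator exploits.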

Relevance: 100.00%

Abstract:

Background: In a previous study, the European Organisation for Research and Treatment of Cancer (EORTC) reported a scoring system to predict survival of patients with low-grade gliomas (LGGs). A major issue in the diagnosis of brain tumors is the lack of agreement among pathologists. New models in patients with LGGs diagnosed by central pathology review are needed.
Methods: Data from 339 EORTC patients with LGGs diagnosed by central pathology review were used to develop new prognostic models for progression-free survival (PFS) and overall survival (OS). Data from 450 patients with centrally diagnosed LGGs recruited into 2 large studies conducted by North American cooperative groups were used to validate the models.
Results: Both PFS and OS were negatively influenced by the presence of baseline neurological deficits, a shorter time since first symptoms (<30 wk), an astrocytic tumor type, and tumors larger than 5 cm in diameter. Early irradiation improved PFS but not OS. Three risk groups have been identified (low, intermediate, and high) and validated.
Conclusions: We have developed new prognostic models in a more homogeneous LGG population diagnosed by central pathology review. This population better fits with modern practice, where patients are enrolled in clinical trials based on central or panel pathology review. We could validate the models in a large, external, and independent dataset. The models can divide LGG patients into 3 risk groups and provide reliable individual survival predictions. Inclusion of other clinical and molecular factors might still improve the models' predictions.
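
As a purely illustrative sketch of how such factor-based risk grouping works: the four factors below are those named in the abstract, but the counting rule and cut-offs are hypothetical, not the paper's validated model.

```python
# Hypothetical factor-count score; illustrative only, NOT the published model.
def lgg_risk_group(neuro_deficit: bool, weeks_since_symptoms: float,
                   astrocytic: bool, diameter_cm: float) -> str:
    score = sum([
        neuro_deficit,                    # baseline neurological deficits
        weeks_since_symptoms < 30,        # shorter time since first symptoms
        astrocytic,                       # astrocytic tumor type
        diameter_cm > 5.0,                # tumor larger than 5 cm in diameter
    ])
    if score <= 1:
        return "low risk"
    return "intermediate risk" if score == 2 else "high risk"

print(lgg_risk_group(False, 52.0, False, 3.0))   # -> low risk
```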

Relevance: 100.00%

Abstract:

The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated by using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of all models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
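
Of the models compared, the variance ratio reference method is simple enough to sketch. Below is a minimal illustration on synthetic Weibull-distributed data, in the spirit of the paper's evaluation design; it is our sketch, not the authors' implementation.

```python
import numpy as np

# Variance-ratio MCP: fit means and standard deviations on the concurrent
# period, then rescale the long-term reference record to the site.
def variance_ratio_mcp(site_concurrent, ref_concurrent, ref_longterm):
    mu_s, sd_s = site_concurrent.mean(), site_concurrent.std()
    mu_r, sd_r = ref_concurrent.mean(), ref_concurrent.std()
    pred = mu_s + (sd_s / sd_r) * (ref_longterm - mu_r)
    return np.clip(pred, 0.0, None)      # wind speeds cannot be negative

rng = np.random.default_rng(0)
ref = rng.weibull(2.0, 100_000) * 8.0                 # synthetic reference series
site = 0.9 * ref[:8760] + rng.normal(0, 0.8, 8760)    # one concurrent "year"
print(variance_ratio_mcp(site, ref[:8760], ref).mean())
```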

Relevance: 90.00%

Abstract:

Expectations about the future are central to the determination of current macroeconomic outcomes and to the formulation of monetary policy. Recent literature has explored ways of supplementing the benchmark of rational expectations with explicit models of expectations formation that rely on econometric learning. Some apparently natural policy rules turn out to imply expectational instability of private agents' learning. We use the standard New Keynesian model to illustrate this problem and survey the key results about interest-rate rules that deliver both uniqueness and stability of equilibrium under econometric learning. We then consider some practical concerns such as measurement errors in private expectations, observability of variables and learning of structural parameters required for policy. We also discuss some recent applications including policy design under perpetual learning, estimated models with learning, recurrent hyperinflations, and macroeconomic policy to combat liquidity traps and deflation.
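
The flavor of these stability results can be conveyed with a scalar toy model (ours, not the survey's): agents forecast with a recursively estimated mean, and the estimate converges to the rational expectations equilibrium (REE) only when the expectational feedback satisfies the E-stability condition.

```python
import numpy as np

# Toy model: x_t = mu + beta * a_{t-1} + e_t, where a_{t-1} is the agents'
# current forecast, updated by recursive least squares (decreasing gain).
# The REE is x* = mu / (1 - beta); learning converges to it when beta < 1.
def simulate_learning(beta: float, mu: float = 1.0, T: int = 5_000,
                      seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    a = 0.0                               # initial belief
    for t in range(1, T + 1):
        x = mu + beta * a + rng.normal(0.0, 0.1)
        a += (x - a) / t                  # decreasing-gain update
    return a

print(simulate_learning(0.5), 1.0 / (1.0 - 0.5))   # belief ≈ REE value 2.0
```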

Relevance: 90.00%

Abstract:

Recent work on optimal policy in sticky price models suggests that demand management through fiscal policy adds little to optimal monetary policy. We explore this consensus assignment in an economy subject to ‘deep’ habits at the level of individual goods, where the countercyclical mark-ups these imply can result in government spending crowding in private consumption in the short run. We explore the robustness of this mechanism to the existence of price discrimination in the supply of goods to the public and private sectors. We then describe optimal monetary and fiscal policy in our New Keynesian economy subject to the additional externality of deep habits, and explore the ability of simple (but potentially nonlinear) policy rules to mimic fully optimal policy.
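
The ‘deep’ habits referred to here originate in Ravn, Schmitt-Grohé and Uribe (2006); as a summary of that framework (not necessarily this paper's exact specification), habits are formed over individual goods rather than aggregate consumption:

\[
x_t = \left[ \int_0^1 \left( c_{it} - \theta\, s_{i,t-1} \right)^{1 - 1/\eta} di \right]^{\frac{1}{1 - 1/\eta}},
\qquad
s_{it} = \rho\, s_{i,t-1} + (1 - \rho)\, c_{it},
\]

where s_{it} is the habit stock for good i. Because each good's demand then depends on its own past sales, firms face a dynamic trade-off between current mark-ups and future customer base; this is the source of the countercyclical mark-ups that allow government purchases to crowd in private consumption.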