882 results for Monetary policy measure
Abstract:
We develop a two-sector economy in which each sector is classified as classical/Keynesian (contract/noncontract) in the labor market and traded/nontraded in the product market. We consider the effects of changes in monetary and exchange rate policy on sectoral and aggregate prices and outputs for different sectoral characterizations. Duca (1987) shows that nominal wage rigidity facilitates the effectiveness of monetary policy even in the classical sector. We demonstrate that trade price rigidity provides a similar channel for the effectiveness of monetary policy, in this case even when both sectors are classical.
Abstract:
This paper provides a case study characterizing the monetary policy regime in Malaysia from a medium- and long-term perspective. Specifically, we ask how the central bank of Malaysia, Bank Negara Malaysia (BNM), has structured its monetary policy regime, and how it has conducted monetary and exchange rate policy under that regime. Through three empirical analyses, we characterize the Malaysian monetary and exchange rate policy regime as a set of intermediate solutions along three dimensions: the degree of autonomy in monetary policy, the degree of exchange rate variability, and the degree of capital mobility.
Abstract:
This paper first takes a step back to situate the recent adoption of the Treaty on Stability, Coordination and Governance in the Economic and Monetary Union in the context of discussions on the Stability and Growth Pact (SGP) and the ‘Maastricht criteria’ fixed in the Maastricht Treaty for membership in the Economic and Monetary Union (EMU), within the longer perspective of the sharing of competences for macroeconomic policy-making in the EU. It then presents the main features of the new so-called ‘Fiscal Compact’ and its relationship to the SGP, and draws some conclusions on the importance and relevance of this new step in the process of economic policy coordination. It concludes that the Treaty on Stability, Coordination and Governance in the Economic and Monetary Union does not seem to offer a definitive solution to the problem of finding the appropriate budgetary-monetary policy mix in EMU, a problem already well identified in the Delors report in 1989, regularly emphasised ever since, and now seriously aggravated by the crisis in the eurozone. Furthermore, implementation of this Treaty may, under certain circumstances, increase uncertainty about the distribution of competences between the European Parliament and national parliaments, and between the former and the Commission and the Council.
Abstract:
Highlights
• Low interest rates, asset purchases and other accommodative monetary policy measures tend to increase asset prices and thereby benefit the wealthier segments of society, at least in the short term, given that asset holdings are concentrated mainly among the richest households.
• Such policies also support employment, economic activity, incomes and inflation, which can benefit the poor and middle class, whose incomes depend more heavily on employment and who tend to spend a large share of their income on debt service.
• Monetary policy should focus on its mandate, while fiscal and social policies should address widening inequalities by revising national social redistribution systems for improved efficiency, intergenerational equity and fair burden sharing between the wealthy and the poor.
Abstract:
The German Constitutional Court (BVG) recently referred several questions to the European Court of Justice for a preliminary ruling. They concern the legality of the European Central Bank’s Outright Monetary Transactions (OMT) mechanism created in 2012. Simultaneously, the German Court has threatened to disrupt the implementation of OMT in Germany if its very restrictive analysis is not validated by the European Court of Justice. This raises fundamental questions about the future efficiency of the ECB’s monetary policy, the damage to the independence of the ECB, the balance of power between judges and the political organs in charge of economic policy in Germany and in Europe, and finally the relationship between the BVG and other national or European courts.
Abstract:
Central banks in the developed world are being misled into fighting the perceived dangers of a ‘deflationary spiral’ because they are looking at only one indicator: consumer prices. This Policy Brief finds that while consumer prices are flat, broader price indices do not show any sign of impending deflation: the GDP deflator is increasing in the US, Japan and the euro area by about 1.2-1.5%. Nor is the real economy sending deflationary signals: unemployment is at record lows in the US and Japan, and is declining in the euro area, while GDP growth is at, or above, potential. Thus, the overall macroeconomic situation gives no indication of an imminent deflationary spiral. In today’s high-debt environment, the authors argue that central banks should be looking at the GDP deflator and the growth of nominal GDP, instead of CPI inflation. Nominal GDP growth, as forecast by the major official institutions, remains robust and is in excess of nominal interest rates. They conclude that if the ECB were to set the interest rate according to the standard rules of thumb for monetary policy, which take into account both the real economy and the developments of broader price indicators, it would start normalising its policy now, instead of pondering additional measures to fight a deflation that does not exist. In short, economic conditions are slowly normalising; so should monetary policy.
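The "standard rules of thumb" invoked here can be made concrete with a Taylor-type rule. A minimal sketch, using the classic Taylor (1993) coefficients; the inputs (GDP-deflator inflation of 1.4%, a closed output gap) and the 2% neutral rate are illustrative assumptions, not numbers from the Policy Brief:

```python
# Minimal Taylor-rule sketch. r_star, pi_target and the 0.5 weights are the
# classic Taylor (1993) values; the inputs below are illustrative only.

def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0,
                w_pi=0.5, w_gap=0.5):
    """i = r* + pi + 0.5*(pi - pi*) + 0.5*gap, all in percent."""
    return r_star + inflation + w_pi * (inflation - pi_target) + w_gap * output_gap

# GDP-deflator inflation of about 1.4% and a closed output gap already imply
# a clearly positive policy rate under the rule of thumb.
print(round(taylor_rate(inflation=1.4, output_gap=0.0), 2))  # 3.1
```

Evaluated on broader price indicators rather than flat CPI inflation, such a rule prescribes normalisation rather than additional easing, which is the brief's point.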
Abstract:
This study compares monetary and multidimensional poverty measures for the Lao People’s Democratic Republic. Using household data of 2007/2008, we compare the empirical outcomes of the country’s current official monetary poverty measure with those of a multidimensional poverty measure. We analyze which population subgroups are identified as poor by both measures and thus belong to the category of the poorest of the poor; and we look at which subgroups are identified as poor by only one of the measures and belong either to the category of the income-poor (identified as poor only by the monetary measure) or to that of the overlooked poor (identified as poor only by the multidimensional poverty measure). Furthermore, we examine drivers of these differences using a multinomial regression model and find that monetary poverty does not capture the multiple deprivations of ethnic minorities, who are only identified as poor when using a multidimensional poverty measure. We conclude that complementing the monetary poverty measure with a multidimensional poverty index would enable more effective targeting of poverty reduction efforts.
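The four-way classification the study describes amounts to crossing the two poverty criteria. A minimal sketch, assuming a hypothetical monetary poverty line and multidimensional deprivation cutoff; the thresholds and toy households are illustrative, not the Lao PDR data:

```python
# Cross the monetary and multidimensional criteria into four groups.
# Thresholds and households are made-up illustrations.

def classify(income, deprivation_score, poverty_line, mpi_cutoff):
    money_poor = income < poverty_line
    multi_poor = deprivation_score >= mpi_cutoff
    if money_poor and multi_poor:
        return "poorest of the poor"
    if money_poor:
        return "income-poor"       # seen only by the monetary measure
    if multi_poor:
        return "overlooked poor"   # seen only by the multidimensional measure
    return "non-poor"

# (income, deprivation score) for four hypothetical households
for income, score in [(120, 0.45), (300, 0.10), (90, 0.05), (250, 0.50)]:
    print(classify(income, score, poverty_line=150, mpi_cutoff=0.33))
```

The "overlooked poor" cell is the one the study argues an income-only measure misses, notably among ethnic minorities.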
Abstract:
The standard Blanchard-Quah (BQ) decomposition forces aggregate demand and supply shocks to be orthogonal. However, this assumption is problematic for a nation with an inflation target. The very notion of inflation targeting means that monetary policy reacts to changes in aggregate supply. This paper employs a modification of the BQ procedure that allows for correlated shifts in aggregate supply and demand. It is found that shocks to Australian aggregate demand and supply are highly correlated. The estimated shifts in the aggregate demand and supply curves are then used to measure the effects of inflation targeting on the Australian inflation rate and level of GDP.
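The orthogonality restriction at issue can be seen in the mechanics of standard BQ identification, which the paper modifies. A minimal sketch with an assumed bivariate VAR(1); the coefficient matrix and error covariance are made-up numbers, and the paper's correlated-shocks extension is not implemented here:

```python
# Standard (orthogonal) Blanchard-Quah identification on an assumed VAR(1).
import numpy as np

A = np.array([[0.5, 0.1],
              [0.2, 0.3]])         # reduced-form VAR(1) coefficients (assumed)
Sigma_u = np.array([[1.0, 0.3],
                    [0.3, 0.5]])   # reduced-form error covariance (assumed)

Psi1 = np.linalg.inv(np.eye(2) - A)          # long-run multiplier (I - A)^{-1}
# Long-run impact matrix chosen lower triangular: the second (demand) shock
# is restricted to have no long-run effect on the first variable (output).
Theta = np.linalg.cholesky(Psi1 @ Sigma_u @ Psi1.T)
B0 = np.linalg.inv(Psi1) @ Theta             # impact matrix of structural shocks

print(np.allclose(B0 @ B0.T, Sigma_u))       # True: reproduces Sigma_u
print(abs((Psi1 @ B0)[0, 1]) < 1e-12)        # True: long-run restriction holds
```

Because the structural shocks are given an identity covariance by construction, supply and demand shocks are forced to be orthogonal, which is exactly the assumption the paper relaxes for an inflation-targeting economy.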
Abstract:
Economic and Monetary Union can be characterised as a complicated set of legislation and institutions governing monetary and fiscal responsibilities. Fiscal responsibility is to be guided by the Stability and Growth Pact, which sets rules for fiscal policy and makes discretionary fiscal policy virtually impossible. To analyse the effects of the fiscal and monetary policy mix, we modify the New Keynesian framework to allow for supply effects of fiscal policy. We show that defining a supply-side channel for fiscal policy through an endogenous output gap changes the stabilising properties of monetary policy rules. The stability conditions are affected by fiscal policy, so that the dichotomy between active (passive) monetary policy and passive (active) fiscal policy as stabilising regimes does not hold, and an active monetary - active fiscal policy regime can be consistent with dynamic stability of the economy. We show that taking supply-side effects into account yields more persistent inflation and output reactions. We also show that the dichotomy does not hold for a variety of fiscal policy rules based on government debt and the budget deficit, using the tax smoothing hypothesis and formulating the tax rules as difference equations. The debt rule with active monetary policy results in indeterminacy, while the deficit rule produces a determinate solution with active monetary policy, even when fiscal policy is also active. Combining the fiscal requirements in a single rule results in cyclical responses to shocks, with a larger amplitude of the cycle when more weight is placed on debt than on the deficit. Combining optimised monetary policy with fiscal policy rules shows that, under discretionary monetary policy, the fiscal policy regime affects the size of the inflation bias. We also show that commitment to an optimal monetary policy not only corrects the inflation bias but also increases the persistence of output reactions. With fiscal policy rules based on the deficit, the tax smoothing hypothesis can be retained even in a sticky price model.
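The active/passive dichotomy rests on determinacy conditions of the kind checked below. A minimal sketch for the basic two-equation New Keynesian model without the paper's fiscal supply-side channel; the parameter values are assumptions for illustration:

```python
# Blanchard-Kahn determinacy check for the basic NK model with rule i = phi_pi * pi.
import numpy as np

beta, kappa, sigma = 0.99, 0.3, 1.0   # assumed discount factor, slope, IS elasticity

def determinate(phi_pi):
    """Write E[z_{t+1}] = M z_t for z = (x, pi); both are jump variables,
    so determinacy requires both eigenvalues of M outside the unit circle."""
    M = np.array([[1 + sigma * kappa / beta, sigma * (phi_pi - 1 / beta)],
                  [-kappa / beta,            1 / beta]])
    return bool(np.all(np.abs(np.linalg.eigvals(M)) > 1))

print(determinate(1.5))   # True: an active rule satisfies the Taylor principle
print(determinate(0.8))   # False: a passive rule alone leaves indeterminacy
```

The paper's point is that adding a fiscal supply-side channel changes matrices like M, so eigenvalue counts, and hence the active/passive classification, no longer deliver the textbook verdicts.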
Abstract:
This paper estimates a standard version of the New Keynesian monetary (NKM) model under alternative specifications of the monetary policy rule using U.S. and Eurozone data. The estimation procedure implemented is a classical method based on the indirect inference principle. An unrestricted VAR is considered as the auxiliary model. On the one hand, the estimation method proposed overcomes some of the shortcomings of using a structural VAR as the auxiliary model in order to identify the impulse response that defines the minimum distance estimator implemented in the literature. On the other hand, by following a classical approach we can further assess the estimation results found in recent papers that follow a maximum-likelihood Bayesian approach. The estimation results show that some structural parameter estimates are quite sensitive to the specification of monetary policy. Moreover, the estimation results in the U.S. show that the fit of the NKM under an optimal monetary plan is much worse than the fit of the NKM model assuming a forward-looking Taylor rule. In contrast to the U.S. case, in the Eurozone the best fit is obtained assuming a backward-looking Taylor rule, but the improvement is rather small with respect to assuming either a forward-looking Taylor rule or an optimal plan.
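The indirect inference principle can be sketched in miniature: simulate the structural model, fit the auxiliary model to both observed and simulated data, and choose the structural parameter that best matches the auxiliary estimates. Below, a plain AR(1) stands in for the NKM simulator and a one-variable autoregression for the auxiliary VAR; all values are illustrative, none come from the paper:

```python
# Toy indirect inference: AR(1) "structural model", OLS slope as auxiliary statistic.
import numpy as np

def simulate(phi, shocks):
    """Simulate y_t = phi * y_{t-1} + e_t from given shocks."""
    y = np.zeros(len(shocks))
    for t in range(1, len(shocks)):
        y[t] = phi * y[t - 1] + shocks[t]
    return y

def auxiliary(y):
    """Auxiliary-model estimate: OLS slope of y_t on y_{t-1}."""
    return float(np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1]))

observed = simulate(0.7, np.random.default_rng(0).normal(size=2000))
beta_obs = auxiliary(observed)              # pretend this came from real data

# Fixed simulation draws reused across the grid, as indirect inference requires.
sim_shocks = np.random.default_rng(1).normal(size=(5, 2000))

def distance(phi):
    return (np.mean([auxiliary(simulate(phi, e)) for e in sim_shocks]) - beta_obs) ** 2

phi_hat = min(np.linspace(0.0, 0.95, 96), key=distance)
print(round(phi_hat, 2))   # typically recovers a value near the true 0.7
```

The paper's estimator works the same way but matches an unrestricted VAR fitted to U.S. and Eurozone data against VARs fitted to data simulated from the NKM model.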
Abstract:
The aim of this paper is to study systematic liquidity on the Euronext Lisbon Stock Exchange. The motivation for this research comes from the growing interest in the financial literature in stock liquidity and in the implications of commonality in liquidity for asset pricing, since it could represent a source of non-diversifiable risk. Specifically, we analyse whether common factors drive the variation in individual stock liquidity, and what causes the inter-temporal variation of aggregate liquidity. Monthly data for the period between January 1988 and December 2011 are used to compute some of the most widely used liquidity proxies: bid-ask spreads, the turnover rate, trading volume, the proportion of zero returns and the illiquidity ratio. Following the methodology of Chordia et al. (2000), some evidence of commonality in liquidity is found in the Portuguese stock market when the proportion of zero returns is used as the liquidity measure. As for the factors driving the inter-temporal variation of Portuguese stock market liquidity, results obtained within a VAR framework suggest that changes in real economic activity, monetary policy (proxied by changes in the monetary aggregate M1) and stock market returns play an important role as determinants of commonality in liquidity.
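Two of the proxies listed in the abstract are easy to state precisely. A minimal sketch on made-up daily data (the paper itself works with monthly Euronext Lisbon data, and the exact formulas used there may differ in scaling):

```python
# Two standard liquidity proxies on illustrative daily returns and volumes.

def amihud_illiquidity(returns, volumes):
    """Amihud (2002) illiquidity ratio: mean of |return| / volume."""
    return sum(abs(r) / v for r, v in zip(returns, volumes)) / len(returns)

def zero_return_proportion(returns):
    """Share of days with a zero return, a low-liquidity signal."""
    return sum(1 for r in returns if r == 0.0) / len(returns)

returns = [0.01, 0.0, -0.02, 0.0, 0.005]
volumes = [1e6, 5e5, 2e6, 8e5, 1e6]
print(f"{amihud_illiquidity(returns, volumes):.1e}")  # 5.0e-09
print(zero_return_proportion(returns))                # 0.4
```

Higher values of either proxy indicate lower liquidity; commonality is then measured by regressing each stock's proxy on the market-wide aggregate, as in Chordia et al. (2000).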
Abstract:
This paper analyzes the similarity of euro area countries' responses to monetary policy and exchange rate shocks (identified through sign restrictions on the impulse response functions) and investigates the symmetry of fluctuations in the growth rate of economic activity in the region by examining the relative importance of each country's GDP growth response to the common and country-specific shocks identified by the FAVAR model used, which was estimated with a Bayesian method developed to incorporate Litterman (1986) priors. The importance of the common shock (relative to the specific one) across countries provides a measure of the degree of integration of the euro area members. The paper contributes to the analysis of the degree of integration of euro area countries by using a methodology that accommodates a large set of variables and by identifying the degree of symmetry of fluctuations in activity growth across the region's members through the identification of common and specific shocks. Quarterly data from 1999Q1 to 2013Q1 for the region's 17 countries were used. The results point to greater integration among the large euro area economies (with the exception of France) and weaker integration among the smaller economies (with the exception of Finland).
Abstract:
A Master's thesis, presented as part of the requirements for the award of a Research Master's Degree in Economics from NOVA – School of Business and Economics
Abstract:
This paper studies the persistent effects of monetary shocks on output. Previous empirical literature documents this persistence, but standard general equilibrium models with sticky prices fail to generate output responses beyond the duration of nominal contracts. This paper constructs and estimates a general equilibrium model with price rigidities, habit formation, and costly capital adjustment. The model is estimated via Maximum Likelihood using US data on output, the real money stock, and the nominal interest rate. Econometric results suggest that habit formation and adjustment costs to capital play an important role in explaining the output effects of monetary policy. In particular, impulse response analysis indicates that the model generates persistent, hump-shaped output responses to monetary shocks.
Abstract:
With advances in information technology, economic and financial time-series data have become increasingly available. However, when standard time-series techniques are applied, this wealth of information comes with a problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown increasingly popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor methodology be improved by incorporating another dimension-reduction technique, VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse response analysis? Finally, can factor analysis be applied at the level of random parameters; for example, are there only a small number of sources of the time instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modeling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the final chapter is to impose a factor structure on time-varying parameters and to show that a small number of sources account for this instability.

The first article analyzes the transmission of monetary policy in Canada using a factor-augmented vector autoregression (FAVAR) model. Earlier VAR-based studies found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model on a large set of monthly and quarterly macroeconomic series and find that the information contained in the factors is important for properly identifying monetary policy transmission and helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the dataset, producing the most comprehensive analysis to date of the effects of monetary policy in Canada.

Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury securities and causes a recession. These shocks have sizable effects on measures of real activity, price indices, leading indicators and financial variables. Unlike other studies, our identification procedure for the structural shock requires no timing restrictions between financial and macroeconomic factors. Moreover, it gives an interpretation of the factors without constraining their estimation.

In the third article we study the relationship between VARMA and factor representations of stochastic vector processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two parameter-reduction methods. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component improves forecasts of the major macroeconomic aggregates relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors delivers coherent and precise estimates of the effects and transmission of monetary policy in the United States. Whereas the FAVAR employed in the earlier study required estimating 510 VAR coefficients, we obtain similar results with only 84 parameters in the factors' dynamic process.

The aim of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using a structural FAVARMA model. Within the financial accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we proxy the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of Canadian business cycle fluctuations. Variance decomposition analysis reveals that this credit shock has sizable effects on various sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, proxied here by the US market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors.

The behavior of economic agents and of the economic environment may vary over time (e.g., changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variation in the coefficients is probably very small, and we provide the first known empirical evidence on this in empirical macroeconomic models. The Factor-TVP approach proposed in Stevanovic (2010) is applied within a standard VAR with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the shock volatility parameters vary independently. The same analysis carried out on data including the recent financial crisis suggests two factors, and the behavior of the coefficients shows a marked change after 2007; the common factor is positively correlated with the unemployment rate. Finally, the method is applied to a TVP-FAVAR model, where we find that only five dynamic factors govern the time instability of nearly 700 coefficients.
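The Factor-TVP idea, many time-varying coefficients driven by a handful of common sources, can be illustrated with a toy principal-components exercise. The data-generating process below is simulated and purely illustrative, and plain PCA stands in for the thesis's factor extraction:

```python
# Simulate K coefficient paths driven by ONE common random-walk factor plus
# small idiosyncratic noise, then check that one principal component dominates.
import numpy as np

rng = np.random.default_rng(0)
T, K = 200, 12                                  # periods x number of TVP coefficients
common_factor = np.cumsum(rng.normal(size=T))   # single source of time variation
loadings = rng.normal(size=K)
coeffs = np.outer(common_factor, loadings) + 0.1 * rng.normal(size=(T, K))

X = coeffs - coeffs.mean(axis=0)                # demeaned coefficient paths
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)                 # variance share per component

# With a single true source, the first component captures almost everything.
print(explained[0] > 0.9)    # True
```

The empirical finding is the reverse inference: because one or two components explain most of the movement in hundreds of estimated TVP coefficients, the instability itself appears to have only a few sources.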