939 results for Central Bank Loss Functions
Abstract:
The questions addressed in the first two articles of my thesis seek to understand the economic factors that affect the term structure of interest rates and the risk premium. I build nonlinear general equilibrium models that incorporate bonds of different maturities. Specifically, the first article aims to understand the relationship between macroeconomic factors and the level of the risk premium in a New Keynesian general equilibrium framework with uncertainty. Uncertainty in the model comes from three sources: productivity shocks, monetary shocks, and preference shocks. The model features two types of real rigidities, namely habit formation in preferences and capital adjustment costs. The model is solved by second-order perturbation and calibrated to the US economy. Since the risk premium is by nature a compensation for risk, the second-order approximation implies that the risk premium is a linear combination of the volatilities of the three shocks. The results show that, under the calibrated parameters, real shocks (productivity and preferences) play a larger role than monetary shocks in determining the level of the risk premium. I show that, contrary to previous work (in which productive capital is fixed), the effect of the habit-formation parameter on the risk premium depends on the degree of capital adjustment costs. When capital adjustment costs are so high that the capital stock is fixed in equilibrium, an increase in the habit-formation parameter raises the risk premium. By contrast, when agents can adjust the capital stock freely and without cost, the effect of the habit-formation parameter on the risk premium is negligible. This result reflects the fact that costless adjustment of the capital stock opens an additional consumption-smoothing channel for agents, which dampens the effect of habit formation on the risk premium. In addition, the results show that the way the central bank conducts monetary policy affects the risk premium: the more aggressively the central bank responds to inflation, the lower the risk premium, and vice versa. This is because fighting inflation lowers the variance of inflation, so the premium compensating for inflation risk falls. In the second article, I extend the first by using Epstein-Zin recursive preferences and by allowing the conditional volatilities of the shocks to vary over time. This framework is motivated by two considerations. First, recent studies (Doh, 2010; Rudebusch and Swanson, 2012) have shown that these preferences are well suited to asset-pricing analysis in general equilibrium models. Second, heteroskedasticity is a common feature of economic and financial data. Unlike in the first article, uncertainty therefore varies over time, making the framework of this article more general and more realistic than that of the first.
The main objective of this article is to examine the impact of conditional volatility shocks on the level and dynamics of interest rates and the risk premium. Since the risk premium is constant under a second-order approximation, the model is solved by perturbation at third order, which delivers a time-varying risk premium. The advantage of introducing conditional volatility shocks is that they induce additional state variables that contribute further to the dynamics of the risk premium. I show that the third-order approximation implies that risk premia have an ARCH-M (Autoregressive Conditional Heteroskedasticity in Mean) representation of the type introduced by Engle, Lilien and Robins (1987). The difference is that in this model the parameters are structural and the volatilities are the conditional volatilities of the economic shocks rather than of the variables themselves. I estimate the model parameters by the simulated method of moments (SMM) using US data. The estimation results show evidence of stochastic volatility in all three shocks. Moreover, the contribution of the shocks' conditional volatilities to the level and dynamics of the risk premium is significant. In particular, the effects of the conditional volatilities of the productivity and preference shocks are significant. The conditional volatility of the productivity shock contributes positively to the means and standard deviations of the risk premia, and these contributions vary with bond maturity. The conditional volatility of the preference shock contributes negatively to the means and positively to the variances of the risk premia. The impact of the monetary-policy volatility shock on risk premia is negligible. The third article (co-authored with Eric Schaling and Alain Kabundi, revised and resubmitted to the journal Economic Modelling) deals with heterogeneity in the inflation expectations of various economic groups and its impact on monetary policy in South Africa. The main question is whether different groups of economic agents form their inflation expectations in the same way and whether they perceive the monetary policy of the central bank (the South African Reserve Bank) in the same way. We specify an inflation-forecasting model that allows us to test whether inflation expectations are anchored to the central bank's inflation target band (3%-6%). The data come from a survey conducted by the central bank among three groups of agents: financial analysts, firms, and trade unions. We exploit the panel structure of the data to test for heterogeneity in inflation expectations and to infer each group's perception of monetary policy. The results show evidence of heterogeneity in the way the different groups form their expectations. Financial analysts' expectations are anchored to the inflation target band, whereas those of firms and trade unions are not: firms and trade unions place a significant weight on one-period-lagged inflation, and their forecasts move with realized (lagged) inflation, which indicates that the central bank is not perceived as fully credible by these agents.
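For illustration only (the abstract does not spell out the exact functional form, and the notation here is a placeholder rather than the thesis's own), an ARCH-M-type representation of the term premium implied by a third-order perturbation might be sketched as $rp_t^{(n)} = a_0^{(n)} + a_a^{(n)} \sigma_{a,t}^2 + a_m^{(n)} \sigma_{m,t}^2 + a_d^{(n)} \sigma_{d,t}^2$, where $rp_t^{(n)}$ is the premium on an $n$-period bond, $\sigma_{a,t}^2$, $\sigma_{m,t}^2$ and $\sigma_{d,t}^2$ are the conditional variances of the productivity, monetary and preference shocks, and the maturity-specific coefficients $a^{(n)}$ are functions of the structural parameters, so that time variation in the premium is inherited from the time-varying volatilities.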
Abstract:
European Union Series
Abstract:
The traditional task of a central bank is to preserve price stability and, in doing so, not to impair the real economy more than necessary. To meet this challenge, it matters greatly whether inflation is driven only by inflation expectations and the current output gap or whether it is, in addition, influenced by past inflation. In the former case, described by the New Keynesian Phillips curve, the central bank can immediately and simultaneously achieve price stability and equilibrium output, the so-called ‘divine coincidence’ (Blanchard and Galí 2007). In the latter case, achieving price stability is costly in terms of output and must be pursued over several periods. It is also important to distinguish this latter case, which describes ‘intrinsic’ inflation persistence, from ‘extrinsic’ inflation persistence, where the sluggishness of inflation is not a ‘structural’ feature of the economy but merely ‘inherited’ from the sluggishness of the other driving forces, inflation expectations and output. ‘Extrinsic’ inflation persistence is usually considered the less challenging case, as policy-makers are expected to fight the persistence in the driving forces, especially to reduce the stickiness of inflation expectations through a credible monetary policy, in order to reestablish the ‘divine coincidence’. The scope of this dissertation is to contribute to the vast literature and ongoing discussion on inflation persistence: Chapter 1 describes the policy consequences of inflation persistence and summarizes the empirical and theoretical literature. Chapter 2 compares two models of staggered price setting, one with a fixed two-period duration and the other with a stochastic duration of prices. I show that in an economy with a timeless optimizing central bank, the model with two-period alternating price setting leads (for most parameter values) to more persistent inflation than the model with stochastic price duration. This result amends earlier work by Kiley (2002), who found that the model with stochastic price duration generates more persistent inflation in response to an exogenous monetary shock. Chapter 3 extends the two-period alternating price-setting model to the case of 3- and 4-period price durations. This results in a more complex Phillips curve with a negative impact of past inflation on current inflation. As simulations show, this multi-period Phillips curve generates too little autocorrelation and turning points of inflation that occur too early, and it is outperformed by a simple Hybrid Phillips curve. Chapter 4 starts from the critique by Driscoll and Holden (2003) of the relative real-wage model of Fuhrer and Moore (1995). Taking seriously the critique that Fuhrer and Moore’s model collapses to a much simpler one without intrinsic inflation persistence if their arguments are taken literally, I extend the model with a term for inequality aversion. This extension is not only in line with experimental evidence but also results in a Hybrid Phillips curve with inflation persistence that is observationally equivalent to the one presented by Fuhrer and Moore (1995). In chapter 5, I present a model designed to study the relationship between fairness attitudes and time preference (impatience). In the model, two individuals take decisions in two subsequent periods. In period 1, both individuals are endowed with resources and can donate a share of their resources to the other individual.
In period 2, the two individuals may engage in joint production after bargaining over the split of its output. The size of the production output depends on the relative share of resources at the end of period 1, since the human capital of the individuals, which is built from their resources, cannot be fully substituted for one another. Therefore, it may be rational for a well-endowed individual in period 1 to act in a seemingly ‘fair’ manner and to donate its own resources to its poorer counterpart. This decision also depends on the individuals’ impatience, which is induced by the small but positive probability that production is not possible in period 2. As a general result, the individuals in the model economy are more likely to behave in a ‘fair’ manner, i.e., to donate resources to the other individual, the lower their own impatience and the higher the productivity of the other individual. As the (seemingly) ‘fair’ behavior is modelled as an endogenous outcome and is related to the aspect of time preference, the presented framework may help to further integrate behavioral economics and macroeconomics.
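For reference, the two Phillips-curve specifications contrasted at the start of this abstract are conventionally written (generic textbook notation, not necessarily the dissertation's own) as $\pi_t = \beta E_t \pi_{t+1} + \kappa x_t$ (purely forward-looking New Keynesian) and $\pi_t = \gamma_f E_t \pi_{t+1} + \gamma_b \pi_{t-1} + \kappa x_t$ (hybrid), where $\pi_t$ is inflation and $x_t$ the output gap. With $\gamma_b = 0$ inflation has no intrinsic persistence and the ‘divine coincidence’ applies; with $\gamma_b > 0$ past inflation matters directly and disinflation is costly in terms of output.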
Abstract:
We present distribution-independent bounds on the generalization misclassification performance of a family of kernel classifiers with margin. Support Vector Machine (SVM) classifiers stem from this class of machines. The bounds are derived through computations of the $V_\gamma$ dimension of a family of loss functions to which the SVM loss belongs. Bounds that use functions of margin distributions (i.e., functions of the slack variables of the SVM) are derived.
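As a reminder of standard notation (not taken from the paper itself), for a classifier $f$ the slack variables of a soft-margin SVM are $\xi_i = \max(0, 1 - y_i f(x_i))$, and the training problem minimizes $\frac{1}{2}\|w\|^2 + C \sum_i \xi_i$; bounds expressed through functions of the margin distribution are bounds expressed through functions of these $\xi_i$.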
Abstract:
Support Vector Machine Regression (SVMR) is a regression technique recently introduced by V. Vapnik and his collaborators (Vapnik, 1995; Vapnik, Golowich and Smola, 1996). In SVMR the goodness of fit is measured not by the usual quadratic loss function (the mean square error) but by a different loss function, Vapnik's $\epsilon$-insensitive loss function, which is similar to the "robust" loss functions introduced by Huber (Huber, 1981). The quadratic loss function is well justified under the assumption of Gaussian additive noise. However, the noise model underlying the choice of Vapnik's loss function is less clear. In this paper, the use of Vapnik's loss function is shown to be equivalent to a model of additive Gaussian noise in which the variance and mean of the Gaussian are random variables. The probability distributions of the variance and mean are stated explicitly. While this work is presented in the framework of SVMR, it can be extended to justify non-quadratic loss functions in any Maximum Likelihood or Maximum A Posteriori approach. It applies not only to Vapnik's loss function but to a much broader class of loss functions.
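For reference (a standard definition rather than the paper's own statement), the two losses being compared are $L_2(y, f(x)) = (y - f(x))^2$ and $L_\epsilon(y, f(x)) = \max(0, |y - f(x)| - \epsilon)$, so deviations smaller than $\epsilon$ incur no penalty and larger deviations are penalized linearly rather than quadratically.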
Abstract:
This paper presents a computation of the $V_\gamma$ dimension for regression in bounded subspaces of Reproducing Kernel Hilbert Spaces (RKHS) for the Support Vector Machine (SVM) regression $\epsilon$-insensitive loss function and for general $L_p$ loss functions. Finiteness of the $V_\gamma$ dimension is shown, which also proves uniform convergence in probability for regression machines in RKHS subspaces that use the $L_\epsilon$ or general $L_p$ loss functions. The paper also presents a novel proof of this result for the case in which a bias is added to the functions in the RKHS.
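For completeness (standard definitions, not quoted from the paper), the general $L_p$ loss referred to above is $L_p(y, f(x)) = |y - f(x)|^p$, while the $\epsilon$-insensitive loss is $L_\epsilon(y, f(x)) = \max(0, |y - f(x)| - \epsilon)$.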
Abstract:
This paper builds a Dynamic Stochastic General Equilibrium (DSGE) model with an informal sector and price rigidities, using the search-and-matching theory of the labor market as the analytical framework. The main objective is to analyze the effect of different types of economic shocks on the main labor-market variables in an economy with a sizeable informal sector. The paper also studies the effect of monetary policy, since the presence of this sector affects business-cycle dynamics and hence the transmission mechanisms of monetary policy. In particular, the dynamics of the model are analyzed under different monetary policy rules, and the welfare of the representative agent generated by each rule is compared.
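As an illustration only (the abstract does not specify which rules are compared, and the notation here is hypothetical), monetary policy rules in this kind of exercise are typically Taylor-type interest-rate rules such as $i_t = \rho i_{t-1} + (1 - \rho)(\phi_\pi \pi_t + \phi_y y_t) + \varepsilon_t^m$, where $i_t$ is the policy rate, $\pi_t$ inflation, $y_t$ the output gap, $\rho$ the degree of interest-rate smoothing and $\varepsilon_t^m$ a monetary shock; welfare of the representative agent can then be compared across alternative settings of $(\phi_\pi, \phi_y, \rho)$.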
Abstract:
This paper proposes two types of contracts for interbank loans so that banks can smooth their liquidity shocks through the interbank market. In particular, it studies the situation in which banks facing liquidity shortfalls but carrying low credit risk leave the market because the interest rate is high relative to their alternative source of funding. Asymmetric information about credit risk prevents banks with liquidity surpluses from adjusting the interest rate to the risk of their counterparty. Given this, two interbank loan contracts are designed that differ in the interest rates charged: whenever a bank posts a deposit it can obtain liquidity at low interest rates; otherwise the rate is higher.
Abstract:
This study proposes a new method for testing for the presence of momentum in nominal exchange rates, using a probabilistic approach. We illustrate our methodology by estimating a binary response model using information on local currency/US dollar exchange rates of eight emerging economies. After controlling for important variables affecting the behavior of exchange rates in the short run, we show evidence of exchange rate inertia; in other words, we find that exchange rate momentum is a common feature in this group of emerging economies, and thus foreign exchange traders participating in these markets are able to make excess returns by following technical analysis strategies. We find that the presence of momentum is asymmetric, being stronger in periods of currency depreciation than of appreciation. This behavior may be associated with central bank intervention.
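As a sketch of what such a binary response specification might look like (illustrative notation, not the authors' own), momentum can be tested by asking whether recent depreciations raise the probability of a further depreciation: $\Pr(\Delta e_t > 0 \mid \mathcal{I}_{t-1}) = \Phi(\alpha + \sum_{j=1}^{k} \beta_j \Delta e_{t-j} + \gamma' x_{t-1})$, where $\Delta e_t$ is the change in the local currency/US dollar rate, $x_{t-1}$ collects the short-run controls, and $\Phi$ is the standard normal CDF; jointly positive and significant $\beta_j$ indicate momentum, and asymmetry can be captured by letting the coefficients differ between depreciation and appreciation episodes.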
Abstract:
This paper discusses the creation of a European Banking Union. First, we discuss questions of design. We highlight seven fundamental choices that decision makers will need to make: Which EU countries should participate in the banking union? To which categories of banks should it apply? Which institution should be tasked with supervision? Which one should deal with resolution? How centralised should the deposit insurance system be? What kind of fiscal backing would be required? What governance framework and political institutions would be needed? In terms of geographical scope, we see coverage of the euro area by the banking union as necessary, and coverage of additional countries as desirable, even though this would entail important additional economic difficulties. The system should ideally cover all banks within the countries included, in order to prevent major competitive and distributional distortions. Supervisory authority should be granted either to both the ECB and a new agency, or to a new agency alone. National supervisors, acting under the authority of the European supervisor, would be tasked with the supervision of smaller banks in accordance with the subsidiarity principle. A European resolution authority should be established, with the possibility of drawing on ESM resources. A fully centralised deposit insurance system would eventually be desirable, but a system of partial reinsurance may also be envisaged, at least in a first phase. A banking union would require at least implicit European fiscal backing, with significant political authority and legitimacy. Thus, banking union cannot be considered entirely separately from fiscal union and political union. The most difficult challenge of creating a European banking union lies with the short-term steps towards its eventual implementation. Many banks in the euro area, and especially in the crisis countries, are currently under stress, and the move towards banking union almost certainly has significant distributional implications. Yet it is precisely because banks are under such stress that early and concrete action is needed. An overarching principle for such action is to minimize the cost to taxpayers. The first step should be to create a European supervisor that will anchor the development of the future banking union. In parallel, a capability to quickly assess the true capital position of the system’s most important banks should be created, for which we suggest establishing a temporary European Banking Sector Task Force working together with the European supervisor and other authorities. Ideally, problems identified by this process should be resolved by national authorities; should national fiscal capacity prove insufficient, the European level would take over in the country concerned with some national financial participation, or, in an even less likely adverse scenario, in all participating countries at once. This approach would require the passing of emergency legislation in the concerned countries to give the Task Force the required access to information and, if necessary, further intervention rights. Thus, the principle of fiscal responsibility of the respective member states for legacy costs would be preserved to the maximum extent possible, while market participants and the public would be reassured that adequate tools are in place to address any eventuality.
Abstract:
The euro area summit has managed to surprise the markets once again. By moving eurozone banking supervision to the European Central Bank, leaders have taken a huge step towards a more federal banking model, explains CEPS CEO Karel Lannoo in this new Commentary. But will this move be enough to re-establish confidence, bolster the euro interbank market and further financial integration?
Abstract:
In this Commentary, Daniel Gros applauds the decision taken by Europe’s leaders at the eurozone summit at the end of June to transfer responsibility for banking supervision in the eurozone to the European Central Bank. It represents explicit recognition of the important fact that problems might originate at the national level, but, owing to monetary union, they can quickly threaten the stability of the entire eurozone banking system. In his view, the next small, incremental step, although one not yet officially acknowledged, will necessarily be the creation of a common bank rescue fund.
Abstract:
Arguing that the planned move to put the ECB in charge of banking supervision would be incomplete without a European Deposit Insurance and Resolution Authority (EDIRA), Daniel Gros and Dirk Schoenmaker spell out in a new CEPS Commentary some underlying principles to guide a gradual transition under which only future risks would be shared while past losses would remain at the national level. They show that ultimately such a new institution would serve as a genuine source of confidence in the European banking system.
Abstract:
The proposal to move to a full banking union in the eurozone means a radical regime shift for the EU, since the European Central Bank will supervise the eurozone banks and effectively end ‘home country rule’. But how this is implemented raises a number of questions and needs close monitoring, explains CEPS CEO Karel Lannoo in this new Commentary.
Abstract:
The European Commission has published its proposals for the transfer of supervisory responsibilities to the European Central Bank (ECB), under Article 127(6) of the TFEU, providing a comprehensive and courageous ‘first step’ towards a European banking union, the other steps being European deposit insurance and resolution procedures. However, on a number of issues the Commission’s chosen path raises questions that should be brought out into the open and fully recognized before final deliberation by the Council.