11 results for search: variables
at Scottish Institute for Research in Economics (SIRE), United Kingdom
Abstract:
This paper develops methods for Stochastic Search Variable Selection (currently popular with regression and Vector Autoregressive models) for Vector Error Correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and either do model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model.
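The core of Stochastic Search Variable Selection is a spike-and-slab mixture prior on each coefficient, sampled by Gibbs. A minimal, self-contained sketch for a plain regression (not the paper's VECM setting); the data, hyperparameter values, and known error variance are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: only the first 2 of 5 predictors matter (illustrative).
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# SSVS hyperparameters (illustrative): the "spike" tau0 is tight around
# zero, the "slab" tau1 is diffuse; q is the prior inclusion probability.
tau0, tau1, q, sigma2 = 0.01, 10.0, 0.5, 0.25

n_iter, burn = 2000, 500
gamma = np.ones(p, dtype=int)          # inclusion indicators
draws = np.zeros((n_iter, p), dtype=int)
XtX, Xty = X.T @ X, X.T @ y

for it in range(n_iter):
    # Draw beta | gamma from its conditional normal posterior.
    D_inv = np.diag(1.0 / np.where(gamma == 1, tau1**2, tau0**2))
    V = np.linalg.inv(XtX / sigma2 + D_inv)
    m = V @ (Xty / sigma2)
    beta = rng.multivariate_normal(m, V)
    # Draw each gamma_j | beta_j (normalizing constants cancel in the ratio).
    for j in range(p):
        slab = q * np.exp(-0.5 * beta[j] ** 2 / tau1**2) / tau1
        spike = (1 - q) * np.exp(-0.5 * beta[j] ** 2 / tau0**2) / tau0
        gamma[j] = rng.random() < slab / (slab + spike)
    draws[it] = gamma

incl_prob = draws[burn:].mean(axis=0)
print(np.round(incl_prob, 2))  # posterior inclusion probabilities
```

The posterior inclusion probabilities should be high for the first two predictors and low for the irrelevant ones; averaging over the sampled models gives automatic model averaging, while thresholding the probabilities gives model selection.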
Abstract:
This paper assesses the impact of official central bank interventions (CBIs) on exchange rate returns, their volatility and bilateral correlations. By exploiting the recent publication of intervention data by the Bank of England, this study is able to investigate official interventions by a total of four central banks, whereas previous studies have been limited to three (the Federal Reserve, Bundesbank and Bank of Japan). The results of the existing literature are reappraised and refined. In particular, unilateral CBI is found to be more successful than coordinated CBI. The likely implications of these findings are then discussed.
Abstract:
Macroeconomists working with multivariate models typically face uncertainty over which (if any) of their variables have long run steady states which are subject to breaks. Furthermore, the nature of the break process is often unknown. In this paper, we draw on methods from the Bayesian clustering literature to develop an econometric methodology which: i) finds groups of variables which have the same number of breaks; and ii) determines the nature of the break process within each group. We present an application involving a five-variate steady-state VAR.
Abstract:
This paper is motivated by the recent interest in the use of Bayesian VARs for forecasting, even in cases where the number of dependent variables is large. In such cases, factor methods have been traditionally used but recent work using a particular prior suggests that Bayesian VAR methods can forecast better. In this paper, we consider a range of alternative priors which have been used with small VARs, discuss the issues which arise when they are used with medium and large VARs and examine their forecast performance using a US macroeconomic data set containing 168 variables. We find that Bayesian VARs do tend to forecast better than factor methods and provide an extensive comparison of the strengths and weaknesses of various approaches. Our empirical results show the importance of using forecast metrics which use the entire predictive density, instead of using only point forecasts.
Abstract:
This paper discusses the challenges faced by the empirical macroeconomist and methods for surmounting them. These challenges arise due to the fact that macroeconometric models potentially include a large number of variables and allow for time variation in parameters. These considerations lead to models which have a large number of parameters to estimate relative to the number of observations. A wide range of approaches are surveyed which aim to overcome the resulting problems. We stress the related themes of prior shrinkage, model averaging and model selection. Subsequently, we consider a particular modelling approach in detail. This involves the use of dynamic model selection methods with large TVP-VARs. A forecasting exercise involving a large US macroeconomic data set illustrates the practicality and empirical success of our approach.
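Dynamic model selection of the kind used with large TVP-VARs updates model probabilities recursively with a forgetting factor, so the preferred model can change over time. A toy univariate sketch with two fixed candidate forecasting models (the series, the break, and all parameter values are illustrative assumptions, not the paper's TVP-VAR system):

```python
import numpy as np

rng = np.random.default_rng(2)

# Series whose dynamics change halfway: AR(1) first, then white noise.
n = 400
y = np.zeros(n)
for t in range(1, n):
    phi = 0.8 if t < n // 2 else 0.0
    y[t] = phi * y[t - 1] + rng.normal(scale=1.0)

def normpdf(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Two candidate models (illustrative): M0 predicts zero, M1 predicts
# 0.8 * y[t-1]. `alpha` is the forgetting factor that discounts old
# evidence so the model weights can adapt after the break.
alpha = 0.95
w = np.array([0.5, 0.5])           # model probabilities
w_path = np.zeros((n, 2))

for t in range(1, n):
    w = w**alpha / np.sum(w**alpha)          # forgetting step
    lik = np.array([normpdf(y[t], 0.0, 1.0),
                    normpdf(y[t], 0.8 * y[t - 1], 1.0)])
    w = w * lik / np.sum(w * lik)            # Bayesian update
    w_path[t] = w

print(np.round(w_path[n // 2 - 1], 2),      # weights just before the break
      np.round(w_path[-1], 2))              # weights at the end
```

The forgetting step is what makes this computationally cheap relative to full MCMC over break processes: no model-transition matrix is simulated, yet the weights migrate from the AR model to the white-noise model after the break.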
Abstract:
This paper shows how one of the developers of QWERTY continued to use the trade secret that underlay its development to seek further efficiency improvements after its introduction. It provides further evidence that this was the principle used to design QWERTY in the first place and adds further weight to arguments that QWERTY itself was a consequence of creative design and an integral part of a highly efficient system rather than an accident of history. This further serves to raise questions over QWERTY's forced servitude as the 'paradigm case' of an inferior standard in the path dependence literature. The paper also shows how complementarities in forms of intellectual property rights protection played integral roles in the development of QWERTY and the search for improvements on it, and also helped effectively conceal the source of the efficiency advantages that QWERTY helped deliver.
Abstract:
We consider a frictional two-sided matching market in which one side uses public cheap talk announcements so as to attract the other side. We show that if the first-price auction is adopted as the trading protocol, then cheap talk can be perfectly informative, and the resulting market outcome is efficient, constrained only by search frictions. We also show that the performance of an alternative trading protocol in the cheap-talk environment depends on the level of price dispersion generated by the protocol: If a trading protocol compresses (spreads) the distribution of prices relative to the first-price auction, then an efficient fully revealing equilibrium always (never) exists. Our results identify the settings in which cheap talk can serve as an efficient competitive instrument, in the sense that the central insights from the literature on competing auctions and competitive search continue to hold unaltered even without ex ante price commitment.
Abstract:
Vector Autoregressive Moving Average (VARMA) models have many theoretical properties which should make them popular among empirical macroeconomists. However, they are rarely used in practice due to over-parameterization concerns, difficulties in ensuring identification and computational challenges. With the growing interest in multivariate time series models of high dimension, these problems with VARMAs become even more acute, accounting for the dominance of VARs in this field. In this paper, we develop a Bayesian approach for inference in VARMAs which surmounts these problems. It jointly ensures identification and parsimony in the context of an efficient Markov chain Monte Carlo (MCMC) algorithm. We use this approach in a macroeconomic application involving up to twelve dependent variables. We find our algorithm to work successfully and provide insights beyond those provided by VARs.
Abstract:
In a market in which sellers compete by posting mechanisms, we study how the properties of the meeting technology affect the mechanism that sellers select. In general, sellers have an incentive to use mechanisms that are socially efficient. In our environment, sellers achieve this by posting an auction with a reserve price equal to their own valuation, along with a transfer that is paid by (or to) all buyers with whom the seller meets. However, we define a novel condition on meeting technologies, which we call "invariance," and show that the transfer is equal to zero if and only if the meeting technology satisfies this condition.
Abstract:
We develop a life-cycle model of the labor market in which different worker-firm matches have different quality and the assignment of the right workers to the right firms is time consuming because of search and learning frictions. The rate at which workers move between unemployment, employment and across different firms is endogenous because search is directed and, hence, workers can choose whether to seek low-wage jobs that are easy to find or high-wage jobs that are hard to find. We calibrate our theory using data on labor market transitions aggregated across workers of different ages. We validate our theory by showing that it predicts quite well the pattern of labor market transitions for workers of different ages. Finally, we use our theory to decompose the age profiles of transition rates, wages and productivity into the effects of age variation in work-life expectancy, human capital and match quality.
Abstract:
This paper evaluates the effects of policy interventions on sectoral labour markets and the aggregate economy in a business cycle model with search and matching frictions. We extend the canonical model by including capital-skill complementarity in production, labour markets with skilled and unskilled workers and on-the-job learning (OJL) within and across skill types. We first find that the model does a good job of matching the cyclical properties of sectoral employment and the wage-skill premium. We next find that vacancy subsidies for skilled and unskilled jobs lead to output multipliers which are greater than unity with OJL and less than unity without OJL. In contrast, the positive output effects from cutting skilled and unskilled income taxes are close to zero. Finally, we find that the sectoral and aggregate effects of vacancy subsidies do not depend on whether they are financed via public debt or distortionary taxes.