11 results for Adaptive large neighborhood search
in Scottish Institute for Research in Economics (SIRE), United Kingdom
Abstract:
This paper develops methods for Stochastic Search Variable Selection (currently popular with regression and Vector Autoregressive models) for Vector Error Correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and either do model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model.
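Stochastic Search Variable Selection, as referenced in this abstract, rests on a spike-and-slab mixture prior on each coefficient combined with Gibbs sampling over inclusion indicators. The sketch below is only an illustration of that generic machinery for a plain linear regression, with hypothetical function and parameter names (`ssvs_gibbs`, `tau0`, `tau1`, `p_incl`); it does not reproduce the paper's Vector Error Correction setting or its restrictions on the cointegration space.

```python
# Minimal Gibbs sampler for Stochastic Search Variable Selection (SSVS) in a
# plain linear regression y = X b + e, e ~ N(0, s2 I).  Illustrative sketch of
# the generic spike-and-slab prior only, not the paper's VECM methodology.
import numpy as np

def ssvs_gibbs(y, X, n_iter=2000, tau0=0.01, tau1=10.0, p_incl=0.5):
    n, k = X.shape
    beta = np.zeros(k)
    gamma = np.ones(k, dtype=int)          # inclusion indicators
    s2 = 1.0
    draws = np.zeros((n_iter, k))
    XtX, Xty = X.T @ X, X.T @ y
    for it in range(n_iter):
        # 1. beta | gamma, s2: conjugate normal update under the mixture prior
        D_inv = np.diag(1.0 / np.where(gamma == 1, tau1**2, tau0**2))
        V = np.linalg.inv(XtX / s2 + D_inv)
        m = (V @ Xty) / s2
        beta = np.random.multivariate_normal(m, V)
        # 2. gamma_j | beta_j: Bernoulli draw comparing the two normal densities
        for j in range(k):
            d1 = p_incl * np.exp(-0.5 * beta[j]**2 / tau1**2) / tau1
            d0 = (1 - p_incl) * np.exp(-0.5 * beta[j]**2 / tau0**2) / tau0
            gamma[j] = np.random.rand() < d1 / (d0 + d1)
        # 3. s2 | beta: inverse-gamma update with a loose conjugate prior
        resid = y - X @ beta
        s2 = 1.0 / np.random.gamma(0.5 * n + 1.0,
                                   1.0 / (0.5 * resid @ resid + 1.0))
        draws[it] = gamma
    return draws.mean(axis=0)              # posterior inclusion probabilities
```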
Abstract:
This paper is motivated by the recent interest in the use of Bayesian VARs for forecasting, even in cases where the number of dependent variables is large. In such cases, factor methods have been traditionally used but recent work using a particular prior suggests that Bayesian VAR methods can forecast better. In this paper, we consider a range of alternative priors which have been used with small VARs, discuss the issues which arise when they are used with medium and large VARs and examine their forecast performance using a US macroeconomic data set containing 168 variables. We find that Bayesian VARs do tend to forecast better than factor methods and provide an extensive comparison of the strengths and weaknesses of various approaches. Our empirical results show the importance of using forecast metrics which use the entire predictive density, instead of using only point forecasts.
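The contrast this abstract draws between density-based and point-based forecast evaluation can be made concrete with two standard metrics: the root mean squared forecast error (point forecasts only) and the average log predictive score (the whole predictive density). The snippet below is a minimal sketch that assumes a Gaussian predictive density purely for illustration; the Bayesian VARs in the paper would instead supply a full simulated predictive distribution.

```python
# Root mean squared forecast error versus average log predictive score for the
# same one-step-ahead forecasts.  The Gaussian predictive density is an
# assumption made here for illustration only.
import numpy as np

def rmsfe(y_actual, y_point):
    return np.sqrt(np.mean((y_actual - y_point) ** 2))

def avg_log_predictive_score(y_actual, pred_mean, pred_sd):
    # log of the (Gaussian) predictive density evaluated at the realised outcome
    z = (y_actual - pred_mean) / pred_sd
    return np.mean(-0.5 * np.log(2 * np.pi) - np.log(pred_sd) - 0.5 * z**2)

# Toy example: two forecasters with identical point forecasts but different
# predictive uncertainty are tied on RMSFE yet separated by the log score.
y = np.array([0.1, -0.3, 0.5])
mean = np.zeros(3)
print(rmsfe(y, mean),
      avg_log_predictive_score(y, mean, 0.4),
      avg_log_predictive_score(y, mean, 2.0))
```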
Abstract:
Vector Autoregressive Moving Average (VARMA) models have many theoretical properties which should make them popular among empirical macroeconomists. However, they are rarely used in practice due to over-parameterization concerns, difficulties in ensuring identification and computational challenges. With the growing interest in multivariate time series models of high dimension, these problems with VARMAs become even more acute, accounting for the dominance of VARs in this field. In this paper, we develop a Bayesian approach for inference in VARMAs which surmounts these problems. It jointly ensures identification and parsimony in the context of an efficient Markov chain Monte Carlo (MCMC) algorithm. We use this approach in a macroeconomic application involving up to twelve dependent variables. We find our algorithm to work successfully and provide insights beyond those provided by VARs.
Abstract:
We model a market for highly skilled workers, such as the academic job market. The outputs of firm-worker matches are heterogeneous and common knowledge. Wage setting is synchronous with search: firms simultaneously make one personalized offer each to the worker of their choice. With large frictions (delay costs), efficient coordination is not possible, but for small frictions efficient matching with Diamond-type monopsony wages is an equilibrium.
Abstract:
In this paper we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints with likelihood-based estimation of large systems, we rely on Kalman filter estimation with forgetting factors. We also draw on ideas from the dynamic model averaging literature and extend the TVP-VAR so that its dimension can change over time. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output, and interest rates demonstrates the feasibility and usefulness of our approach.
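The forgetting-factor device mentioned in this abstract is the standard trick of inflating the predicted state covariance by 1/λ in the Kalman recursions, which sidesteps specifying and simulating a state-innovation covariance. The sketch below applies it to a single-equation time-varying parameter regression as an illustration, with hypothetical names (`ff_kalman_filter`, `lam`, `obs_var`); the paper's TVP-VARs, dynamic model averaging and time-varying shrinkage extensions are considerably richer.

```python
# Kalman filter with a forgetting factor for a time-varying parameter
# regression y_t = x_t' b_t + e_t, with b_t following a random walk.  The
# forgetting factor lam replaces an explicit state-innovation covariance:
# the predicted state covariance is simply inflated by 1/lam each period.
import numpy as np

def ff_kalman_filter(y, X, lam=0.99, obs_var=1.0):
    T, k = X.shape
    b = np.zeros(k)                 # filtered state E[b_t | data up to t]
    P = np.eye(k) * 10.0            # filtered state covariance
    filtered = np.zeros((T, k))
    for t in range(T):
        x = X[t]
        # Prediction step: random-walk state, covariance inflated by 1/lam
        P_pred = P / lam
        # Update step: standard Kalman gain for a scalar observation
        f = x @ P_pred @ x + obs_var          # one-step forecast variance
        gain = P_pred @ x / f
        b = b + gain * (y[t] - x @ b)
        P = P_pred - np.outer(gain, x) @ P_pred
        filtered[t] = b
    return filtered
```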
Abstract:
I develop a model of endogenous bounded rationality due to search costs, arising implicitly from the problem's complexity. The decision maker is not required to know the entire structure of the problem when making choices but can think ahead, through costly search, to reveal more of it. However, the costs of search are not assumed exogenously; they are inferred from revealed preferences through her choices. Thus, bounded rationality and its extent emerge endogenously: as problems become simpler or as the benefits of deeper search become larger relative to its costs, the choices more closely resemble those of a rational agent. For a fixed decision problem, the costs of search will vary across agents. For a given decision maker, they will vary across problems. The model therefore explains why the disparity between observed choices and those prescribed under rationality varies across agents and problems. It also suggests, under reasonable assumptions, an identifying prediction: a relation between the benefits of deeper search and the depth of the search. As long as calibration of the search costs is possible, this can be tested on any agent-problem pair. My approach provides a common framework for depicting the underlying limitations that force departures from rationality in different and unrelated decision-making situations. Specifically, I show that it is consistent with violations of timing independence in temporal framing problems, dynamic inconsistency and diversification bias in sequential versus simultaneous choice problems, and with plausible but contrasting risk attitudes across small- and large-stakes gambles.
Abstract:
This paper shows how one of the developers of QWERTY continued to use the trade secret that underlay its development to seek further efficiency improvements after its introduction. It provides further evidence that this was the principle used to design QWERTY in the first place and adds weight to arguments that QWERTY itself was a consequence of creative design and an integral part of a highly efficient system rather than an accident of history. This further serves to raise questions over QWERTY's forced servitude as the 'paradigm case' of an inferior standard in the path dependence literature. The paper also shows how complementarities in forms of intellectual property rights protection played integral roles in the development of QWERTY and the search for improvements on it, and also helped effectively conceal the source of the efficiency advantages that QWERTY helped deliver.
Abstract:
We consider a frictional two-sided matching market in which one side uses public cheap talk announcements so as to attract the other side. We show that if the first-price auction is adopted as the trading protocol, then cheap talk can be perfectly informative, and the resulting market outcome is efficient, constrained only by search frictions. We also show that the performance of an alternative trading protocol in the cheap-talk environment depends on the level of price dispersion generated by the protocol: If a trading protocol compresses (spreads) the distribution of prices relative to the first-price auction, then an efficient fully revealing equilibrium always (never) exists. Our results identify the settings in which cheap talk can serve as an efficient competitive instrument, in the sense that the central insights from the literature on competing auctions and competitive search continue to hold unaltered even without ex ante price commitment.
Abstract:
In a market in which sellers compete by posting mechanisms, we study how the properties of the meeting technology affect the mechanism that sellers select. In general, sellers have an incentive to use mechanisms that are socially efficient. In our environment, sellers achieve this by posting an auction with a reserve price equal to their own valuation, along with a transfer that is paid by (or to) all buyers with whom the seller meets. However, we define a novel condition on meeting technologies, which we call "invariance," and show that the transfer is equal to zero if and only if the meeting technology satisfies this condition.
Abstract:
We develop a life-cycle model of the labor market in which different worker-firm matches have different quality and the assignment of the right workers to the right firms is time consuming because of search and learning frictions. The rate at which workers move between unemployment, employment and across different firms is endogenous because search is directed and, hence, workers can choose whether to seek low-wage jobs that are easy to find or high-wage jobs that are hard to find. We calibrate our theory using data on labor market transitions aggregated across workers of different ages. We validate our theory by showing that it predicts quite well the pattern of labor market transitions for workers of different ages. Finally, we use our theory to decompose the age profiles of transition rates, wages and productivity into the effects of age variation in work-life expectancy, human capital and match quality.
Abstract:
This paper evaluates the effects of policy interventions on sectoral labour markets and the aggregate economy in a business cycle model with search and matching frictions. We extend the canonical model by including capital-skill complementarity in production, labour markets with skilled and unskilled workers, and on-the-job learning (OJL) within and across skill types. We first find that the model does a good job of matching the cyclical properties of sectoral employment and the wage-skill premium. We next find that vacancy subsidies for skilled and unskilled jobs lead to output multipliers which are greater than unity with OJL and less than unity without OJL. In contrast, the positive output effects from cutting skilled and unskilled income taxes are close to zero. Finally, we find that the sectoral and aggregate effects of vacancy subsidies do not depend on whether they are financed via public debt or distorting taxes.