14 results for Random parameter Logit Model
at the Scottish Institute for Research in Economics (SIRE), United Kingdom
Abstract:
We model a boundedly rational agent who suffers from limited attention. The agent considers each feasible alternative with a given (unobservable) probability, the attention parameter, and then chooses the alternative that maximises a preference relation within the set of considered alternatives. We show that this random choice rule is the only one for which the impact of removing an alternative on the choice probability of any other alternative is asymmetric and menu independent. Both the preference relation and the attention parameters are identified uniquely by stochastic choice data.
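A rough sketch of this kind of random choice rule (the names, the linear preference order, and the per-alternative attention probabilities are illustrative assumptions, not the paper's notation): an alternative is chosen when it is noticed while every strictly preferred alternative in the menu goes unnoticed.

```python
def choice_prob(a, menu, pref, attention):
    """P(a chosen from menu): a is noticed while every alternative the
    agent strictly prefers to a goes unnoticed."""
    better = [b for b in menu if pref.index(b) < pref.index(a)]
    p = attention[a]
    for b in better:
        p *= 1.0 - attention[b]
    return p

# Illustrative numbers: preference x > y > z, attention probabilities gamma.
pref = ["x", "y", "z"]
gamma = {"x": 0.5, "y": 0.8, "z": 0.9}
menu = ["x", "y", "z"]
probs = {a: choice_prob(a, menu, pref, gamma) for a in menu}
# Residual probability (nothing noticed) corresponds to a default option.
```

Removing y from the menu raises the choice probability of z (whose better rival may now go unnoticed) but leaves x unaffected, which illustrates the asymmetric, menu-independent impact the abstract refers to.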
Abstract:
We study a psychologically based foundation for choice errors. The decision maker applies a preference ranking after forming a 'consideration set' prior to choosing an alternative. Membership of the consideration set is determined both by alternative-specific salience and by the rationality of the agent (his general propensity to consider all alternatives). The model turns out to include a logit formulation as a special case. In general, it has a rich set of implications both for exogenous parameters and for a situation in which alternatives can affect their own salience (salience games). Such implications are relevant to assess the link between 'revealed' preferences and 'true' preferences: for example, less rational agents may paradoxically express their preference through choice more truthfully than more rational agents.
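To see how a logit can arise as a special case, here is a generic multinomial logit sketch (not the paper's exact parameterisation) in which a scale parameter plays the role of the agent's rationality: the higher the scale, the more choice concentrates on the most salient alternative.

```python
import math

def logit_probs(saliences, rationality=1.0):
    """Multinomial logit choice probabilities; `rationality` scales the
    saliences, so higher values concentrate choice on the best option."""
    exps = [math.exp(rationality * s) for s in saliences]
    total = sum(exps)
    return [e / total for e in exps]

low = logit_probs([1.0, 0.0], rationality=0.5)   # close to uniform
high = logit_probs([1.0, 0.0], rationality=5.0)  # nearly deterministic
```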
Abstract:
This paper investigates the usefulness of switching Gaussian state space models as a tool for implementing dynamic model selection (DMS) or averaging (DMA) in time-varying parameter regression models. DMS methods allow for model switching, where a different model can be chosen at each point in time. Thus, they allow the explanatory variables in the time-varying parameter regression model to change over time. DMA carries out model averaging in a time-varying manner. We compare our exact approach to DMA/DMS with a popular existing procedure which relies on forgetting factor approximations. In an empirical application, we use DMS to select different predictors for forecasting inflation. We also compare different ways of implementing DMA/DMS and investigate whether they lead to similar results.
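The forgetting factor approximation the paper compares against can be sketched in a few lines (a Raftery-style update; the two-model setup and the numbers are purely illustrative): yesterday's model probabilities are flattened by an exponent alpha, then reweighted by each model's predictive likelihood, and DMS simply picks the highest-probability model at each date.

```python
def dma_update(prior_probs, pred_liks, alpha=0.99):
    """One DMA step with a forgetting factor: flatten yesterday's model
    probabilities (raise to alpha), then reweight by each model's
    predictive likelihood and renormalise."""
    flat = [p ** alpha for p in prior_probs]
    s = sum(flat)
    predicted = [f / s for f in flat]
    post = [p * l for p, l in zip(predicted, pred_liks)]
    z = sum(post)
    return [p / z for p in post]

model_probs = [0.5, 0.5]
for liks in [(0.8, 0.2), (0.7, 0.3)]:   # made-up predictive likelihoods
    model_probs = dma_update(model_probs, liks)
dms_pick = model_probs.index(max(model_probs))  # DMS: best model wins
```

Because alpha < 1 keeps flattening the probabilities, a model that stops fitting the data can be abandoned quickly, which is what makes the scheme suitable for model switching over time.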
Abstract:
There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time-varying parameter models which permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state space representations to model the evolution of parameters. In this paper, we show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead we develop a specification which allows the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation.
Abstract:
Agents have two forecasting models, one consistent with the unique rational expectations equilibrium, another that assumes a time-varying parameter structure. When agents use Bayesian updating to choose between models in a self-referential system, we find that learning dynamics lead to selection of one of the two models. However, there are parameter regions for which the non-rational forecasting model is selected in the long run. A key structural parameter governing outcomes measures the degree of expectations feedback in Muth's model of price determination.
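The Bayesian updating between the two forecasting models can be sketched as a simple posterior-probability recursion (two models with made-up likelihoods; this abstracts entirely from the self-referential feedback the paper studies):

```python
def bayes_update(p_a, lik_a, lik_b):
    """Posterior probability on model A given one observation's
    likelihood under model A (lik_a) and under model B (lik_b)."""
    num = p_a * lik_a
    return num / (num + (1.0 - p_a) * lik_b)

p_a = 0.5  # equal prior weight on the two forecasting models
for lik_a, lik_b in [(0.9, 0.4), (0.8, 0.5), (0.7, 0.6)]:
    p_a = bayes_update(p_a, lik_a, lik_b)
# p_a drifts toward 1 when model A keeps fitting the data better.
```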
Abstract:
This paper introduces a new model of trend (or underlying) inflation. In contrast to many earlier approaches, which allow for trend inflation to evolve according to a random walk, ours is a bounded model which ensures that trend inflation is constrained to lie in an interval. The bounds of this interval can either be fixed or estimated from the data. Our model also allows for a time-varying degree of persistence in the transitory component of inflation. The bounds placed on trend inflation mean that standard econometric methods for estimating linear Gaussian state space models cannot be used and we develop a posterior simulation algorithm for estimating the bounded trend inflation model. In an empirical exercise with CPI inflation we find the model to work well, yielding more sensible measures of trend inflation and forecasting better than popular alternatives such as the unobserved components stochastic volatility model.
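A toy version of the bounded trend (with rejection sampling standing in for the paper's posterior simulation machinery; all numbers are illustrative) keeps a random walk inside a fixed interval:

```python
import random

def bounded_trend_path(start, lo, hi, sigma, steps, seed=0):
    """Random-walk trend whose Gaussian increments are resampled
    whenever they would push the trend outside [lo, hi]."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        step = rng.gauss(0.0, sigma)
        while not (lo <= path[-1] + step <= hi):
            step = rng.gauss(0.0, sigma)
        path.append(path[-1] + step)
    return path

# Trend inflation constrained to lie between 0% and 4%.
path = bounded_trend_path(start=2.0, lo=0.0, hi=4.0, sigma=0.5, steps=200)
```

The truncation is exactly why linear Gaussian state space methods no longer apply: the innovation distribution depends on the current level of the trend.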
Abstract:
In this paper we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints with likelihood-based estimation of large systems, we rely on Kalman filter estimation with forgetting factors. We also draw on ideas from the dynamic model averaging literature and extend the TVP-VAR so that its dimension can change over time. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output, and interest rates demonstrates the feasibility and usefulness of our approach.
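The forgetting factor device can be shown on a scalar time-varying parameter regression (a standard Kalman filter in which dividing the state variance by lambda replaces an explicit state-equation variance; the data below are synthetic):

```python
def tvp_kalman_forgetting(ys, xs, lam=0.99, v=1.0, b0=0.0, p0=10.0):
    """Filtered coefficient path for y_t = b_t * x_t + e_t, e_t ~ N(0, v).
    Prediction step uses a forgetting factor: P_{t|t-1} = P_{t-1|t-1}/lam."""
    b, p = b0, p0
    coefs = []
    for y, x in zip(ys, xs):
        p_pred = p / lam            # forgetting replaces the state variance
        f = x * p_pred * x + v      # one-step forecast variance
        k = p_pred * x / f          # Kalman gain
        b = b + k * (y - x * b)     # coefficient update
        p = (1.0 - k * x) * p_pred  # variance update
        coefs.append(b)
    return coefs

# Synthetic data with a constant true coefficient of 2.
xs = [1.0] * 50
ys = [2.0] * 50
coefs = tvp_kalman_forgetting(ys, xs)
```

No state-equation variance has to be estimated, which is what keeps the approach computationally simple in large systems.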
Abstract:
Recent work on optimal monetary and fiscal policy in New Keynesian models suggests that it is optimal to allow steady-state debt to follow a random walk. Leith and Wren-Lewis (2012) consider the nature of the time-inconsistency involved in such a policy and its implication for discretionary policy-making. We show that governments are tempted, given inflationary expectations, to utilize their monetary and fiscal instruments in the initial period to change the ultimate debt burden they need to service. We demonstrate that this temptation is only eliminated if, following shocks, the new steady-state debt is equal to the original (efficient) debt level even though there is no explicit debt target in the government's objective function. Analytically and in a series of numerical simulations we show that which instrument is used to stabilize the debt depends crucially on the degree of nominal inertia and the size of the debt stock. We also show that the welfare consequences of introducing debt are negligible for precommitment policies, but can be significant for discretionary policy. Finally, we assess the credibility of commitment policy by considering a quasi-commitment policy which allows for different probabilities of reneging on past promises. This online Appendix extends the results of this paper.
Abstract:
We model the choice behaviour of an agent who suffers from imperfect attention. We define inattention axiomatically through preference over menus and endowed alternatives: an agent is inattentive if it is better to be endowed with an alternative a than to be allowed to pick a from a menu in which a is the best alternative. This property and vNM rationality on the domain of menus and alternatives imply that the agent notices each alternative with a given menu-dependent probability (attention parameter) and maximises a menu-independent utility function over the alternatives he notices. Preference for flexibility restricts the model to menu-independent attention parameters as in Manzini and Mariotti [19]. Our theory explains anomalies (e.g. the attraction and compromise effect) that the Random Utility Model cannot accommodate.
Abstract:
An expanding literature articulates the view that Taylor rules are helpful in predicting exchange rates. In a changing world, however, Taylor rule parameters may be subject to structural instabilities, for example during the Global Financial Crisis. This paper forecasts exchange rates using such Taylor rules with time-varying parameters (TVP) estimated by Bayesian methods. In core out-of-sample results, we improve upon a random walk benchmark for at least half, and for as many as eight out of ten, of the currencies considered. This contrasts with a constant parameter Taylor rule model that yields a more limited improvement upon the benchmark. In further results, Purchasing Power Parity and Uncovered Interest Rate Parity TVP models beat a random walk benchmark, implying our methods have some generality in exchange rate prediction.
Abstract:
This paper employs an unobserved component model that incorporates a set of economic fundamentals to obtain the Euro-Dollar permanent equilibrium exchange rates (PEER) for the period 1975Q1 to 2008Q4. The results show that for most of the sample period, the Euro-Dollar exchange rate closely followed the values implied by the PEER. The only significant deviations from the PEER occurred in the years immediately before and after the introduction of the single European currency. The forecasting exercise shows that incorporating economic fundamentals provides a better long-run exchange rate forecasting performance than a random walk process.
Abstract:
We analyse the role of time variation in coefficients and other sources of uncertainty in exchange rate forecasting regressions. Our techniques incorporate the notion that the relevant set of predictors, and their corresponding weights, change over time. We find that predictive models which allow for sudden, rather than smooth, changes in coefficients significantly beat the random walk benchmark in an out-of-sample forecasting exercise. Using an innovative variance decomposition scheme, we identify uncertainty in coefficient estimation and uncertainty about the precise degree of coefficient variability as the main factors hindering the models' forecasting performance. Uncertainty regarding the choice of predictor is small.
Abstract:
This paper extends the Nelson-Siegel linear factor model by developing a flexible macro-finance framework for modeling and forecasting the term structure of US interest rates. Our approach is robust to parameter uncertainty and structural change, as we consider instabilities in parameters and volatilities, and our model averaging method allows for investors' model uncertainty over time. Our time-varying parameter Nelson-Siegel Dynamic Model Averaging (NS-DMA) predicts yields better than standard benchmarks and successfully captures plausible time-varying term premia in real time. The proposed model has significant in-sample and out-of-sample predictability for excess bond returns, and the predictability is of economic value.