1000 results for Sire model
Abstract:
This paper attempts to extend existing models of political agency to an environment in which voting may be divided between informed and instrumental, informed and ‘expressive’ (Brennan and Lomasky (1993)), and uninformed due to ‘rational irrationality’ (Caplan (2007)). It constructs a model where politicians may be good, bad or populist. Populists are more willing than good politicians to pander to voters, who in a large-group electoral setting may choose inferior policies, because each individual vote is insignificant, compared with the policies voters would choose were their vote decisive in determining the electoral outcome. Bad politicians would ideally like to extract tax revenue for their own ends. Initially we assume the existence of only good and populist politicians. The paper investigates the incentives for good politicians to pool with or separate from populists and focuses on three key issues: (1) how far the majority of voters’ preferences are from those held by the better informed incumbent politician; (2) the extent to which the population exhibits rational irrationality and expressiveness (jointly labelled as emotional); and (3) the cost involved in persuading uninformed voters to change their views, in terms of composing messages and spreading them. The paper goes on to consider how the inclusion of bad politicians may affect the behaviour of good politicians and suggests that a small amount of potential corruption may be socially useful. It is also argued that where bad politicians have an incentive to mimic the behaviour of good and populist politicians, the latter types may have an incentive to separate from bad politicians by investing in costly public education signals. The paper also discusses the implications of the model for whether fiscal restraints should be soft or hard.
Abstract:
This paper develops methods for Stochastic Search Variable Selection (currently popular with regression and Vector Autoregressive models) for Vector Error Correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and then perform either model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model.
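The spike-and-slab prior at the core of SSVS can be illustrated in a few lines: each coefficient's inclusion indicator is updated by comparing a small-variance "spike" prior density against a large-variance "slab" density. The prior settings (`tau`, `c`, `p_incl`) below are toy illustrative values, not those used in the paper, and the VECM context is omitted entirely:

```python
import math
import random

def normal_pdf(x, sd):
    # Density of a mean-zero normal with standard deviation sd
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def update_indicator(beta, tau=0.01, c=100.0, p_incl=0.5):
    # One spike-and-slab Gibbs step for an inclusion indicator:
    # slab = prior weight on "coefficient matters" (sd = c * tau),
    # spike = prior weight on "coefficient is essentially zero" (sd = tau).
    slab = p_incl * normal_pdf(beta, c * tau)
    spike = (1 - p_incl) * normal_pdf(beta, tau)
    prob = slab / (slab + spike)
    return random.random() < prob, prob

# A coefficient far from zero is almost surely flagged as included...
_, p_big = update_indicator(0.8)
# ...while one very close to zero is almost surely excluded.
_, p_small = update_indicator(0.0005)
```

Running the sampler over all coefficients and iterations yields posterior inclusion probabilities, which is what makes automatic model selection or averaging over many restrictions computationally feasible.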
Abstract:
We introduce duration dependent skill decay among the unemployed into a New-Keynesian model with hiring frictions developed by Blanchard/Gali (2008). If the central bank responds only to (current, lagged or expected future) inflation and quarterly skill decay is above a threshold level, determinacy requires a coefficient on inflation smaller than one. The threshold level is plausible with little steady-state hiring and firing ("Continental European Calibration") but implausibly high in the opposite case ("American calibration"). Neither interest rate smoothing nor responding to the output gap helps to restore determinacy if skill decay exceeds the threshold level. However, a modest response to unemployment guarantees determinacy. Moreover, under indeterminacy, both an adverse sunspot shock and an adverse technology shock increase unemployment extremely persistently.
Abstract:
Hong Kong’s currency is pegged to the US dollar in a currency board arrangement. In autumn 2003, the Hong Kong dollar appreciated from close to 7.80 per US dollar to 7.70, as investors feared that the currency board would be abandoned. In the wake of this appreciation, the monetary authorities revamped the one-sided currency board mechanism into a symmetric two-sided system with a narrow exchange rate band. This paper reviews the characteristics of the new currency board arrangement and embeds a theoretical soft-edge target zone model, which typifies many intermediate regimes, to explain the notable speculative peace and credibility achieved since May 2005.
Abstract:
National inflation rates reflect domestic and international (regional and global) influences. The relative importance of these components remains a controversial empirical issue. We extend the literature on inflation co-movement by utilising a dynamic factor model with stochastic volatility to account for shifts in the variance of inflation and endogenously determined regional groupings. We find that most of the variability in inflation is explained by the country-specific disturbance term. Nevertheless, the contribution of the global component in explaining industrialised countries’ inflation rates has increased over time.
Abstract:
Paper delivered at the Western Regional Science Association Annual Conference, Sedona, Arizona, February, 2010.
Abstract:
Less is known about social welfare objectives when it is costly to change prices, as in Rotemberg (1982), compared with Calvo-type models. We derive a quadratic approximate welfare function around a distorted steady state for the costly price adjustment model. We highlight the similarities and differences to the Calvo setup. Both models imply inflation and output stabilization goals. It is explained why the degree of distortion in the economy influences inflation aversion in the Rotemberg framework in a way that differs from the Calvo setup.
Abstract:
This paper demonstrates that an asset pricing model with least-squares learning can lead to bubbles and crashes as endogenous responses to the fundamentals driving asset prices. When agents are risk-averse they need to make forecasts of the conditional variance of a stock’s return. Recursive updating of both the conditional variance and the expected return implies several mechanisms through which learning impacts stock prices. Extended periods of excess volatility, bubbles and crashes arise with a frequency that depends on the extent to which past data is discounted. A central role is played by changes over time in agents’ estimates of risk.
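The recursive updating of beliefs that the abstract describes can be sketched with a constant-gain learning rule, in which the gain `g` controls how strongly past data are discounted: a larger gain makes beliefs, and hence prices, more responsive to recent observations. The pricing rule and all parameter values here are stylized assumptions for illustration, not the paper's specification:

```python
import random

random.seed(1)
T = 500
g = 0.05   # constant gain: larger g discounts past data faster
a = 2.0    # risk-aversion coefficient (assumed value)

mu, var = 1.0, 0.01   # initial beliefs: expected return and conditional variance
prices = []
for _ in range(T):
    r = 1.0 + random.gauss(0, 0.1)          # fake fundamentals-driven return
    mu = mu + g * (r - mu)                  # recursive update of expected return
    var = var + g * ((r - mu) ** 2 - var)   # recursive update of conditional variance
    prices.append(mu - a * var)             # stylized risk-adjusted pricing rule
```

Because risk-averse agents price the asset using both `mu` and `var`, a run of high returns simultaneously raises the perceived mean and, once reversed, the perceived risk, which is the kind of joint mechanism through which learning generates excess volatility, bubbles and crashes.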
Abstract:
Robust decision making implies welfare costs or robustness premia when the approximating model is the true data generating process. To examine the importance of these premia at the aggregate level we employ a simple two-sector dynamic general equilibrium model with human capital and introduce an additional form of precautionary behavior. The latter arises from the robust decision maker’s ability to reduce the effects of model misspecification through allocating time and existing human capital to this end. We find that the extent of the robustness premia critically depends on the productivity of time relative to that of human capital. When the relative efficiency of time is low, despite transitory welfare costs, there are gains from following robust policies in the long run. In contrast, high relative productivity of time implies misallocation costs that remain even in the long run. Finally, depending on the technology used to reduce model uncertainty, we find that while increasing the fear of model misspecification leads to a net increase in precautionary behavior, investment and output can fall.
Abstract:
Untreated wastewater discharged directly into rivers is a serious environmental hazard that needs to be tackled urgently in many countries. In order to safeguard river ecosystems and reduce water pollution, it is important to have an effluent charge policy that promotes investment in wastewater treatment technology by domestic firms. This paper considers the strategic interaction between the government and domestic firms regarding investment in wastewater treatment technology and the design of the optimal effluent charge policy. In this model, the higher the proportion of non-investing firms, the higher the probability of having to incur an effluent charge and the higher that charge. On the one hand, the government needs to impose a sufficiently strict policy to ensure that firms have a strong incentive to invest. On the other hand, the policy cannot be so strict that it drives out firms which cannot afford such expensive technology. The paper analyses the factors that affect the probability of investment in this technology. It also explains the difficulty of imposing a strict environmental policy in countries that have many small firms which cannot afford to invest unless subsidised.
Abstract:
We forecast quarterly US inflation based on the generalized Phillips curve using econometric methods which incorporate dynamic model averaging. These methods not only allow for coefficients to change over time, but also allow for the entire forecasting model to change over time. We find that dynamic model averaging leads to substantial forecasting improvements over simple benchmark regressions and more sophisticated approaches such as those using time-varying coefficient models. We also provide evidence on which sets of predictors are relevant for forecasting in each period.
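The model-probability recursion that underlies dynamic model averaging can be sketched as follows: each model's weight is raised to a forgetting factor and multiplied by its predictive likelihood, so models that forecast well recently gain weight and the "best" model can change over time. The two candidate "models", the data, and all parameter values below are toy assumptions, not the paper's setup:

```python
import math
import random

random.seed(0)
T = 200
# Fake inflation series: small random-walk steps around 2 percent
y = [2.0]
for _ in range(T - 1):
    y.append(y[-1] + random.gauss(0, 0.01))

alpha = 0.99    # forgetting factor: discounts old forecasting performance
sigma = 0.05    # assumed forecast-error standard deviation
w = [0.5, 0.5]  # model probabilities: [random-walk model, recursive-mean model]
for t in range(1, T):
    f = [y[t - 1], sum(y[:t]) / t]            # each model's one-step forecast
    lik = [math.exp(-0.5 * ((y[t] - fj) / sigma) ** 2) for fj in f]
    w = [(wj ** alpha) * lj for wj, lj in zip(w, lik)]
    s = sum(w)
    w = [wj / s for wj in w]                  # renormalize to probabilities
```

The combined forecast at each date is the probability-weighted average of the individual model forecasts; with `alpha < 1`, weights adapt quickly when the data start favouring a different predictor set.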
Abstract:
In Evans, Guse, and Honkapohja (2008) the intended steady state is locally but not globally stable under adaptive learning, and unstable deflationary paths can arise after large pessimistic shocks to expectations. In the current paper a modified model is presented that includes a locally stable stagnation regime as a possible outcome arising from large expectation shocks. Policy implications are examined. Sufficiently large temporary increases in government spending can dislodge the economy from the stagnation regime and restore the natural stabilizing dynamics. More specific policy proposals are presented and discussed.
Abstract:
This paper investigates underlying changes in the UK economy over the past thirty-five years using a small open economy DSGE model. Using Bayesian analysis, we find that UK monetary policy, nominal price rigidity and exogenous shocks are all subject to regime shifting. A model incorporating these changes is used to estimate the realised monetary policy and derive the optimal monetary policy for the UK. This allows us to assess the effectiveness of the realised policy in terms of stabilising economic fluctuations and, in turn, provide an indication of whether there is room for monetary authorities to further improve their policies.
Abstract:
This paper considers the instrumental variable regression model when there is uncertainty about the set of instruments, exogeneity restrictions, the validity of identifying restrictions and the set of exogenous regressors. This uncertainty can result in a huge number of models. To avoid statistical problems associated with standard model selection procedures, we develop a reversible jump Markov chain Monte Carlo algorithm that allows us to do Bayesian model averaging. The algorithm is very flexible and can be easily adapted to analyze any of the different priors that have been proposed in the Bayesian instrumental variables literature. We show how to calculate the probability of any relevant restriction (e.g. the posterior probability that over-identifying restrictions hold) and discuss diagnostic checking using the posterior distribution of discrepancy vectors. We illustrate our methods in a returns-to-schooling application.