50 results for Panel VAR models
Abstract:
Until recently, much effort has been devoted to the estimation of panel data regression models without adequate attention being paid to the drivers of diffusion and interaction across cross-section and spatial units. We discuss some new methodologies in this emerging area and demonstrate their use in measurement and inference on cross-section and spatial interactions. Specifically, we highlight the important distinction between spatial dependence driven by unobserved common factors and that based on a spatial weights matrix. We argue that purely factor-driven models of spatial dependence may be somewhat inadequate because of their connection with the exchangeability assumption. Limitations and potential enhancements of the existing methods are discussed, and several directions for new research are highlighted.
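The distinction above hinges on the spatial weights matrix W. A minimal sketch of what W encodes (the contiguity pattern and data are illustrative assumptions, not from the paper): build a row-normalized W for units arranged on a line, where each unit's neighbours are those immediately adjacent, and form the spatial lag Wy.

```python
import numpy as np

n = 5
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):                 # contiguity: adjacent units only
        if 0 <= j < n:
            W[i, j] = 1.0
W = W / W.sum(axis=1, keepdims=True)         # row-normalize: each row sums to one

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Wy = W @ y                                   # spatial lag: average of neighbours
print(Wy)                                    # [2. 2. 3. 4. 4.]
```

The spatial lag of each unit is simply the average outcome of its neighbours, which is what enters an endogenous or exogenous spatial lag term.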
Abstract:
Block factor methods offer an attractive approach to forecasting with many predictors. These methods extract the information in the predictors into factors reflecting different blocks of variables (e.g. a price block, a housing block, a financial block, etc.). However, a forecasting model which simply includes all blocks as predictors risks being over-parameterized. Thus, it is desirable to use a methodology which allows for different parsimonious forecasting models to hold at different points in time. In this paper, we use dynamic model averaging and dynamic model selection to achieve this goal. These methods automatically alter the weights attached to different forecasting models as evidence comes in about which models have forecast well in the recent past. In an empirical study involving forecasting output growth and inflation using 139 UK monthly time series variables, we find that the set of predictors changes substantially over time. Furthermore, our results show that dynamic model averaging and model selection can greatly improve forecast performance relative to traditional forecasting methods.
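The weight-updating step described here can be sketched with a forgetting-factor recursion in the style of Raftery-type dynamic model averaging. This is a toy illustration, not the paper's implementation: the two candidate forecasters (random walk vs. expanding-window mean), the forgetting factor of 0.95, and the unit predictive variance are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200)) + rng.normal(scale=0.5, size=200)

alpha = 0.95               # forgetting factor: past performance decays
sigma2 = 1.0               # assumed common predictive variance
w = np.array([0.5, 0.5])   # initial model probabilities
weights_path = []

for t in range(1, len(y)):
    # two illustrative forecasters: random walk vs. expanding-window mean
    preds = np.array([y[t - 1], y[:t].mean()])
    # forgetting step: flatten past weights before seeing the new observation
    w_pred = w**alpha
    w_pred /= w_pred.sum()
    # update each weight by the model's Gaussian predictive likelihood of y[t]
    lik = np.exp(-0.5 * (y[t] - preds) ** 2 / sigma2)
    w = w_pred * lik
    w /= w.sum()
    weights_path.append(w.copy())

print(w)   # final weights: the random-walk forecaster should dominate
```

Because the series is close to a random walk, the weight on the random-walk forecaster climbs quickly; the forgetting factor is what lets the weights shift back if the other model starts forecasting better.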
Abstract:
This paper develops methods for Stochastic Search Variable Selection (currently popular with regression and Vector Autoregressive models) for Vector Error Correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and either do model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model.
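The core SSVS mechanics can be sketched for a plain linear regression in the spirit of George and McCulloch's spike-and-slab Gibbs sampler; the paper's VECM extension is far richer. The spike/slab scales, prior inclusion probability, and fixed error variance below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 5
X = rng.normal(size=(n, k))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

tau0, tau1 = 0.01, 10.0   # spike (near zero) and slab prior std devs
p_incl = 0.5              # prior inclusion probability
sigma2 = 1.0              # error variance, held fixed for simplicity

gamma = np.ones(k, dtype=int)
incl_draws = np.zeros(k)
n_iter, burn = 2000, 500

for it in range(n_iter):
    # draw beta | gamma: conjugate normal with prior sd tau1 or tau0 per coefficient
    D_inv = np.diag(1.0 / np.where(gamma == 1, tau1, tau0) ** 2)
    V = np.linalg.inv(X.T @ X / sigma2 + D_inv)
    m = V @ (X.T @ y / sigma2)
    beta = rng.multivariate_normal(m, V)
    # draw gamma | beta: compare spike vs. slab densities at each beta_j
    for j in range(k):
        slab = p_incl * np.exp(-0.5 * beta[j] ** 2 / tau1**2) / tau1
        spike = (1 - p_incl) * np.exp(-0.5 * beta[j] ** 2 / tau0**2) / tau0
        gamma[j] = rng.random() < slab / (slab + spike)
    if it >= burn:
        incl_draws += gamma

incl_prob = incl_draws / (n_iter - burn)
print(incl_prob)   # posterior inclusion probabilities per regressor
```

Averaging the sampled indicators after burn-in gives posterior inclusion probabilities, which is exactly what allows automatic model averaging or selection over many restrictions.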
Abstract:
This paper develops stochastic search variable selection (SSVS) for zero-inflated count models which are commonly used in health economics. This allows for either model averaging or model selection in situations with many potential regressors. The proposed techniques are applied to a data set from Germany considering the demand for health care. A package for the free statistical software environment R is provided.
Abstract:
This paper proposes a bootstrap artificial neural network based panel unit root test in a dynamic heterogeneous panel context. An application to a panel of bilateral real exchange rate series against the US Dollar for 20 major OECD countries is provided to investigate Purchasing Power Parity (PPP). The combination of neural networks and bootstrapping significantly changes the findings of the economic study in favour of PPP.
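The bootstrap step can be illustrated with a stripped-down residual-bootstrap unit root test for a single series; the neural-network component of the paper's test is omitted here, and the AR(1) setup and 499 replications are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(size=200))          # a random walk: the null is true

def df_stat(series):
    """t-statistic on (rho - 1) from regressing y_t on y_{t-1}."""
    x, z = series[:-1], series[1:]
    rho = (x @ z) / (x @ x)
    resid = z - rho * x
    se = np.sqrt(resid @ resid / (len(x) - 1) / (x @ x))
    return (rho - 1.0) / se

stat = df_stat(y)
resid = np.diff(y)                           # residuals under the unit root null
B = 499
boot = np.empty(B)
for b in range(B):
    # rebuild a unit-root series from resampled residuals, recompute the statistic
    e = rng.choice(resid, size=len(resid), replace=True)
    boot[b] = df_stat(np.concatenate([[y[0]], y[0] + np.cumsum(e)]))

pval = np.mean(boot <= stat)                 # left-tailed bootstrap p-value
print(pval)
```

Resampling residuals with the unit root imposed generates the null distribution of the test statistic without relying on asymptotic critical values, which is the rationale for bootstrapping in this setting.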
Abstract:
The disconnect between rising short-term and low long-term interest rates has been a distinctive feature of the 2000s. Both research and policy circles have argued that international forces, such as global monetary policy (e.g. Rogoff, 2006); international business cycles (e.g. Borio and Filardo, 2007); or a global savings glut (e.g. Bernanke, 2005) may be responsible. In this paper, we employ recent advances in panel data econometrics to document the disconnect and link it explicitly to the existence of a global latent factor that dominates the long end of the term spread for the recent period; the savings glut story emerges as the most likely contender for the global factor.
Abstract:
We propose an alternative approach to obtaining a permanent equilibrium exchange rate (PEER), based on an unobserved components (UC) model. This approach offers a number of advantages over the conventional cointegration-based PEER. Firstly, we do not rely on the prerequisite that cointegration has to be found between the real exchange rate and macroeconomic fundamentals to obtain non-spurious long-run relationships and the PEER. Secondly, the impact that the permanent and transitory components of the macroeconomic fundamentals have on the real exchange rate can be modelled separately in the UC model. This is important for variables where the long and short-run effects may drive the real exchange rate in opposite directions, such as the relative government expenditure ratio. We also demonstrate that our proposed exchange rate models have good out-of-sample forecasting properties. Our approach would be a useful technique for central banks to estimate the equilibrium exchange rate and to forecast the long-run movements of the exchange rate.
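The permanent/transitory separation can be illustrated with the simplest UC model, a random-walk level plus white noise, filtered with a scalar Kalman recursion. This is only a sketch of the idea behind a UC decomposition; the variances are treated as known and the single-component specification is an illustrative assumption, not the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 300
mu = np.cumsum(rng.normal(scale=0.3, size=T))   # permanent component (random walk)
y = mu + rng.normal(scale=1.0, size=T)          # observed = permanent + transitory

q, r = 0.3**2, 1.0**2    # state and measurement noise variances (assumed known)
m, P = 0.0, 10.0         # Kalman filter state mean and variance
perm = np.empty(T)

for t in range(T):
    P = P + q                      # predict: the level follows a random walk
    K = P / (P + r)                # Kalman gain
    m = m + K * (y[t] - m)         # update with the observation y_t
    P = (1 - K) * P
    perm[t] = m                    # filtered estimate of the permanent part

trans = y - perm                   # implied transitory component
print(np.corrcoef(perm, mu)[0, 1]) # filtered level closely tracks the true one
```

In the PEER context, the filtered permanent component plays the role of the equilibrium rate, while the transitory component captures short-run deviations.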
Abstract:
The so-called German Dominance Hypothesis (GDH) claimed that Bundesbank policies were transmitted into other European Monetary System (EMS) interest rates during the pre-euro era. We reformulate this hypothesis for the Central and Eastern European (CEE) countries that are on the verge of acceding to the eurozone. We test this "Euro Dominance Hypothesis (EDH)" in a novel way using a global vector autoregressive (GVAR) approach that combines country-specific error correction models in a global system. We find that euro area monetary policies are transmitted into CEE interest rates, which provides evidence for monetary integration between the eurozone and CEE countries. Our framework also allows for introducing global monetary shocks to provide empirical evidence regarding the effects of the recent financial crisis on monetary integration in Europe.
Abstract:
The revival of support for a living wage has reopened a long-run debate over the extent to which active regulation of labour markets may be necessary to attain desired outcomes. Market failure is suggested to result in lower wages and remuneration for low skilled workers than might otherwise be expected from models of perfect competition. This paper examines the theoretical underpinning of living wage campaigns and demonstrates that once we move away from idealised models of perfect competition to one where employers retain power over the bargaining process, such as monopsony, it is readily understandable that low wages may be endemic in low skilled employment contracts. The paper then examines evidence, derived from the UK Quarterly Labour Force Survey, for the extent to which a living wage will address low pay within the labour force. We highlight the greater incidence of low pay within the private sector and then focus upon the public sector where the Living Wage demand has had most impact. We examine the extent to which addressing low pay within the public sector increases costs. We further highlight the evidence that a predominance of low pay exists among public sector young and women workers (and in particular lone parent women workers) but not, perhaps surprisingly, among workers from ethnic minority backgrounds. The paper then builds upon the results from the Quarterly Labour Force Survey with analysis of the British Household Panel Survey in order to examine the impact the introduction of a living wage, within the public sector, would have in reducing household inequality. The paper concludes that a living wage is indeed an appropriate regulatory response to market failure for low skilled workers and can act to reduce age and gender pay inequality, and reduce household income inequality among in-work households below average earnings.
Abstract:
This paper addresses the hotly-debated question: do Chinese firms overinvest? A firm-level dataset of 100,000 firms over the period 2000-07 is employed for this purpose. We initially calculate measures of investment efficiency, which is typically negatively associated with overinvestment. Despite wide disparities across various ownership groups, industries and regions, we find that corporate investment in China has become increasingly efficient over time. However, based on direct measures of overinvestment that we subsequently calculate, we find evidence of overinvestment for all types of firms, even in the most efficient and most profitable private sector. We find that the free cash flow hypothesis provides a good explanation for China's overinvestment, especially for the private sector, while in the state sector, overinvestment is attributable to the poor screening and monitoring of enterprises by banks.
Abstract:
Spatial econometrics has been criticized by some economists because some model specifications have been driven by data-analytic considerations rather than having a firm foundation in economic theory. In particular this applies to the so-called W matrix, which is integral to the structure of endogenous and exogenous spatial lags, and to spatial error processes, and which are almost the sine qua non of spatial econometrics. Moreover it has been suggested that the significance of a spatially lagged dependent variable involving W may be misleading, since it may be simply picking up the effects of omitted spatially dependent variables, incorrectly suggesting the existence of a spillover mechanism. In this paper we review the theoretical and empirical rationale for network dependence and spatial externalities as embodied in spatially lagged variables, arguing that failing to acknowledge their presence at least leads to biased inference, can be a cause of inconsistent estimation, and leads to an incorrect understanding of true causal processes.
Abstract:
In recent years there has been increasing concern about the identification of parameters in dynamic stochastic general equilibrium (DSGE) models. Given the structure of DSGE models it may be difficult to determine whether a parameter is identified. For the researcher using Bayesian methods, a lack of identification may not be evident since the posterior of a parameter of interest may differ from its prior even if the parameter is unidentified. We show that this can be the case even if the priors assumed on the structural parameters are independent. We suggest two Bayesian identification indicators that do not suffer from this difficulty and are relatively easy to compute. The first applies to DSGE models where the parameters can be partitioned into those that are known to be identified and the rest where it is not known whether they are identified. In such cases the marginal posterior of an unidentified parameter will equal the posterior expectation of the prior for that parameter conditional on the identified parameters. The second indicator is more generally applicable and considers the rate at which the posterior precision gets updated as the sample size (T) is increased. For identified parameters the posterior precision rises with T, whilst for an unidentified parameter its posterior precision may be updated but its rate of update will be slower than T. This result assumes that the identified parameters are √T-consistent, but similar differential rates of updates for identified and unidentified parameters can be established in the case of super-consistent estimators. These results are illustrated by means of simple DSGE models.
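The second indicator can be checked in a toy conjugate example (an illustrative assumption, not one of the paper's DSGE models): with observations y_t ~ N(theta1 + theta2, 1) and a N(0, I) prior, only the sum theta1 + theta2 is identified, while theta1 alone is not. The posterior precision matrix is I + T aa' with a = (1, 1), so the two rates of update can be computed exactly.

```python
import numpy as np

a = np.array([1.0, 1.0])
for T in [10, 100, 1000, 10000]:
    prec = np.eye(2) + T * np.outer(a, a)   # posterior precision matrix
    cov = np.linalg.inv(prec)
    prec_theta1 = 1.0 / cov[0, 0]           # unidentified: plateaus near 2
    prec_sum = 1.0 / (a @ cov @ a)          # identified: grows at rate T
    print(T, round(prec_theta1, 3), round(prec_sum, 1))
```

Analytically, the posterior precision of theta1 is (1 + 2T)/(1 + T), which converges to 2 (twice the prior precision) rather than growing, while the precision of theta1 + theta2 is (1 + 2T)/2, which grows linearly in T, matching the differential rates described above.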
Abstract:
Macroeconomists working with multivariate models typically face uncertainty over which (if any) of their variables have long run steady states which are subject to breaks. Furthermore, the nature of the break process is often unknown. In this paper, we draw on methods from the Bayesian clustering literature to develop an econometric methodology which: i) finds groups of variables which have the same number of breaks; and ii) determines the nature of the break process within each group. We present an application involving a five-variate steady-state VAR.
Abstract:
This paper compares the forecasting performance of different models which have been proposed for forecasting in the presence of structural breaks. These models differ in their treatment of the break process, the parameters defining the model which applies in each regime and the out-of-sample probability of a break occurring. In an extensive empirical evaluation involving many important macroeconomic time series, we demonstrate the presence of structural breaks and their importance for forecasting in the vast majority of cases. However, we find no single forecasting model consistently works best in the presence of structural breaks. In many cases, the formal modeling of the break process is important in achieving good forecast performance. However, there are also many cases where simple, rolling OLS forecasts perform well.
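The simple rolling OLS benchmark mentioned here is easy to sketch: refit the regression on only the most recent `window` observations before each one-step forecast, so post-break data quickly dominate. The coefficient-break design, window length of 40, and single exogenous regressor are illustrative assumptions, not the paper's empirical setup.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 400
x = rng.normal(size=T)
beta = np.where(np.arange(T) < 200, 1.0, 3.0)   # coefficient break at t = 200
y = beta * x + rng.normal(size=T)

window = 40
errs_roll, errs_full = [], []
for t in range(250, T):                          # forecast the post-break period
    for lo, errs in [(t - window, errs_roll), (0, errs_full)]:
        # OLS of y on (1, x) over [lo, t), then a one-step forecast for period t
        X1 = np.column_stack([np.ones(t - lo), x[lo:t]])
        b = np.linalg.lstsq(X1, y[lo:t], rcond=None)[0]
        errs.append(y[t] - (b[0] + b[1] * x[t]))

mse_roll = np.mean(np.square(errs_roll))
mse_full = np.mean(np.square(errs_full))
print(mse_roll, mse_full)   # rolling adapts to the break; the full sample does not
```

After the break, the full-sample estimates remain a blend of both regimes, while the rolling window discards pre-break observations, which is why rolling forecasts can perform well despite their simplicity.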