9 results for Polynomial distributed lag models

in the Digital Repository of Fundação Getúlio Vargas (FGV)


Relevance:

100.00%

Publisher:

Abstract:

This dissertation comprises three essays on empirical tests of Phillips curves, IS curves, and the interaction between fiscal and monetary policy. The first essay ("Phillips Curves: An Encompassing Test") tests Phillips curves using an autoregressive distributed lag (ADL) specification that encompasses the Accelerationist Phillips Curve (APC), the New Keynesian Phillips Curve (NKPC), the Hybrid Phillips Curve (HPC), and the Sticky-Information Phillips Curve (SIPC). We use data for the United States (1985Q1--2007Q4) and Brazil (1996Q1--2012Q2), with the output gap and, alternatively, real marginal cost as the measure of inflationary pressure. The empirical evidence rejects the restrictions implied by the NKPC, the HPC, and the SIPC, but does not reject those of the APC. The second essay ("IS Curves: An Encompassing Test") tests IS curves using an ADL specification that encompasses the traditional Keynesian IS curve (KISC), the New Keynesian IS curve (NKISC), and the Hybrid IS curve (HISC). We use data for the United States (1985Q1--2007Q4) and Brazil (1996Q1--2012Q2). The empirical evidence rejects the restrictions implied by the NKISC and the HISC, but does not reject those of the KISC. The third essay ("The Effects of Fiscal Policy and Its Interactions with Monetary Policy") analyzes the effects of fiscal policy shocks on the dynamics of the economy and the interaction between fiscal and monetary policy using SVAR models. We test the Fiscal Theory of the Price Level for Brazil by analyzing the response of public-sector liabilities to shocks in the primary surplus. Under hybrid identification, we find that it is not possible to distinguish empirically between the Ricardian (Monetary Dominance) and non-Ricardian (Fiscal Dominance) regimes. However, using sign-restriction identification, there is evidence that the government followed a Ricardian (Monetary Dominance) regime from January 2000 to June 2008.
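The ADL specification used in the first two essays can be illustrated with a minimal sketch: an ADL(1,1) regression of a variable on its own lag and on the current and lagged value of a driving series, fitted by ordinary least squares. The data and coefficients below are synthetic (not from the dissertation), chosen noise-free so that OLS recovers them exactly.

```python
import numpy as np

# ADL(1,1): y_t = c + a*y_{t-1} + b0*x_t + b1*x_{t-1}, fitted by OLS.
def fit_adl11(y, x):
    Y = y[1:]
    X = np.column_stack([np.ones(len(Y)), y[:-1], x[1:], x[:-1]])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef  # [c, a, b0, b1]

# Synthetic data generated from known coefficients with no noise,
# so the regression recovers them exactly (up to floating point).
t = np.arange(200)
x = np.sin(0.3 * t) + 0.5 * np.cos(0.7 * t)  # stand-in "inflationary pressure"
y = np.zeros(200)
for s in range(1, 200):
    y[s] = 0.2 + 0.6 * y[s - 1] + 0.8 * x[s] - 0.3 * x[s - 1]

c, a, b0, b1 = fit_adl11(y, x)
```

Encompassing tests of the kind the essays describe then amount to testing coefficient restrictions on such an ADL against the unrestricted fit.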

Relevance:

30.00%

Publisher:

Abstract:

Using vector autoregressive (VAR) models and Monte Carlo simulation methods, we investigate the potential gains for forecasting accuracy and estimation uncertainty of two commonly used restrictions arising from economic relationships. The first reduces the parameter space by imposing long-term restrictions on the behavior of economic variables, as discussed in the literature on cointegration, and the second reduces the parameter space by imposing short-term restrictions, as discussed in the literature on serial-correlation common features (SCCF). Our simulations cover three important issues in model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing the lag length for cointegrated VARs with SCCF restrictions. Second, we compare the forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm in which short- and long-term restrictions interact to estimate the cointegrating and the cofeature spaces, respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models with too small a lag length. Criteria selecting lag and rank simultaneously have superior performance in this case. Second, this translates into superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy, reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even when we consider the estimation of long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models.
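The information criteria compared in the first simulation exercise trade off fit against parsimony. A minimal sketch of the standard (unmodified) criteria, applied to hypothetical log-likelihoods for VAR(p) candidates; the numbers are illustrative, not from the paper:

```python
import numpy as np

def info_criteria(loglik, k, T):
    """Standard criteria for a model with log-likelihood `loglik`,
    k free parameters, and T observations."""
    aic = -2.0 * loglik + 2.0 * k
    bic = -2.0 * loglik + k * np.log(T)
    hq = -2.0 * loglik + 2.0 * k * np.log(np.log(T))
    return aic, bic, hq

# Hypothetical log-likelihoods for bivariate VAR(p) candidates, p = 1..4,
# with k = 2 + 4p parameters (intercepts plus p 2x2 lag matrices), T = 100.
T = 100
logliks = {1: -310.0, 2: -301.0, 3: -300.2, 4: -299.9}
aics = {p: info_criteria(ll, 2 + 4 * p, T)[0] for p, ll in logliks.items()}
bics = {p: info_criteria(ll, 2 + 4 * p, T)[1] for p, ll in logliks.items()}
best_by_aic = min(aics, key=aics.get)  # argmin over candidate lag lengths
best_by_bic = min(bics, key=bics.get)
```

With these illustrative numbers the heavier log(T) penalty of BIC selects p = 1 while AIC selects p = 2, showing how the penalty drives the choice; the paper's point is that criteria selecting lag and rank simultaneously do better than such lag-only selection when SCCF restrictions are present.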

Relevance:

30.00%

Publisher:

Abstract:

We study the joint determination of the lag length, the dimension of the cointegrating space, and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties for a lack of parsimony, as well as the traditional ones. We suggest a new procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties. In order to compute the fit of each model, we propose an iterative procedure to compute the maximum likelihood estimates of the parameters of a VAR model with short-run and long-run restrictions. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank, relative to the commonly used procedure of selecting the lag length only and then testing for cointegration.

Relevance:

30.00%

Publisher:

Abstract:

We study the joint determination of the lag length, the dimension of the cointegrating space, and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties as well as the traditional ones. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank using our proposed procedure, relative to an unrestricted VAR or a cointegrated VAR estimated by the commonly used procedure of selecting the lag length only and then testing for cointegration. Two empirical applications, forecasting Brazilian inflation and growth rates of U.S. macroeconomic aggregates respectively, show the usefulness of the model-selection strategy proposed here. The gains in different measures of forecasting accuracy are substantial, especially for short horizons.

Relevance:

30.00%

Publisher:

Abstract:

We study the joint determination of the lag length, the dimension of the cointegrating space, and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties as well as the traditional ones. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank using our proposed procedure, relative to an unrestricted VAR or a cointegrated VAR estimated by the commonly used procedure of selecting the lag length only and then testing for cointegration. Two empirical applications, forecasting Brazilian inflation and growth rates of U.S. macroeconomic aggregates respectively, show the usefulness of the model-selection strategy proposed here. The gains in different measures of forecasting accuracy are substantial, especially for short horizons.

Relevance:

30.00%

Publisher:

Abstract:

We study the joint determination of the lag length, the dimension of the cointegrating space, and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. A Monte Carlo study explores the finite-sample performance of this procedure and evaluates the forecasting accuracy of models selected by this procedure. Two empirical applications confirm the usefulness of the model selection procedure proposed here for forecasting.
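The joint determination of lag length and rank described in these abstracts can be sketched as a single argmin over a (p, r) grid, scoring each candidate VECM with an information criterion. The log-likelihoods below are made-up illustrative inputs, and plain BIC stands in for the papers' data-dependent-penalty criteria:

```python
import math

def bic(loglik, n_params, T):
    """Schwarz criterion: -2 log-likelihood plus a log(T) penalty per parameter."""
    return -2.0 * loglik + n_params * math.log(T)

def n_params_vecm(n, p, r):
    """Free parameters of an n-variable VECM with p - 1 lagged-difference
    matrices and cointegrating rank r: n intercepts, n*n*(p-1) short-run
    coefficients, and 2*n*r - r*r free entries in Pi = alpha @ beta.T."""
    return n + n * n * (p - 1) + 2 * n * r - r * r

# Hypothetical maximized log-likelihoods over a (lag, rank) grid
# for a trivariate system with T = 120 observations.
T, n = 120, 3
logliks = {(1, 0): -450.0, (1, 1): -430.0, (1, 2): -428.5,
           (2, 0): -441.0, (2, 1): -405.0, (2, 2): -404.5}
scores = {pr: bic(ll, n_params_vecm(n, *pr), T) for pr, ll in logliks.items()}
best_p, best_r = min(scores, key=scores.get)  # joint argmin over lag and rank
```

Scoring all (p, r) pairs at once is the contrast with the common two-stage practice of fixing p first and only then testing for the cointegrating rank.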

Relevance:

30.00%

Publisher:

Abstract:

The present work studies the influence of macroeconomic factors on credit risk in auto loan installment operations. The study is based on 4,887 credit operations drawn from the Credit Risk Information System (SCR) maintained by the Brazilian Central Bank. Using survival analysis applied to interval-censored data, we obtain a model for the hazard function, and we propose a method for calculating the probability of default over a twelve-month period. Our results indicate a strong time dependence of the hazard function, captured by a polynomial approximation in all estimated models. The model with the best Akaike Information Criterion estimates a positive effect of 0.07% for males over the baseline hazard function, and of 0.011% for an increase of ten basis points in the annual interest rate of the operation; in turn, for each R$ 1,000.00 added to the installment, the hazard function falls by 0.28%, while the same amount added to the contracted value of the operation raises it by an estimated 0.0069%. Among the macroeconomic factors, we find statistically significant effects for the unemployment rate (-0.12%), the unemployment rate lagged one period (0.12%), the first difference of the industrial production index (-0.008%), the inflation rate lagged one period (-0.13%), and the exchange rate (-0.23%). We do not find statistically significant results for the other variables tested.
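The twelve-month default probability follows from an estimated hazard through the standard survival identity PD = 1 − ∏(1 − h_t). A sketch with hypothetical monthly hazards (the paper's interval-censored estimates are not reproduced here):

```python
import math

def pd_over_horizon(hazards):
    """Default probability over the horizon from per-period hazards
    h_t = P(default in month t | survival to month t):
    PD = 1 - prod_t (1 - h_t)."""
    survival = 1.0
    for h in hazards:
        survival *= 1.0 - h
    return 1.0 - survival

# Hypothetical monthly hazards (rising then flattening, mimicking the
# strong time dependence the abstract reports; not the paper's estimates).
monthly_hazards = [0.002, 0.004, 0.007, 0.010, 0.012, 0.013,
                   0.013, 0.012, 0.011, 0.010, 0.009, 0.008]
pd12 = pd_over_horizon(monthly_hazards)
```

The product form makes the covariate effects quoted above easy to interpret: anything that scales the monthly hazards up or down moves the twelve-month PD in the same direction.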

Relevance:

30.00%

Publisher:

Abstract:

This paper develops a general method for constructing similar tests based on the conditional distribution of nonpivotal statistics in a simultaneous equations model with normal errors and known reduced-form covariance matrix. The test based on the likelihood ratio statistic is particularly simple and has good power properties. When identification is strong, the power curve of this conditional likelihood ratio test is essentially equal to the power envelope for similar tests. Monte Carlo simulations also suggest that this test dominates the Anderson-Rubin test and the score test. Dropping the restrictive assumption of normally distributed disturbances with known covariance matrix, approximate conditional tests are found that behave well in small samples even when identification is weak.
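For the case of a single endogenous regressor, the likelihood ratio statistic underlying the conditional test can be written in terms of quadratic forms of the sufficient statistics S and T, as in the weak-instruments literature; the sketch below assumes that representation and uses made-up inputs:

```python
import math

def clr_stat(qs, qt, qst):
    """LR statistic for one endogenous regressor, in terms of the quadratic
    forms QS = S'S, QT = T'T, QST = S'T of the sufficient statistics S, T:
        LR = 0.5 * (QS - QT + sqrt((QS + QT)^2 - 4*(QS*QT - QST^2)))
    The conditional test then compares LR with a critical value computed
    from the null distribution of LR given the observed QT."""
    disc = (qs + qt) ** 2 - 4.0 * (qs * qt - qst ** 2)
    return 0.5 * (qs - qt + math.sqrt(disc))

# With S and T orthogonal (QST = 0) the statistic reduces to max(QS - QT, 0).
lr = clr_stat(5.0, 2.0, 0.0)
```

Conditioning the critical value on QT, rather than using a fixed cutoff, is what makes the test similar regardless of instrument strength.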

Relevance:

30.00%

Publisher:

Abstract:

Decision-making models need to reflect aspects of human psychology. With this goal, this work builds on the Sparse Distributed Memory (SDM), a psychologically and neuroscientifically plausible model of human memory published by Pentti Kanerva in 1988. Kanerva's model has a critical point: a memory item nearer than this point is quickly found, while items beyond the critical point are not. Kanerva computed this point for a special case with a select set of (fixed) parameters. In this work we extend the knowledge of this critical point through computer simulations and analyze the behavior of this "Critical Distance" under different scenarios: different dimensions; different numbers of items stored in memory; and different numbers of times the item is stored. We also derive a function that, when minimized, determines the value of the Critical Distance according to the state of the memory. A secondary goal of this work is to present the SDM in a simple and intuitive way, so that researchers from other fields can imagine how it may help them understand and solve their own problems.
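A minimal sketch of the SDM mechanism the abstract describes: random binary hard locations, counter updates at every location within an activation radius (in Hamming distance) of the write address, and majority read-out. The parameters below (dimension, number of locations, radius) are illustrative, not Kanerva's original configuration:

```python
import numpy as np

class SparseDistributedMemory:
    """Minimal SDM sketch: random binary hard locations, counters updated
    at every location within `radius` of the write address, majority read."""

    def __init__(self, n_locations, dim, radius, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _activated(self, address):
        dists = np.sum(self.addresses != address, axis=1)  # Hamming distances
        return dists <= self.radius

    def write(self, address, word):
        act = self._activated(address)
        self.counters[act] += 2 * word - 1  # +1 for bit 1, -1 for bit 0

    def read(self, address):
        act = self._activated(address)
        sums = self.counters[act].sum(axis=0)
        return (sums > 0).astype(int)       # majority vote per bit

dim = 64
sdm = SparseDistributedMemory(n_locations=500, dim=dim, radius=dim // 2 + 6)
rng = np.random.default_rng(1)
word = rng.integers(0, 2, size=dim)
sdm.write(word, word)        # autoassociative storage: address = data
recalled = sdm.read(word)
```

In this setting, a cue within the Critical Distance of a stored word converges to it under repeated read-and-reread, while cues beyond it drift away; mapping that boundary across dimensions, memory loads, and write repetitions is what the dissertation's simulations do.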