27 results for "Monte Carlo test"
in the Digital Repository of Fundação Getúlio Vargas - FGV
Abstract:
In this work we investigate the small-sample properties and the robustness of parameter estimates of DSGE models. We take the Smets and Wouters (2007) model as a baseline and evaluate the performance of two estimation procedures: the Simulated Method of Moments (SMM) and Maximum Likelihood (ML). We examine the empirical distribution of the parameter estimates and its implications for impulse-response and variance-decomposition analyses under correct specification and misspecification. Our results point to a poor performance of SMM and to some bias patterns in the impulse-response and variance-decomposition analyses based on ML estimates in the misspecification cases considered.
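A toy sketch of the kind of small-sample Monte Carlo comparison described above, with a simple AR(1) standing in for the Smets and Wouters (2007) model: each replication estimates the autoregressive parameter by conditional ML and by a grid-based simulated method of moments, and the bias and dispersion of both estimators are tabulated. All design choices here are illustrative assumptions, not the thesis's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, n, rng):
    """Simulate y_t = phi * y_{t-1} + e_t with standard normal shocks."""
    e = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t]
    return y

def ml_estimate(y):
    """Conditional ML (equivalently OLS) estimate of the AR(1) coefficient."""
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

def smm_estimate(y, rng, n_sim=3):
    """Toy simulated method of moments: pick the phi on a grid whose simulated
    lag-1 autocorrelation best matches the sample lag-1 autocorrelation."""
    target = np.corrcoef(y[:-1], y[1:])[0, 1]
    best_phi, best_dist = 0.0, np.inf
    for phi in np.linspace(0.0, 0.95, 96):
        sims = [simulate_ar1(phi, len(y), rng) for _ in range(n_sim)]
        moment = np.mean([np.corrcoef(s[:-1], s[1:])[0, 1] for s in sims])
        if (moment - target) ** 2 < best_dist:
            best_phi, best_dist = phi, (moment - target) ** 2
    return best_phi

true_phi, n_obs, n_rep = 0.7, 100, 100
ml = np.array([ml_estimate(simulate_ar1(true_phi, n_obs, rng)) for _ in range(n_rep)])
smm = np.array([smm_estimate(simulate_ar1(true_phi, n_obs, rng), rng) for _ in range(n_rep)])
print(f"ML : bias {ml.mean() - true_phi:+.3f}, std {ml.std():.3f}")
print(f"SMM: bias {smm.mean() - true_phi:+.3f}, std {smm.std():.3f}")
```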
Abstract:
In this work we analyze the use of Credit Suisse's CreditRisk+ methodology and its suitability for the Brazilian market, with the goal of computing the risk of a credit portfolio. Certain assumptions made in the formulation of the CreditRisk+ model do not hold for the Brazilian market, which is characterized, for example, by a high probability of default. We therefore develop a methodology to compute the loss distribution through Monte Carlo simulation, modifying some of the model's original assumptions in order to adapt it to our market. The use of simulations also yields more accurate results when portfolios contain a small number of contracts, and it eliminates possible convergence problems of the analytical method, even under the original model's assumptions. We also find that the computational time can be lower than that of the original methodology, especially for portfolios with a large number of obligors with distinct profiles allocated across several sectors of the economy. Given the restrictions above, we believe the proposed methodology is an alternative to the analytical form of the CreditRisk+ model. We present usage examples and results produced by these simulations. The central point of this work is to highlight the importance of using alternative credit-risk measurement methodologies that incorporate the particularities of the Brazilian market.
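A stylized sketch of the kind of Monte Carlo loss-distribution calculation described above: gamma-distributed sector factors scale each obligor's default probability, defaults are drawn, and losses are aggregated to a high quantile. The portfolio size, exposures, default probabilities and sector volatilities are illustrative assumptions, not the thesis's data or its exact adaptation of CreditRisk+.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical portfolio: exposures, annual default probabilities, sector labels.
n_obligors = 500
exposure = rng.uniform(10_000, 500_000, n_obligors)
pd_annual = rng.uniform(0.01, 0.10, n_obligors)      # high PDs, as in the Brazilian case
sector = rng.integers(0, 3, n_obligors)              # three economic sectors
sector_sigma = np.array([0.6, 0.8, 1.0])             # volatility of each sector factor

n_sims = 50_000
losses = np.empty(n_sims)
for i in range(n_sims):
    # CreditRisk+-style gamma-distributed sector factors with unit mean.
    factors = rng.gamma(shape=1 / sector_sigma**2, scale=sector_sigma**2)
    # Conditional default probabilities, capped at 1 (a departure from the
    # analytical model, which relies on the Poisson approximation).
    p_cond = np.minimum(pd_annual * factors[sector], 1.0)
    defaults = rng.random(n_obligors) < p_cond
    losses[i] = exposure[defaults].sum()

print(f"Expected loss : {losses.mean():,.0f}")
print(f"99.9% quantile: {np.quantile(losses, 0.999):,.0f}  (credit VaR)")
```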
Abstract:
This work aims to describe, evaluate, and compare the analytical and Monte Carlo simulation methodologies for computing the Value at Risk (VaR) of financial institutions and firms. To compare the advantages and disadvantages of each methodology, we carry out algebraic comparisons and perform several empirical tests with hypothetical institutions exhibiting different levels of leverage and balance-sheet composition and operating in different markets (we consider the markets for stocks, call options, and fixed-rate fixed-income securities).
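To make the comparison concrete, here is a minimal sketch contrasting the analytical (delta-normal) VaR with a Monte Carlo VaR for a hypothetical linear stock portfolio; the positions, return moments and confidence level are assumed for illustration only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical two-asset stock portfolio (values in R$) and daily return model.
positions = np.array([600_000.0, 400_000.0])
mu = np.array([0.0003, 0.0002])                     # daily expected returns
cov = np.array([[0.0004, 0.00012],                  # daily return covariance
                [0.00012, 0.00025]])
alpha = 0.99

# Analytical (delta-normal) VaR: portfolio P&L is normal for linear positions.
pnl_sigma = np.sqrt(positions @ cov @ positions)
var_analytical = norm.ppf(alpha) * pnl_sigma - positions @ mu

# Monte Carlo VaR: simulate joint returns and take the loss quantile.
returns = rng.multivariate_normal(mu, cov, size=100_000)
pnl = returns @ positions
var_mc = -np.quantile(pnl, 1 - alpha)

print(f"1-day 99% VaR, analytical : {var_analytical:,.0f}")
print(f"1-day 99% VaR, Monte Carlo: {var_mc:,.0f}")
```

For linear positions the two measures roughly coincide; the differences the abstract explores arise mainly for nonlinear positions such as the call options it mentions.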
Abstract:
Using vector autoregressive (VAR) models and Monte-Carlo simulation methods we investigate the potential gains for forecasting accuracy and estimation uncertainty of two commonly used restrictions arising from economic relationships. The first reduces the parameter space by imposing long-term restrictions on the behavior of economic variables as discussed by the literature on cointegration, and the second reduces the parameter space by imposing short-term restrictions as discussed by the literature on serial-correlation common features (SCCF). Our simulations cover three important issues on model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing lag length for cointegrated VARs with SCCF restrictions. Second, we provide a comparison of forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm in which short- and long-term restrictions interact to estimate the cointegrating and the cofeature spaces respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria choose inconsistent models too frequently, with too small a lag length. Criteria selecting lag and rank simultaneously have a superior performance in this case. Second, this translates into a superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy, reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even when we consider the estimation of long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models.
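A minimal sketch of the lag-selection part of such a simulation exercise: a bivariate VAR(2) is simulated and the Akaike, Hannan-Quinn and Schwarz criteria are computed by OLS for candidate lag lengths. This covers only the plain unrestricted VAR; the cointegration and SCCF restrictions studied in the paper are not imposed, and the coefficient matrices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_var(A_list, n, rng):
    """Simulate a k-dimensional VAR with coefficient matrices A_1..A_p."""
    k, p = A_list[0].shape[0], len(A_list)
    y = np.zeros((n + p, k))
    for t in range(p, n + p):
        y[t] = sum(A @ y[t - j - 1] for j, A in enumerate(A_list)) + rng.standard_normal(k)
    return y[p:]

def info_criteria(y, p):
    """AIC, HQ and BIC for a VAR(p) fitted by equation-by-equation OLS."""
    n, k = y.shape
    T = n - p
    X = np.hstack([np.ones((T, 1))] + [y[p - j - 1:n - j - 1] for j in range(p)])
    Y = y[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    sigma = resid.T @ resid / T
    logdet = np.linalg.slogdet(sigma)[1]
    m = k * (k * p + 1)                               # number of estimated coefficients
    return (logdet + 2 * m / T,                       # AIC
            logdet + 2 * m * np.log(np.log(T)) / T,   # Hannan-Quinn
            logdet + m * np.log(T) / T)               # BIC / Schwarz

A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.2]])
y = simulate_var([A1, A2], n=200, rng=rng)

crits = np.array([info_criteria(y, p) for p in range(1, 7)])
for name, col in zip(["AIC", "HQ", "BIC"], crits.T):
    print(f"{name}: selects p = {np.argmin(col) + 1}")
```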
Abstract:
This work addresses the applicability of Monte Carlo simulation to risk analysis and, consequently, to supporting the decision of whether or not to invest in a project. Methods of risk analysis and project selection are discussed, as well as the nature, advantages, and limitations of Monte Carlo simulation. Finally, the feasibility of this instrument is analyzed in light of the risk-analysis process of a Brazilian company.
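A minimal sketch of a Monte Carlo risk analysis supporting an investment decision: uncertain cash-flow drivers are drawn, the NPV distribution is built, and the probability of a negative NPV is reported. All project figures and distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical project: initial outlay plus five years of uncertain cash flows.
investment = 1_000_000.0
discount_rate = 0.12
n_sims = 100_000

# Cash-flow drivers modelled as random variables (all figures are illustrative).
base_cash_flow = rng.triangular(left=200_000, mode=300_000, right=450_000, size=(n_sims, 1))
growth = rng.normal(loc=0.03, scale=0.02, size=(n_sims, 1))
years = np.arange(1, 6)

cash_flows = base_cash_flow * (1 + growth) ** (years - 1)
npv = -investment + (cash_flows / (1 + discount_rate) ** years).sum(axis=1)

print(f"Mean NPV            : {npv.mean():,.0f}")
print(f"5th/95th percentiles: {np.percentile(npv, 5):,.0f} / {np.percentile(npv, 95):,.0f}")
print(f"P(NPV < 0)          : {(npv < 0).mean():.1%}")
```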
Abstract:
Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte-Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the "best" empirical model developed without common cycle restrictions need not nest the "best" model developed with those restrictions. This is due to possible differences in the lag-lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan-Quinn criterion performs best among model selection criteria in simultaneously selecting the lag-length and rank of vector autoregressions.
Abstract:
Despite the belief, supported by recent applied research, that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of these data "features." We use exhaustive Monte-Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive and error-correction models. First, we show that the "best" empirical model developed without common cycles restrictions need not nest the "best" model developed with those restrictions, due to the use of information criteria for choosing the lag order of the two alternative models. Second, we show that the costs of ignoring common-cyclical features in VAR analysis may be high in terms of forecasting accuracy and efficiency of estimates of variance decomposition coefficients. Although these costs are more pronounced when the lag order of VAR models is known, they are also non-trivial when it is selected using the conventional tools available to applied researchers. Third, we find that if the data have common-cyclical features and the researcher wants to use an information criterion to select the lag length, the Hannan-Quinn criterion is the most appropriate, since the Akaike and the Schwarz criteria have a tendency to over- and under-predict the lag length, respectively, in our simulations.
Abstract:
The sanitation situation in Brazil is alarming. Water and sewage services are adequately provided to only 59.4% and 39.7%, respectively, of the Brazilian population. To change this picture, an estimated R$ 304 billion in investments is needed. Part of this volume will have to come from the private sector, and structuring public-private partnerships (PPPs) is one way to achieve this goal. In these projects it is common for the public sector to offer guarantees to the private partner to ensure the viability of the venture. This work presents a model for valuing these guarantees, using as case studies the sewage PPPs of the Recife metropolitan region and of the municipality of Goiana. The results show the importance of this valuation, since, depending on the level of guarantee offered, the present value of the expected disbursements by the public sector ranged from zero to R$ 204 million.
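A simplified sketch of how such a guarantee can be valued by simulation, assuming a minimum-revenue guarantee and a lognormal revenue process: simulated revenue paths generate yearly shortfall payments, which are discounted and averaged. The guarantee design, revenue projection, volatility and discount rate are illustrative assumptions, not the parameters of the Recife or Goiana PPPs.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical minimum-revenue guarantee (all figures illustrative, in R$ million).
years = 20
expected_revenue = 60.0          # projected annual revenue of the concession
guarantee_level = 0.80           # public sector tops revenue up to 80% of the projection
volatility = 0.25                # annual revenue volatility
discount_rate = 0.10
n_sims = 100_000

# Revenue follows a lognormal (GBM-style) path centered on the projection.
shocks = rng.normal(-0.5 * volatility**2, volatility, size=(n_sims, years))
revenue = expected_revenue * np.exp(np.cumsum(shocks, axis=1))

# Guarantee payout each year: shortfall relative to the guaranteed floor.
floor = guarantee_level * expected_revenue
payout = np.maximum(floor - revenue, 0.0)

discount = (1 + discount_rate) ** -np.arange(1, years + 1)
pv_guarantee = (payout * discount).sum(axis=1)

print(f"Expected PV of guarantee disbursements: R$ {pv_guarantee.mean():.1f} million")
print(f"95th percentile                       : R$ {np.percentile(pv_guarantee, 95):.1f} million")
```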
Abstract:
This paper constructs a unit root test based on partially adaptive estimation, which is shown to be robust against non-Gaussian innovations. We show that the limiting distribution of the t-statistic is a convex combination of the standard normal and the DF distribution. Convergence to the DF distribution is obtained when the innovations are Gaussian, implying that the traditional ADF test is a special case of the proposed test. Monte Carlo experiments indicate that, if the innovations have heavy-tailed distributions or are contaminated by outliers, then the proposed test is more powerful than the traditional ADF test. Nominal interest rates (at different maturities) are shown to be stationary according to the robust test but not stationary according to the nonrobust ADF test. This result seems to suggest that the failure to reject the null of a unit root in nominal interest rates may be due to the use of estimation and hypothesis-testing procedures that do not account for the absence of Gaussianity in the data. Our results validate practical restrictions on the behavior of the nominal interest rate imposed by CCAPM, optimal monetary policy and option pricing models.
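A sketch of the Monte Carlo design behind such power comparisons, here limited to the standard Dickey-Fuller t-test under Gaussian versus heavy-tailed t(2) innovations; the partially adaptive estimator proposed in the paper is not implemented, and the sample size, alternative and critical value are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def df_tstat(y):
    """Dickey-Fuller t-statistic for rho = 0 in dy_t = rho * y_{t-1} + e_t (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = np.dot(ylag, dy) / np.dot(ylag, ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / np.dot(ylag, ylag))
    return rho / se

def simulate(phi, n, innov, rng):
    """AR(1) under the stationary alternative phi<1, Gaussian or t(2) innovations."""
    e = rng.standard_normal(n) if innov == "normal" else rng.standard_t(df=2, size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t]
    return y

crit = -1.95          # approximate 5% DF critical value (no constant)
n, reps = 100, 2000
for label, key in (("normal", "normal"), ("t(2) heavy-tailed", "t")):
    rej = np.mean([df_tstat(simulate(0.9, n, key, rng)) < crit for _ in range(reps)])
    print(f"DF power against phi=0.9 with {label:18s} innovations: {rej:.2f}")
```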
Abstract:
This paper develops a general method for constructing similar tests based on the conditional distribution of nonpivotal statistics in a simultaneous equations model with normal errors and known reduced-form covariance matrix. The test based on the likelihood ratio statistic is particularly simple and has good power properties. When identification is strong, the power curve of this conditional likelihood ratio test is essentially equal to the power envelope for similar tests. Monte Carlo simulations also suggest that this test dominates the Anderson-Rubin test and the score test. Dropping the restrictive assumption of normally distributed disturbances with known covariance matrix, approximate conditional tests are found that behave well in small samples even when identification is weak.
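As a concrete reference point, here is a sketch of the Anderson-Rubin statistic that the paper uses as a benchmark, evaluated at the true parameter in a hypothetical weak-instrument design; the conditional likelihood ratio test itself relies on the conditioning argument developed in the paper and is not reproduced here, and the design parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(9)

def anderson_rubin(y, Y, Z, beta0):
    """Anderson-Rubin statistic for H0: beta = beta0 in y = Y*beta + u with instruments Z."""
    n, k = Z.shape
    u0 = y - Y @ np.atleast_1d(beta0)
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)      # projection onto the instrument space
    num = u0 @ Pz @ u0 / k
    den = u0 @ (u0 - Pz @ u0) / (n - k)
    stat = num / den                            # ~ F(k, n-k) under H0 with normal errors
    return stat, 1 - f.cdf(stat, k, n - k)

# Hypothetical weak-instrument design: one endogenous regressor, four instruments.
n, k, pi = 200, 4, 0.1                          # small pi -> weak identification
Z = rng.standard_normal((n, k))
v = rng.standard_normal(n)
u = 0.8 * v + 0.6 * rng.standard_normal(n)      # endogeneity through correlated errors
Y = (Z @ np.full(k, pi) + v).reshape(-1, 1)
beta_true = 1.0
y = Y[:, 0] * beta_true + u

stat, pval = anderson_rubin(y, Y, Z, beta_true)
print(f"AR statistic at the true beta: {stat:.2f} (p-value {pval:.2f})")
```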
Abstract:
This Thesis is the result of my Master Degree studies at the Graduate School of Economics, Getúlio Vargas Foundation, from January 2004 to August 2006. I am indebted to my Thesis Advisor, Professor Luiz Renato Lima, who introduced me to the world of Econometrics. In this Thesis, we study the time-varying quantile process and develop two applications, which are presented here as Part I and Part II. Each of these parts was turned into a paper, and both papers have been submitted. Part I shows that asymmetric persistence induces ARCH effects, but the LM-ARCH test has power against it. On the other hand, the test for asymmetric dynamics proposed by Koenker and Xiao (2004) has correct size under the presence of ARCH errors. These results suggest that the LM-ARCH and the Koenker-Xiao tests may be used in applied research as complementary tools. In Part II, we compare four different Value-at-Risk (VaR) methodologies through Monte Carlo experiments. Our results indicate that the method based on quantile regression with ARCH effects dominates other methods that require distributional assumptions. In particular, we show that the non-robust methodologies have a higher probability of predicting VaRs with too many violations. We illustrate our findings with an empirical exercise in which we estimate VaR for returns of the São Paulo stock exchange index, IBOVESPA, during periods of market turmoil. Our results indicate that the robust method based on quantile regression presents the smallest number of violations.
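A minimal sketch of the violation-counting logic used to compare VaR methodologies, applied to two simple benchmarks (historical simulation and Gaussian VaR) on synthetic returns with volatility clustering; the quantile-regression-with-ARCH method studied in the thesis is not implemented here, and the return process and window length are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, binomtest

rng = np.random.default_rng(13)

# Synthetic daily returns with volatility clustering (a stand-in for IBOVESPA data).
n = 1500
sigma = np.empty(n); sigma[0] = 0.02
ret = np.empty(n); ret[0] = 0.0
for t in range(1, n):
    sigma[t] = np.sqrt(1e-5 + 0.1 * ret[t-1]**2 + 0.85 * sigma[t-1]**2)   # GARCH(1,1)-type
    ret[t] = sigma[t] * rng.standard_normal()

alpha, window = 0.01, 500
violations = {"historical": 0, "normal": 0}
for t in range(window, n):
    past = ret[t - window:t]
    var_hist = -np.quantile(past, alpha)                        # historical-simulation VaR
    var_norm = -(past.mean() + norm.ppf(alpha) * past.std())    # Gaussian VaR
    violations["historical"] += ret[t] < -var_hist
    violations["normal"] += ret[t] < -var_norm

expected = alpha * (n - window)
for name, v in violations.items():
    p = binomtest(int(v), n - window, alpha).pvalue             # unconditional coverage test
    print(f"{name:10s}: {v} violations (expected {expected:.0f}), coverage p-value {p:.2f}")
```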
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., it has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte-Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise based upon data from a well-known survey is also presented. Overall, theoretical and empirical results show promise for the feasible bias-corrected average forecast.
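A minimal sketch of the feasible bias-corrected average forecast under a simple additive-bias assumption: individual forecasts are averaged, the average bias is estimated on an in-sample window, and the correction is applied out of sample. The panel dimensions and error structure are illustrative assumptions, not the paper's asymptotic framework.

```python
import numpy as np

rng = np.random.default_rng(17)

# Synthetic panel: T target realizations forecast by N forecasters, each with its own bias.
T, N = 200, 40
target = rng.standard_normal(T) + 2.0
bias = rng.normal(0.5, 0.3, N)                       # individual (non-zero on average) biases
forecasts = target[:, None] + bias[None, :] + rng.normal(0, 0.8, (T, N))

in_sample = slice(0, 150)                            # estimate the average bias here
out_sample = slice(150, T)                           # evaluate forecasts here

avg_forecast = forecasts.mean(axis=1)
avg_bias = (forecasts[in_sample] - target[in_sample, None]).mean()   # feasible bias estimate

naive = avg_forecast[out_sample]
corrected = avg_forecast[out_sample] - avg_bias      # feasible bias-corrected average forecast

def mse(f):
    return np.mean((target[out_sample] - f) ** 2)

print(f"MSE, plain average forecast : {mse(naive):.3f}")
print(f"MSE, bias-corrected average : {mse(corrected):.3f}")
```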
Abstract:
This paper proposes unit root tests based on partially adaptive estimation. The proposed tests provide an intermediate class of inference procedures that are more efficient than the traditional OLS-based methods and simpler than unit root tests based on fully adaptive estimation using nonparametric methods. The limiting distribution of the proposed test is a combination of the standard normal and the traditional Dickey-Fuller (DF) distribution, and it includes the traditional ADF test as a special case when the Gaussian density is used. Taking into account the well-documented heavy-tail behavior of economic and financial data, we consider unit root tests coupled with a class of partially adaptive M-estimators based on the Student-t distribution, which includes the normal distribution as a limiting case. Monte Carlo experiments indicate that, in the presence of heavy-tailed distributions or innovations contaminated by outliers, the proposed test is more powerful than the traditional ADF test. We apply the proposed test to several macroeconomic time series that have heavy-tailed distributions. The unit root hypothesis is rejected for U.S. real GNP, supporting the literature on transitory shocks in output. However, evidence against unit roots is not found in the real exchange rate or the nominal interest rate even when heavy tails are taken into account.
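A sketch of the Student-t-based M-estimation idea that underlies the partially adaptive approach, applied to the Dickey-Fuller regression on a random walk contaminated by outliers; the degrees of freedom, the contamination scheme and the optimizer are illustrative choices, and the test's actual critical values are not computed here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

rng = np.random.default_rng(23)

def t_mestimate(y, df=3.0):
    """M-estimate of rho in dy_t = rho * y_{t-1} + e_t using a Student-t criterion."""
    dy, ylag = np.diff(y), y[:-1]

    def negloglik(params):
        rho, log_scale = params
        resid = dy - rho * ylag
        return -student_t.logpdf(resid, df, scale=np.exp(log_scale)).sum()

    res = minimize(negloglik, x0=[0.0, np.log(dy.std())], method="Nelder-Mead")
    return res.x[0]

# Random walk contaminated by occasional large (outlier) shocks.
n = 300
e = rng.standard_normal(n)
e[rng.random(n) < 0.05] *= 10.0                      # 5% of shocks are outliers
y = np.cumsum(e)

dy, ylag = np.diff(y), y[:-1]
rho_ols = np.dot(ylag, dy) / np.dot(ylag, ylag)      # Dickey-Fuller / OLS estimate
rho_t = t_mestimate(y)

print(f"OLS estimate of rho  : {rho_ols:+.4f}")
print(f"Student-t M-estimate : {rho_t:+.4f}")
```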
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it delivers a zero-limiting mean-squared error if the number of forecasts and the number of post-sample time periods are sufficiently large. We also develop a zero-mean test for the average bias. Monte-Carlo simulations are conducted to evaluate the performance of this new technique in finite samples. An empirical exercise, based upon data from well-known surveys, is also presented. Overall, these results show promise for the bias-corrected average forecast.
Abstract:
While it is recognized that output fluctuations are highly persistent over a certain range, less persistent results are also found around very long horizons (Cochrane, 1988), indicating the existence of local or temporary persistence. In this paper, we study time series with local persistence. A test for stationarity against a locally persistent alternative is proposed. Asymptotic distributions of the test statistic are provided under both the null and the alternative hypothesis of local persistence. A Monte Carlo experiment is conducted to study the power and size of the test. An empirical application reveals that many US real economic variables may exhibit local persistence.