28 results for Monte-carlo Simulations


Relevance: 100.00%

Abstract:

Convex combinations of long memory estimates using the same data observed at different sampling rates can decrease the standard deviation of the estimates, at the cost of inducing a slight bias. The convex combination of such estimates requires a preliminary correction for the bias observed at lower sampling rates, reported by Souza and Smith (2002). Through Monte Carlo simulations, we investigate the bias and the standard deviation of the combined estimates, as well as the root mean squared error (RMSE), which takes both into account. Compared with the standard methods, the combined versions achieve a lower RMSE for the two semi-parametric estimators under study (by about 30% on average for ARFIMA(0,d,0) series).
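
The mechanics of the bias/variance trade-off described above can be illustrated with a stylized Monte Carlo. The sketch below assumes two hypothetical long-memory estimators with illustrative means, standard deviations, and correlation (none of these numbers come from the paper) and traces bias, standard deviation, and RMSE across convex weights:

```python
import numpy as np

rng = np.random.default_rng(0)
d_true, n_rep = 0.3, 10_000

# Illustrative assumptions: estimator A (full sampling rate) is unbiased;
# estimator B (lower rate, after a Souza-Smith-type bias correction) keeps a
# small residual bias; both use the same data, hence the positive correlation.
sd_a, sd_b, corr, bias_b = 0.08, 0.09, 0.5, 0.01
cov = np.array([[sd_a**2, corr * sd_a * sd_b],
                [corr * sd_a * sd_b, sd_b**2]])
draws = rng.multivariate_normal([d_true, d_true + bias_b], cov, size=n_rep)

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    comb = w * draws[:, 0] + (1 - w) * draws[:, 1]    # convex combination
    bias = comb.mean() - d_true
    rmse = np.sqrt(np.mean((comb - d_true) ** 2))
    print(f"w={w:.2f}  bias={bias:+.4f}  sd={comb.std():.4f}  rmse={rmse:.4f}")
```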

Relevance: 100.00%

Abstract:

This paper develops nonparametric tests of independence between two stationary stochastic processes. The testing strategy boils down to gauging the closeness between the joint and the product of the marginal stationary densities. For that purpose, I take advantage of a generalized entropic measure so as to build a class of nonparametric tests of independence. Asymptotic normality and local power are derived using the functional delta method for kernels, whereas finite sample properties are investigated through Monte Carlo simulations.
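
As a rough illustration of the testing strategy, the sketch below compares a kernel estimate of the joint density with the product of the marginal kernel estimates through a plug-in Hellinger distance, one member of the generalized entropy family. It uses i.i.d. draws and a naive permutation null, so it ignores the serial-dependence issues the paper treats formally:

```python
import numpy as np
from scipy.stats import gaussian_kde

def hellinger_stat(x, y):
    """Plug-in squared Hellinger distance between the joint density and the
    product of the marginals, evaluated at the sample points."""
    joint = gaussian_kde(np.vstack([x, y]))(np.vstack([x, y]))
    prod = gaussian_kde(x)(x) * gaussian_kde(y)(y)
    return 1.0 - np.mean(np.sqrt(prod / joint))

rng = np.random.default_rng(1)
x = rng.standard_normal(300)
y = 0.5 * x + rng.standard_normal(300)        # dependent by construction

stat = hellinger_stat(x, y)
# naive permutation null: shuffling y destroys any dependence on x
null = np.array([hellinger_stat(x, rng.permutation(y)) for _ in range(199)])
print("statistic:", round(stat, 4), "p-value:", (null >= stat).mean())
```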

Relevance: 100.00%

Abstract:

This paper reinterprets results of Ohanissian et al. (2003) to show the asymptotic equivalence of temporally aggregating series and using less bandwidth in estimating long memory by Geweke and Porter-Hudak’s (1983) estimator, provided that the same number of periodogram ordinates is used in both cases. This equivalence is in the sense that their joint distribution is asymptotically normal with common mean and variance and unit correlation. Furthermore, I prove that the same applies to the estimator of Robinson (1995). Monte Carlo simulations show that this asymptotic equivalence is a good approximation in finite samples. Moreover, a real example with the daily US Dollar/French Franc exchange rate series is provided.
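
A minimal sketch of the comparison is below: it generates fractionally integrated noise from the truncated MA(∞) expansion, applies the Geweke and Porter-Hudak (1983) log-periodogram regression to both the original and a temporally aggregated series, and keeps the number of periodogram ordinates fixed in both cases. Bandwidth, aggregation factor, and sample size are illustrative choices, not the paper's:

```python
import numpy as np

def arfima0d0(n, d, rng):
    # MA(inf) weights psi_k = Gamma(k+d) / (Gamma(d) Gamma(k+1)), truncated at n
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(2 * n)
    return np.convolve(eps, psi)[n:2 * n]

def gph(x, m):
    # log-periodogram regression on the first m Fourier frequencies
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    reg = -np.log(4 * np.sin(lam / 2) ** 2)
    reg -= reg.mean()
    return reg @ (np.log(I) - np.log(I).mean()) / (reg @ reg)

rng = np.random.default_rng(3)
x = arfima0d0(4096, d=0.3, rng=rng)
agg = x.reshape(-1, 4).sum(axis=1)   # temporal aggregation by a factor of 4

m = 64                               # same number of ordinates in both cases
print("original series  :", gph(x, m))
print("aggregated series:", gph(agg, m))
```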

Relevance: 100.00%

Abstract:

We study the joint determination of the lag length, the dimension of the cointegrating space, and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria with data-dependent penalties as well as the traditional ones. We suggest a new two-step model selection procedure, a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank using our proposed procedure, relative to an unrestricted VAR or to a cointegrated VAR estimated by the commonly used procedure of selecting the lag length only and then testing for cointegration. Two empirical applications, forecasting Brazilian inflation and the growth rates of U.S. macroeconomic aggregates, show the usefulness of the model-selection strategy proposed here. The gains in different measures of forecasting accuracy are substantial, especially for short horizons.
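
The sketch below illustrates only the lag-selection leg of such a procedure: it fits a VAR(p) by OLS for each candidate lag and computes a BIC-type criterion. The joint determination of the cointegrating rank and the rank of the short-run matrix, which is the paper's contribution, is omitted here:

```python
import numpy as np

def var_bic(y, p):
    """BIC-type criterion for a VAR(p) fitted by OLS; y is (T, K)."""
    T, K = y.shape
    Y = y[p:]
    lags = [y[p - i:-i] for i in range(1, p + 1)]          # lagged regressors
    X = np.hstack([np.ones((T - p, 1))] + lags)
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sigma = (Y - X @ B).T @ (Y - X @ B) / (T - p)
    return np.log(np.linalg.det(sigma)) + B.size * np.log(T - p) / (T - p)

rng = np.random.default_rng(4)
y = np.zeros((500, 2))
for t in range(1, 500):                                     # a VAR(1) process
    y[t] = 0.5 * y[t - 1] + rng.standard_normal(2)

print(min(range(1, 6), key=lambda p: var_bic(y, p)), "lag(s) selected by BIC")
```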

Relevance: 100.00%

Abstract:

Leverage in hedge funds has worried investors and academics in recent years. Recent examples of such strategies proved advantageous in periods of low economic uncertainty, but disastrous in times of crisis. In quantitative finance, the goal has been to find the level of leverage that optimizes the return of an investment given the risk taken. In the literature, studies have been more qualitative than quantitative, and little use has been made of computational methods to find a solution. One way to assess whether one leverage strategy earns higher gains than another is to define an objective function relating risk and return for each strategy, identify the constraints of the problem, and solve it numerically through Monte Carlo simulations. This dissertation adopts this approach to treat investment in a long-short strategy within an equity investment fund under different scenarios: different forms of leverage, stock-price dynamics, and levels of correlation between those prices. We simulate the dynamics of the invested capital as a function of changes in stock prices over time, taking into account credit-guarantee (margin) criteria, the possibility of buying and selling stocks during the investment period, and the investor's risk profile. Finally, we study the distribution of the investment return for different leverage levels and quantify which of these levels is most advantageous for the investment strategy given the risk constraints.
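
A stylized rendering of the simulation exercise is sketched below: two correlated geometric Brownian price paths, a dollar-neutral long-short position scaled by a leverage factor, and a crude margin-call threshold. All parameter values are illustrative, not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_steps, dt = 5_000, 252, 1 / 252
sigma, rho = 0.30, 0.6           # annual volatility and cross-stock correlation

# correlated daily shocks for the long leg and the short leg
z = rng.standard_normal((n_paths, n_steps, 2))
z[..., 1] = rho * z[..., 0] + np.sqrt(1 - rho**2) * z[..., 1]

for lev in (1.0, 2.0, 4.0):
    equity = np.ones(n_paths)
    alive = np.ones(n_paths, dtype=bool)   # paths not yet stopped out
    for t in range(n_steps):
        # daily simple returns of the two (driftless) GBM legs
        ret = np.expm1(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z[:, t, :])
        pnl = lev * equity * (ret[:, 0] - ret[:, 1])   # dollar-neutral position
        equity = np.where(alive, equity + pnl, equity)
        alive &= equity > 0.25                         # stylized margin call
    print(f"lev={lev}: mean={equity.mean():.3f}, "
          f"5% quantile={np.quantile(equity, 0.05):.3f}, "
          f"stopped out={1 - alive.mean():.1%}")
```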

Relevance: 100.00%

Abstract:

This work reviews the main theoretical aspects of applying Real Options to the valuation of investment projects and analyzes, under this methodology, a real case of a project to invest in the construction of a natural gas liquefaction plant. The case study considers a Market Switch Option, which evaluates the possibility of placing spot LNG cargoes in different international markets, and a Product Switch Option, arising from the managerial flexibility not to liquefy the natural gas, forgoing LNG sales in the international market in order to sell dry natural gas in the domestic market. To value the Real Options, the historical series of natural gas prices was used to verify that Geometric Brownian Motion is not rejected, and Monte Carlo simulations of the risk-neutral stochastic price process were employed. The Market Switch Option more than doubled the value of the project under study, with its value decreasing as the correlation between prices increases. The Product Switch Option, by contrast, is less relevant, but it can also reach significant values as its volatility grows. Combining the two options simultaneously, we find that they are not directly additive and that the effect of increasing price correlation on the Product Switch Option is the opposite of its effect on the Market Switch Option: the derivative gains value with higher correlation, even though the total value of the combined options decreases.
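
The valuation logic can be sketched as follows: two correlated geometric Brownian price paths (drift set to the risk-free rate purely for simplicity, ignoring convenience yields), with the product-switch flexibility valued as the difference between the flexible and the always-liquefy cash flows. All price levels, costs, and rates are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n_paths, n_years = 50_000, 10
r, sig_i, sig_d, rho = 0.08, 0.25, 0.20, 0.3
p_int0, p_dom0, liq_cost = 8.0, 5.0, 2.0   # illustrative $/MMBtu levels

z1 = rng.standard_normal((n_paths, n_years))
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal((n_paths, n_years))

# risk-neutral GBM paths (annual steps; drift r for simplicity)
s_int = p_int0 * np.exp(np.cumsum((r - 0.5 * sig_i**2) + sig_i * z1, axis=1))
s_dom = p_dom0 * np.exp(np.cumsum((r - 0.5 * sig_d**2) + sig_d * z2, axis=1))

disc = np.exp(-r * np.arange(1, n_years + 1))
rigid = (disc * (s_int - liq_cost)).sum(axis=1).mean()       # always liquefy
flex = (disc * np.maximum(s_int - liq_cost, s_dom)).sum(axis=1).mean()
print(f"inflexible NPV: {rigid:.2f}  with switch option: {flex:.2f}  "
      f"option value: {flex - rigid:.2f}")
```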

Relevance: 100.00%

Abstract:

This paper deals with the estimation and testing of conditional duration models by looking at the density and baseline hazard rate functions. More precisely, we focus on the distance between the parametric density (or hazard rate) function implied by the duration process and its non-parametric estimate. Asymptotic justification is derived using the functional delta method for fixed and gamma kernels, whereas finite sample properties are investigated through Monte Carlo simulations. Finally, we show the practical usefulness of such testing procedures by carrying out an empirical assessment of whether autoregressive conditional duration models are appropriate tools for modelling price durations of stocks traded at the New York Stock Exchange.
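
A toy version of the ingredients is sketched below: it simulates an exponential ACD(1,1) process, then compares an ordinary Gaussian kernel density of (crudely standardized) durations with the exponential density the parametric model implies. The paper's gamma kernels, which handle the boundary at zero, are not used here:

```python
import numpy as np
from scipy.stats import gaussian_kde

def simulate_acd(n, omega=0.1, alpha=0.1, beta=0.8, rng=None):
    """Exponential ACD(1,1): x_i = psi_i*eps_i, psi_i = omega + alpha*x_{i-1} + beta*psi_{i-1}."""
    rng = rng or np.random.default_rng()
    x = np.empty(n)
    psi = x_prev = omega / (1 - alpha - beta)   # start at the unconditional mean
    for i in range(n):
        psi = omega + alpha * x_prev + beta * psi
        x[i] = x_prev = psi * rng.exponential()
    return x

x = simulate_acd(5_000, rng=np.random.default_rng(7))
resid = x / x.mean()      # crude stand-in for x_i / psi_i after real estimation
grid = np.linspace(0.05, 5.0, 200)
kde = gaussian_kde(resid)(grid)
# distance between the kernel density and the implied exponential density
print("L2 distance:", np.mean((kde - np.exp(-grid)) ** 2))
```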

Relevance: 100.00%

Abstract:

This paper develops a general method for constructing similar tests based on the conditional distribution of nonpivotal statistics in a simultaneous equations model with normal errors and known reduced-form covariance matrix. The test based on the likelihood ratio statistic is particularly simple and has good power properties. When identification is strong, the power curve of this conditional likelihood ratio test is essentially equal to the power envelope for similar tests. Monte Carlo simulations also suggest that this test dominates the Anderson-Rubin test and the score test. Dropping the restrictive assumption of normally distributed disturbances with known covariance matrix, approximate conditional tests are found that behave well in small samples even when identification is weak.
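
For concreteness, here is the Anderson-Rubin statistic, one of the benchmark tests the conditional likelihood ratio test is compared against, in a weak-instrument design; the conditional test itself requires Moreira's conditioning argument and is not reproduced here. The design constants are illustrative:

```python
import numpy as np
from scipy.stats import f

def anderson_rubin(y, x, Z, beta0):
    """AR statistic for H0: beta = beta0 in y = x*beta + u with instruments Z."""
    n, k = Z.shape
    e = y - x * beta0
    Pe = Z @ np.linalg.solve(Z.T @ Z, Z.T @ e)       # projection on the instruments
    ar = (Pe @ Pe / k) / ((e - Pe) @ (e - Pe) / (n - k))
    return ar, 1 - f.cdf(ar, k, n - k)               # F(k, n-k) under H0

rng = np.random.default_rng(8)
n, k, beta = 200, 4, 1.0
Z = rng.standard_normal((n, k))
v = rng.standard_normal(n)
u = 0.8 * v + 0.6 * rng.standard_normal(n)           # endogeneity
x = Z @ np.full(k, 0.1) + v                          # weakly identified
y = beta * x + u
print(anderson_rubin(y, x, Z, beta0=1.0))
```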

Relevance: 100.00%

Abstract:

This work analyzes, from the perspective of debt sustainability, the effects of maintaining a high level of international reserves together with a large stock of public debt. We search for the optimal level of reserves for Brazil using a risk-management tool based on Monte Carlo simulations. By considering the stochastic variables that affect the debt-accumulation equation, and understanding the relationships among them, we can study the stochastic properties of the debt dynamics. Likewise, we can analyze the fiscal impact of a given level of reserves over time and check which paths prove sustainable. From the debt-sustainability standpoint, the choice that yields the best net debt-to-GDP ratio for Brazil is the one that uses the maximum amount of international reserves to reduce domestic indebtedness. However, since there are aspects not captured by this analysis, such as the benefits of reserves in preventing crises and in serving as collateral for external investment, we suggest that reserves should not exceed the levels recognized in the international literature as adequate for these purposes. The final conclusion of this study is that international reserves work as an instrument of protection for the country when the level of debt and its cost are not as substantial as they currently are in Brazil.
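
The risk-management exercise can be caricatured as follows: debt/GDP evolves as b_{t+1} = b_t (1 + r_t)/(1 + g_t) - pb, with stochastic interest and growth rates, and holding reserves rather than retiring debt adds a carry cost. All calibration numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)
n_paths, n_years = 10_000, 10
b0, res = 0.60, 0.15          # net debt/GDP and reserves/GDP (illustrative)
pb = 0.02                     # primary surplus/GDP
spread = 0.04                 # domestic rate minus return earned on reserves

for used in (0.0, res):       # keep reserves vs. use them all to cut debt
    b = np.full(n_paths, b0 - used)
    carry = res - used        # reserves still held, paying the spread
    for t in range(n_years):
        r = 0.08 + 0.02 * rng.standard_normal(n_paths)   # stochastic real rate
        g = 0.025 + 0.02 * rng.standard_normal(n_paths)  # stochastic growth
        b = b * (1 + r) / (1 + g) - pb + carry * spread
    print(f"reserves used={used:.2f}: median debt/GDP={np.median(b):.3f}, "
          f"P(b > 0.8)={np.mean(b > 0.8):.3f}")
```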

Relevance: 100.00%

Abstract:

When estimating policy parameters, also known as treatment effects, the assignment-to-treatment mechanism almost always causes endogeneity and thus biases many estimates of these policy parameters. Additionally, heterogeneity in program impacts is more likely to be the norm than the exception for most social programs. In situations where these issues are present, estimation of the Marginal Treatment Effect (MTE) parameter makes use of an instrument to avoid assignment bias and simultaneously to account for heterogeneous effects across individuals. Although this parameter is point identified in the literature, the assumptions required for identification may be strong. Given that, we use weaker assumptions in order to partially identify the MTE, i.e., to establish a methodology for MTE bounds estimation, implementing it computationally and showing results from Monte Carlo simulations. The partial identification we perform requires the MTE to be a monotone function of the propensity score, which is a reasonable assumption in several economic settings, and the simulation results show that informative bounds can be obtained even in restricted cases where point identification is lost. Additionally, in situations where the estimated bounds are not informative and traditional point identification is lost, we suggest a more generic method to point estimate the MTE using the Moore-Penrose pseudo-inverse matrix, achieving better results than traditional methods.
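
The Moore-Penrose idea can be seen in a toy discretization: with an instrument generating only a few propensity-score values, the linear system linking the conditional means of the outcome to the MTE on a fine grid is underdetermined, and the pseudo-inverse returns the minimum-norm solution that still reproduces the observed means:

```python
import numpy as np

# Discretized local-IV relation: E[Y | P = p_j] shifts by the integral of the
# MTE up to p_j, here a sum over a fine grid of u values.
u = np.linspace(0.005, 0.995, 100)
du = u[1] - u[0]
p_support = np.array([0.2, 0.4, 0.6, 0.8])   # discrete instrument: few p values

A = (u[None, :] <= p_support[:, None]) * du  # 4 equations, 100 unknowns
mte_true = 1.0 - u                           # monotone MTE, invented for illustration
Ey = A @ mte_true                            # the "observed" conditional means

mte_hat = np.linalg.pinv(A) @ Ey             # minimum-norm (Moore-Penrose) solution
print("reproduces the observed means:", np.allclose(A @ mte_hat, Ey))
print("but differs from the true MTE by up to", np.abs(mte_hat - mte_true).max())
```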

Relevance: 100.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
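
The sketch below reproduces the mechanics of the problem in a placebo design: group × time aggregate errors whose variance falls with group size, one "treated" group, and an unadjusted permutation test across control groups, which is exactly the kind of procedure the paper shows can over- or under-reject depending on the treated group's size. The variance law is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(10)
G, T, post = 50, 10, 5                       # groups, periods, first post period
sizes = rng.integers(30, 3000, size=G)       # very unequal group sizes

# group x time aggregate errors: variance falls with group size (assumed law)
y = rng.standard_normal((G, T)) * np.sqrt(1.0 + 100.0 / sizes[:, None])

# placebo: group 0 "treated", true effect zero
did = y[0, post:].mean() - y[0, :post].mean()

# unadjusted permutation test: the same contrast across control groups
placebos = np.array([y[g, post:].mean() - y[g, :post].mean() for g in range(1, G)])
print("DID estimate:", round(did, 3),
      "permutation p-value:", (np.abs(placebos) >= abs(did)).mean())
```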

Relevance: 100.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
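
A stylized version of the correction described above, not the paper's exact estimator: if the heteroskedasticity is generated by group sizes, the variance of each group's pre/post contrast is linear in 1/n_g, so it can be fitted on the control groups and used to normalize the placebo statistics before the permutation comparison:

```python
import numpy as np

rng = np.random.default_rng(11)
G, T, post = 50, 10, 5
sizes = rng.integers(30, 3000, size=G)

# errors with Var = A + B/n_g: a group-level shock plus sampling noise
A_true, B_true = 0.2, 100.0
y = rng.standard_normal((G, T)) * np.sqrt(A_true + B_true / sizes[:, None])
w = y[:, post:].mean(axis=1) - y[:, :post].mean(axis=1)   # pre/post contrasts

# model the heteroskedasticity: Var(w_g) = a + b/n_g, fitted on control groups
X = np.column_stack([np.ones(G - 1), 1.0 / sizes[1:]])
coef, *_ = np.linalg.lstsq(X, w[1:] ** 2, rcond=None)
var_hat = np.clip(np.column_stack([np.ones(G), 1.0 / sizes]) @ coef, 1e-8, None)

# permutation comparison of variance-normalized statistics (group 0 "treated")
t_stats = w / np.sqrt(var_hat)
print("corrected p-value:", (np.abs(t_stats[1:]) >= abs(t_stats[0])).mean())
```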

Relevance: 100.00%

Abstract:

The synthetic control (SC) method has recently been proposed as an alternative method to estimate treatment effects in comparative case studies. Abadie et al. [2010] and Abadie et al. [2015] argue that one of the advantages of the SC method is that it imposes a data-driven process to select the comparison units, providing more transparency and less discretionary power to the researcher. However, an important limitation of the SC method is that it does not provide clear guidance on the choice of predictor variables used to estimate the SC weights. We show that this lack of specific guidance provides significant opportunities for the researcher to search for specifications with statistically significant results, undermining one of the main advantages of the method. Considering six alternative specifications commonly used in SC applications, we calculate in Monte Carlo simulations the probability of finding a statistically significant result at 5% in at least one specification. We find that this probability can be as high as 13% (23% for a 10% significance test) when there are 12 pre-intervention periods and decays slowly with the number of pre-intervention periods. With 230 pre-intervention periods, this probability is still around 10% (18% for a 10% significance test). We show that the specification that uses the average pre-treatment outcome values to estimate the weights performed particularly badly in our simulations. However, the specification-searching problem remains relevant even when we do not consider this specification. We also show that this specification-searching problem is relevant in simulations with real datasets looking at placebo interventions in the Current Population Survey (CPS). In order to mitigate this problem, we propose a criterion to select among different SC specifications based on the prediction error of each specification in placebo estimations.
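
A minimal sketch of the two ingredients is below: synthetic control weights obtained by constrained least squares, and a comparison of two of the specifications the paper considers (all pre-treatment outcomes versus their average) by pre-treatment prediction error, in the spirit of the selection criterion proposed. The data-generating process is invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(X1, X0):
    """Synthetic control weights: w >= 0, sum(w) = 1, minimizing ||X1 - X0 @ w||."""
    J = X0.shape[1]
    res = minimize(lambda w: np.sum((X1 - X0 @ w) ** 2), np.full(J, 1 / J),
                   bounds=[(0, 1)] * J,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    return res.x

rng = np.random.default_rng(12)
T0, J = 20, 15                                       # pre-periods, control units
y0 = rng.standard_normal((T0, J)).cumsum(axis=0)     # control outcomes
y1 = y0[:, :3].mean(axis=1) + 0.1 * rng.standard_normal(T0)   # treated unit

# two of the specifications compared in the paper: all pre-treatment outcomes
# as predictors vs. only their pre-treatment average
specs = {"all pre-treatment outcomes": (y1, y0),
         "pre-treatment average only": (y1.mean(keepdims=True),
                                        y0.mean(axis=0, keepdims=True))}
for name, (X1, X0) in specs.items():
    w = sc_weights(X1, X0)
    rmspe = np.sqrt(np.mean((y1 - y0 @ w) ** 2))     # pre-treatment fit
    print(f"{name}: pre-treatment RMSPE = {rmspe:.3f}")
```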