26 results for Sequential Monte Carlo methods


Relevance: 100.00%

Abstract:

In this paper, we compare four different Value-at-Risk (VaR) methodologies through Monte Carlo experiments. Our results indicate that the method based on quantile regression with an ARCH effect dominates the other methods, which require distributional assumptions. In particular, we show that the non-robust methodologies are more likely to produce VaR forecasts with too many violations. We illustrate our findings with an empirical exercise in which we estimate the VaR of returns on the São Paulo stock exchange index, IBOVESPA, during periods of market turmoil. Our results indicate that the robust method based on quantile regression presents the smallest number of violations.
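The violation-counting logic behind this comparison can be sketched in a few lines. Nothing below reproduces the paper's quantile-regression-with-ARCH estimator; `normal_var` and `quantile_var` are simplified stand-ins (a parametric Gaussian VaR versus a distribution-free empirical-quantile VaR) used only to show how violation rates are backtested:

```python
import numpy as np
from statistics import NormalDist

def normal_var(returns, alpha=0.01):
    """Parametric VaR under a (possibly wrong) normality assumption."""
    z = NormalDist().inv_cdf(alpha)
    return returns.mean() + z * returns.std(ddof=1)

def quantile_var(returns, alpha=0.01):
    """Distribution-free VaR from the empirical alpha-quantile."""
    return np.quantile(returns, alpha)

def violation_rate(returns, var_level):
    """Share of observations falling below the VaR forecast."""
    return float(np.mean(returns < var_level))

# Heavy-tailed (Student-t) returns to mimic turmoil periods
rng = np.random.default_rng(0)
rets = rng.standard_t(df=3, size=5000) * 0.01

v_norm = violation_rate(rets, normal_var(rets))
v_quant = violation_rate(rets, quantile_var(rets))
print(f"normal VaR violation rate: {v_norm:.4f}")
print(f"quantile VaR violation rate: {v_quant:.4f}")
```

With heavy tails, the Gaussian VaR misstates the 1% quantile, so its violation rate drifts away from the nominal level, while the empirical quantile matches it in-sample by construction.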

Relevance: 100.00%

Abstract:

In this paper, we propose a novel approach to the econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics, we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., it has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise based upon data from a well-known survey is also presented. Overall, these results show promise for the feasible bias-corrected average forecast.
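A toy version conveys the idea of the bias-corrected average forecast. This sketch assumes each forecaster has a constant additive bias estimated in-sample; the paper's feasible estimator and its sequential asymptotics are far more general:

```python
import numpy as np

def bias_corrected_average_forecast(forecasts, outcomes):
    """Average of individual forecasts after removing each forecaster's
    estimated additive bias (a toy version of the feasible estimator).

    forecasts: (N, T) array of N forecasters over T periods
    outcomes:  (T,) realised values used to estimate each bias
    """
    bias = (forecasts - outcomes).mean(axis=1, keepdims=True)
    return (forecasts - bias).mean(axis=0)

rng = np.random.default_rng(1)
N, T = 10, 200
y = rng.normal(size=T)
# Forecasters share a common upward bias plus idiosyncratic noise
individual_bias = 1.0 + rng.normal(scale=0.5, size=(N, 1))
f = y + individual_bias + rng.normal(scale=0.5, size=(N, T))

plain_avg = f.mean(axis=0)
corrected = bias_corrected_average_forecast(f, y)
mse_plain = float(np.mean((plain_avg - y) ** 2))
mse_corrected = float(np.mean((corrected - y) ** 2))
```

Averaging alone cancels idiosyncratic noise but not a common bias; removing each estimated bias first handles both, which is the intuition behind the forecast-combination puzzle discussed in the paper.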

Relevance: 100.00%

Abstract:

The goals of this work were (i) to review numerical methods for derivative pricing; and (ii) to compare these methods under the assumption that market prices match those given by the Black-Scholes formula for pricing European-style options. We applied the methods to price call options on Telebrás shares. Accuracy and computational cost were the criteria used to compare the following models: binomial, Monte Carlo, and finite differences. The results indicate that the binomial model offers good accuracy at low cost, followed by Monte Carlo and finite differences. Nevertheless, the Monte Carlo method can be used when the derivative depends on more than two underlying assets, and the finite-difference method is recommended when one can obtain a partial differential equation whose solution is the value of the derivative.
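The comparison described above can be sketched for a single European call. The parameter values are illustrative, not taken from the Telebrás exercise; the binomial and Monte Carlo prices should converge to the Black-Scholes benchmark:

```python
import math
import random

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes benchmark for a European call."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def binomial_call(S, K, T, r, sigma, n=500):
    """Cox-Ross-Rubinstein binomial tree for the same call."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = math.exp(-r * dt)
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for step in range(n, 0, -1):           # backward induction
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]

def monte_carlo_call(S, K, T, r, sigma, n_paths=100_000, seed=0):
    """Plain Monte Carlo: simulate terminal prices under GBM."""
    random.seed(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    payoff = sum(max(S * math.exp(drift + vol * random.gauss(0.0, 1.0)) - K, 0.0)
                 for _ in range(n_paths))
    return math.exp(-r * T) * payoff / n_paths

bs = black_scholes_call(100, 100, 1.0, 0.05, 0.2)
tree = binomial_call(100, 100, 1.0, 0.05, 0.2)
mc = monte_carlo_call(100, 100, 1.0, 0.05, 0.2)
```

The tree converges deterministically while the Monte Carlo estimate carries sampling noise of order 1/sqrt(n_paths), which is consistent with the cost/accuracy ranking reported above.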

Relevance: 100.00%

Abstract:

Convex combinations of long memory estimates using the same data observed at different sampling rates can decrease the standard deviation of the estimates, at the cost of inducing a slight bias. The convex combination of such estimates requires a preliminary correction for the bias observed at lower sampling rates, reported by Souza and Smith (2002). Through Monte Carlo simulations, we investigate the bias and the standard deviation of the combined estimates, as well as the root mean squared error (RMSE), which takes both into account. Comparing standard methods with their combined versions, the latter achieve a lower RMSE for the two semi-parametric estimators under study (by about 30% on average for ARFIMA(0,d,0) series).
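The variance-reduction mechanism can be sketched directly. The weight below is the in-sample variance minimiser for a convex combination of two estimator series; it is illustrative and is not the bias-correction-plus-combination scheme of Souza and Smith (2002):

```python
import numpy as np

def min_variance_weight(est_a, est_b):
    """Weight w minimising the sample variance of w*a + (1-w)*b,
    taking the covariance between the two estimators into account."""
    va, vb = np.var(est_a), np.var(est_b)
    cov = np.mean((est_a - est_a.mean()) * (est_b - est_b.mean()))
    return (vb - cov) / (va + vb - 2.0 * cov)

def combine(est_a, est_b, w):
    """Convex combination of the two estimate series."""
    return w * est_a + (1.0 - w) * est_b

# Two noisy, correlated estimate series of the same memory parameter d
rng = np.random.default_rng(4)
d_true = 0.3
common = rng.normal(scale=0.02, size=1000)
est_full = d_true + common + rng.normal(scale=0.05, size=1000)
est_low = d_true + common + rng.normal(scale=0.08, size=1000)

w = min_variance_weight(est_full, est_low)
combined = combine(est_full, est_low, w)
```

Because the weight minimises the sample variance exactly, the combined series can never be more variable than the better of the two inputs; in practice the combination also inherits any residual bias, which is why the RMSE is the relevant yardstick.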

Relevance: 100.00%

Abstract:

This paper considers the general problem of Feasible Generalized Least Squares Instrumental Variables (FGLS IV) estimation using optimal instruments. First we summarize the sufficient conditions for the FGLS IV estimator to be asymptotically equivalent to an optimal GLS IV estimator. Then we specialize to stationary dynamic systems with stationary VAR errors, and use the sufficient conditions to derive new moment conditions for these models. These moment conditions produce useful IVs from the lagged endogenous variables, despite the correlation between errors and endogenous variables. This use of the information contained in the lagged endogenous variables expands the class of IV estimators under consideration and thereby potentially improves both the asymptotic and small-sample efficiency of the optimal IV estimator in the class. Some Monte Carlo experiments compare the new methods with those of Hatanaka [1976]. For the DGP used in the Monte Carlo experiments, asymptotic efficiency is strictly improved by the new IVs, and experimental small-sample efficiency is improved as well.
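A minimal instrumental-variables sketch conveys why valid instruments remove the endogeneity bias that motivates the paper. This is plain 2SLS on a static cross-section, not the FGLS IV estimator with optimal instruments derived above; the data-generating process is invented for illustration:

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Basic 2SLS: beta = (X' Pz X)^{-1} X' Pz y, with Pz = Z (Z'Z)^{-1} Z'."""
    fitted_X = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)   # first-stage fit
    return np.linalg.solve(fitted_X.T @ X, fitted_X.T @ y)

rng = np.random.default_rng(5)
n = 5000
z = rng.normal(size=(n, 1))                   # exogenous instrument
v = rng.normal(size=(n, 1))
e = 0.8 * v + 0.6 * rng.normal(size=(n, 1))   # error correlated with v
x = z + v                                     # endogenous regressor
y = 1.0 * x + e                               # true coefficient: 1.0

beta_ols = np.linalg.solve(x.T @ x, x.T @ y)[0, 0]   # biased
beta_iv = two_stage_least_squares(y, x, z)[0, 0]     # consistent
```

OLS converges to 1.4 here (bias cov(x,e)/var(x) = 0.4), while 2SLS recovers the true coefficient; the paper's contribution is showing how lagged endogenous variables can legitimately enter the instrument set despite such correlation.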

Relevance: 100.00%

Abstract:

Every month the United States Department of Agriculture (USDA) publishes reports with data on crop conditions, global supply and demand, and stock levels, which serve as a reference for all participants in the agricultural commodities market. This market shows pronounced volatility around the release of these reports. A stochastic-volatility model with jumps is used for the price dynamics of corn and soybeans. There is no 'ideal' model for this purpose; each of the existing ones has its advantages and disadvantages. The model chosen was that of Oztukel and Wilmott (1998), an empirical stochastic-volatility model augmented with deterministic jumps. Empirically, it was shown that a stochastic-volatility model can be fitted well to the commodities market, and that a jump-diffusion process can represent well the jumps the market exhibits when the reports are released. Exchange-traded agricultural commodity options are American-style, so several available methods could be used to price options under the dynamics of the proposed model. Since the chosen model is a multi-factor model, the appropriate pricing method is the one proposed by Longstaff and Schwartz (2001), called least-squares Monte Carlo (LSM). The options priced by the model are used in a strategy to hedge a physical position in corn and soybeans, and the efficiency of this strategy is compared with strategies using instruments available in the market.
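The Longstaff-Schwartz (2001) LSM algorithm named above can be sketched for a single-factor American put under geometric Brownian motion. The thesis applies it to a multi-factor stochastic-volatility model with jumps; the parameters below are the classic textbook example, not commodity data:

```python
import numpy as np

def lsm_american_put(S0, K, r, sigma, T, n_steps=50, n_paths=20_000, seed=0):
    """Least-squares Monte Carlo (Longstaff-Schwartz) price of an
    American put under GBM, with a quadratic regression basis."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)
    # Simulate GBM paths (n_paths x n_steps, excluding t = 0)
    z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)    # exercise value at maturity
    for t in range(n_steps - 2, -1, -1):
        cash *= disc                        # discount back one step
        itm = K - S[:, t] > 0.0             # regress on in-the-money paths only
        if itm.sum() < 10:
            continue
        coeffs = np.polyfit(S[itm, t], cash[itm], 2)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise = K - S[itm, t]
        stop = exercise > continuation      # optimal to exercise now
        idx = np.flatnonzero(itm)[stop]
        cash[idx] = exercise[stop]
    return float(disc * cash.mean())

price = lsm_american_put(36.0, 40.0, 0.06, 0.2, 1.0)
```

The regression of discounted continuation values on the current state is what lets LSM handle early exercise path-by-path, which is why it extends naturally to the multi-factor dynamics used in the thesis.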

Relevance: 100.00%

Abstract:

Leverage in hedge funds has worried investors and scholars in recent years. Recent examples of such strategies proved advantageous in periods of low economic uncertainty, but disastrous in times of crisis. In quantitative finance, there has been a search for the level of leverage that optimizes the return on an investment given the risk taken. In the literature, studies have been more qualitative than quantitative, and computational methods have seldom been used to find a solution. One way to assess whether one leverage strategy yields higher gains than another is to define an objective function relating risk and return for each strategy, identify the constraints of the problem, and solve it numerically through Monte Carlo simulations. This dissertation adopted this approach to study investment in a long-short strategy within an equity investment fund under different scenarios: different forms of leverage, stock-price dynamics, and levels of correlation between those prices. Simulations were run of the dynamics of the invested capital as a function of stock-price changes over time. Credit-guarantee criteria were considered, as well as the possibility of buying and selling shares during the investment period and the investor's risk profile. Finally, the distribution of the investment return was studied for different leverage levels, and it was possible to quantify which of these levels is most advantageous for the investment strategy given the risk constraints.
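The optimisation loop described above, simulating the strategy under each leverage level and picking the best risk-adjusted score, reduces to a few lines. The Gaussian single-period return and the mean-variance objective are stand-in assumptions; the dissertation models full long-short price dynamics with margin constraints:

```python
import numpy as np

def simulate_strategy(leverage, mu=0.08, sigma=0.15, n_sims=20_000, seed=2):
    """Terminal wealth when the unlevered annual return is N(mu, sigma^2)
    and leverage scales both return and risk (financing costs ignored)."""
    rng = np.random.default_rng(seed)
    return 1.0 + leverage * rng.normal(mu, sigma, n_sims)

def mean_variance_score(wealth, risk_aversion=2.0):
    """Objective relating risk and return: mean return minus a
    risk-aversion penalty on the variance of returns."""
    ret = wealth - 1.0
    return float(ret.mean() - risk_aversion * ret.var())

grid = np.arange(0.5, 4.01, 0.5)
scores = {float(L): mean_variance_score(simulate_strategy(L)) for L in grid}
best_leverage = max(scores, key=scores.get)
```

Because the penalty grows with the square of leverage while the expected gain grows linearly, the score is concave in the leverage level and an interior optimum emerges, which is the pattern the dissertation quantifies under richer dynamics.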

Relevance: 100.00%

Abstract:

The aim of this paper is to analyze extremal events using Generalized Pareto Distributions (GPD), considering explicitly the uncertainty about the threshold. Current practice determines this quantity empirically and proceeds by estimating the GPD parameters based on data beyond it, discarding all the information available below the threshold. We introduce a mixture model that combines a parametric form for the center and a GPD for the tail of the distribution, and uses all observations for inference about the unknown parameters from both distributions, the threshold included. Prior distributions for the parameters are obtained indirectly through the elicitation of expert quantiles. Posterior inference is available through Markov chain Monte Carlo (MCMC) methods. Simulations are carried out in order to analyze the performance of our proposed model under a wide range of scenarios. Those scenarios approximate realistic situations found in the literature. We also apply the proposed model to a real dataset, the Nasdaq 100, a financial-market index that presents many extreme events. Important issues such as predictive analysis and model selection are considered, along with possible modeling extensions.
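A small frequentist sketch shows the tail piece of the model: fitting a GPD to threshold exceedances by the method of moments. This is a deliberately simple stand-in for the paper's Bayesian treatment, which keeps the threshold unknown and samples all parameters by MCMC:

```python
import numpy as np

def gpd_fit_moments(exceedances):
    """Method-of-moments estimates (xi, beta) of a GPD fitted to
    exceedances over a fixed threshold, using E[X] = beta/(1-xi) and
    Var[X] = beta^2 / ((1-xi)^2 (1-2*xi))."""
    m = exceedances.mean()
    v = exceedances.var(ddof=1)
    xi = 0.5 * (1.0 - m**2 / v)
    beta = 0.5 * m * (1.0 + m**2 / v)
    return float(xi), float(beta)

# Simulate GPD(xi=0.1, beta=1) exceedances via the inverse CDF
rng = np.random.default_rng(6)
xi_true, beta_true = 0.1, 1.0
u = rng.uniform(size=50_000)
x = beta_true / xi_true * ((1.0 - u) ** (-xi_true) - 1.0)

xi_hat, beta_hat = gpd_fit_moments(x)
```

A positive shape parameter xi signals a heavy tail, the regime relevant for the financial extremes studied in the paper; the mixture model's advantage is precisely that it does not fix the threshold before this fit.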

Relevance: 100.00%

Abstract:

When estimating policy parameters, also known as treatment effects, the mechanism of assignment to treatment almost always causes endogeneity and thus biases many estimates of these policy parameters. Additionally, heterogeneity in program impacts is more likely to be the norm than the exception for most social programs. In situations where these issues are present, estimation of the Marginal Treatment Effect (MTE) parameter makes use of an instrument to avoid assignment bias and simultaneously to account for effects that are heterogeneous across individuals. Although this parameter is point identified in the literature, the assumptions required for identification may be strong. Given that, we use weaker assumptions in order to partially identify the MTE, i.e., to establish a methodology for estimating MTE bounds, implementing it computationally and showing results from Monte Carlo simulations. The partial identification we perform requires the MTE to be a monotone function of the propensity score, which is a reasonable assumption in several economic examples, and the simulation results show that it is possible to obtain informative bounds even in restricted cases where point identification is lost. Additionally, in situations where the estimated bounds are not informative and traditional point identification is lost, we suggest a more generic method to point estimate the MTE using the Moore-Penrose pseudo-inverse matrix, achieving better results than traditional methods.
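The Moore-Penrose step can be illustrated in isolation: `np.linalg.pinv` returns the minimum-norm least-squares solution of a linear system, which is the generic device the thesis applies, in a much richer setting, to point estimate the MTE. The linear model below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 5))                  # design matrix
beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])    # true coefficients
y = A @ beta + rng.normal(scale=0.01, size=100)

# Minimum-norm least-squares solution via the Moore-Penrose pseudo-inverse;
# unlike a plain inverse, pinv is defined even when A'A is singular.
beta_hat = np.linalg.pinv(A) @ y
```

The robustness of `pinv` to rank deficiency is what makes it attractive when point identification fails and the usual normal equations cannot be inverted.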

Relevance: 100.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification of the test statistic that provides a better heteroskedasticity correction in our simulations.
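The placebo logic behind permutation tests of this kind can be sketched in a few lines: estimate the same "effect" for every group as if it had been treated, and locate the actual treated group's estimate in that placebo distribution. This is the generic permutation idea, not the heteroskedasticity-robust statistic derived in the paper, and the data are invented:

```python
import numpy as np

def permutation_pvalue(effects, treated_index):
    """Share of groups whose placebo effect is at least as extreme
    (in absolute value) as the treated group's estimate."""
    actual = abs(effects[treated_index])
    return float(np.mean(np.abs(effects) >= actual))

rng = np.random.default_rng(7)
n_groups = 50
effects = rng.normal(size=n_groups)   # placebo estimates: pure noise
effects[0] += 8.0                     # group 0 received a real effect
p_value = permutation_pvalue(effects, treated_index=0)
```

With 50 groups the smallest attainable p-value is 1/50 = 0.02; the paper's point is that when group sizes (and hence error variances) differ, this exchangeability-based ranking needs a heteroskedasticity correction to remain valid.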

Relevance: 100.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provides a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).