15 results for semi-parametric models
in the Repositório Digital da Fundação Getúlio Vargas (FGV)
Abstract:
In this paper, we propose a class of ACD-type models that accommodates overdispersion, intermittent dynamics, multiple regimes, and sign and size asymmetries in financial durations. In particular, our functional coefficient autoregressive conditional duration (FC-ACD) model relies on a smooth-transition autoregressive specification, motivated by the fact that the latter yields a universal approximation if one lets the number of regimes grow without bound. After establishing that the sufficient conditions for strict stationarity do not exclude explosive regimes, we address model identifiability as well as the existence, consistency, and asymptotic normality of the quasi-maximum likelihood (QML) estimator for the FC-ACD model with a fixed number of regimes. In addition, we discuss how to consistently estimate, using a sieve approach, a semiparametric variant of the FC-ACD model that takes the number of regimes to infinity. An empirical illustration indicates that our functional coefficient model is flexible enough to model IBM price durations.
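As a rough illustration of the mechanism described above, here is a minimal two-regime sketch of a smooth-transition ACD recursion; the parameter values, the choice of lagged log-duration as the transition variable, and the exponential innovation (the QML baseline) are all illustrative rather than the paper's exact specification.

```python
import numpy as np

def logistic(x, gamma, c):
    """Smooth (logistic) transition between two regimes."""
    return 1.0 / (1.0 + np.exp(-gamma * (x - c)))

def simulate_fc_acd(n, omega, alpha, beta, gamma, c, seed=0):
    """Simulate durations x_i = psi_i * eps_i, where the ACD coefficients
    blend smoothly between two regimes as a function of the lagged
    log-duration. omega, alpha, beta are (regime-0, regime-1) pairs."""
    rng = np.random.default_rng(seed)
    x, psi = np.empty(n), np.empty(n)
    psi[0] = 1.0
    x[0] = psi[0] * rng.exponential()
    for i in range(1, n):
        g = logistic(np.log(x[i - 1]), gamma, c)    # weight of regime 1
        w = (1 - g) * omega[0] + g * omega[1]
        a = (1 - g) * alpha[0] + g * alpha[1]
        b = (1 - g) * beta[0] + g * beta[1]
        psi[i] = w + a * x[i - 1] + b * psi[i - 1]  # conditional mean duration
        x[i] = psi[i] * rng.exponential()           # exponential QML baseline
    return x, psi

durations, _ = simulate_fc_acd(1000, omega=(0.10, 0.05), alpha=(0.1, 0.3),
                               beta=(0.8, 0.6), gamma=2.0, c=0.0)
```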
Abstract:
The goal of this paper is to introduce a class of tree-structured models that combines aspects of regression trees and smooth transition regression models. The model is called the Smooth Transition Regression Tree (STR-Tree). The main idea relies on specifying a multiple-regime parametric model through a tree-growing procedure with smooth transitions among different regimes. Decisions about splits are entirely based on a sequence of Lagrange Multiplier (LM) tests of hypotheses.
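The key difference from a CART-style tree is that each split is soft: observations belong to both children with logistic weights, so the fitted function is smooth in the splitting variable. A minimal sketch of how such a fitted tree could be evaluated follows; the node layout, parameter values, and local linear leaves are hypothetical, not the paper's exact implementation.

```python
import numpy as np

def smooth_split(x_split, gamma, c):
    """Logistic weight of each observation in the 'right' child;
    1 - weight goes to the 'left' child."""
    return 1.0 / (1.0 + np.exp(-gamma * (x_split - c)))

def str_tree_predict(X, node):
    """Evaluate a fitted STR-Tree recursively: leaves hold local
    regression coefficients, internal nodes blend children smoothly."""
    if "beta" in node:                        # leaf: local linear model
        return X @ node["beta"]
    g = smooth_split(X[:, node["var"]], node["gamma"], node["c"])
    return ((1 - g) * str_tree_predict(X, node["left"])
            + g * str_tree_predict(X, node["right"]))

# Hypothetical tree with one smooth split on the second regressor.
tree = {"var": 1, "gamma": 5.0, "c": 0.0,
        "left":  {"beta": np.array([1.0, 0.5])},
        "right": {"beta": np.array([2.0, -0.3])}}
X = np.column_stack([np.ones(5), np.linspace(-1, 1, 5)])
print(str_tree_predict(X, tree))
```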
Abstract:
This work investigates the impact of schooling on the income distribution in states/regions of Brazil. Using a semi-parametric model, discussed in DiNardo, Fortin & Lemieux (1996), we measure how much of the income difference between the Northeast and Southeast regions (the country's poorest and richest) and between the states of Ceará and São Paulo within those regions can be explained by differences in the schooling levels of the resident population. Using data from the National Household Survey (PNAD), we construct counterfactual densities by reweighting the distribution of the poorest region/state by the schooling profile of the richest. We conclude that: (i) more than 50% of the income difference is explained by the difference in schooling; (ii) the highest deciles of the income distribution gain more from an increase in schooling, closely approaching the wage distribution of the richest region/state; and (iii) an increase in schooling, holding the wage structure constant, aggravates wage disparity in the poorest regions/states.
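The reweighting step lends itself to a compact sketch. Assuming a discrete schooling variable and ignoring the survey weights and covariate refinements a full DiNardo-Fortin-Lemieux implementation would need, a counterfactual density can be built as follows (all names are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde

def dfl_counterfactual_density(wages_poor, school_poor, school_rich, grid):
    """Reweight the poor region's wage sample so that its schooling
    composition matches the rich region's, then estimate the density."""
    levels, counts = np.unique(school_poor, return_counts=True)
    p_poor = dict(zip(levels, counts / len(school_poor)))
    levels_r, counts_r = np.unique(school_rich, return_counts=True)
    p_rich = dict(zip(levels_r, counts_r / len(school_rich)))
    # Reweighting factor Pr(s | rich) / Pr(s | poor) per observation.
    w = np.array([p_rich.get(s, 0.0) / p_poor[s] for s in school_poor])
    kde = gaussian_kde(wages_poor, weights=w / w.sum())
    return kde(grid)   # counterfactual wage density on the grid
```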
Abstract:
Among the main challenges in computing portfolio risk measures is how to aggregate risks. This aggregation must be done in a way that captures the risk-diversification effect present in a transaction or a portfolio. Much effort has therefore gone into identifying the best way to arrive at such a definition. Some models, such as parametric Value at Risk (VaR), assume that the marginal distribution of each variable in the portfolio follows the same distribution, namely a normal one, and concern themselves only with correctly modelling the volatility and the correlation matrix. Models such as historical VaR take the actual distribution of each variable and do not concern themselves with the shape of the resulting multivariate distribution. Copula theory is therefore a strong alternative, since it allows multivariate distributions to be constructed without imposing any restriction on the marginal distributions, let alone on the multivariate ones. In this work we compare this methodology with the other risk-calculation methodologies, namely multivariate parametric VaR (VEC, Diagonal, BEKK, EWMA, CCC, and DCC) and historical VaR, for a portfolio of identical positions in four risk factors: Pre252, Cupo252, the Bovespa index, and the Dow Jones index.
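As an illustration of the copula idea, here is a sketch of a Gaussian-copula VaR for an equally weighted portfolio: the marginals come from the empirical distributions (no normality assumption), and only the dependence structure is parameterized. This is one possible copula-based computation, not necessarily the exact procedure used in the work, and the function name and simulation settings are illustrative.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_var(returns, alpha=0.01, n_sim=100_000, seed=0):
    """Gaussian-copula VaR sketch: empirical marginals, dependence from
    the correlation of normal scores. returns: (n_obs, n_assets)."""
    rng = np.random.default_rng(seed)
    n_obs, n_assets = returns.shape
    # Probability integral transform via empirical ranks.
    u = (np.argsort(np.argsort(returns, axis=0), axis=0) + 0.5) / n_obs
    corr = np.corrcoef(norm.ppf(u), rowvar=False)   # copula correlation
    u_sim = norm.cdf(rng.multivariate_normal(np.zeros(n_assets), corr,
                                             size=n_sim))
    # Map simulated uniforms back through empirical quantile functions.
    sim = np.column_stack([np.quantile(returns[:, j], u_sim[:, j])
                           for j in range(n_assets)])
    port = sim.mean(axis=1)                 # identical positions, as above
    return -np.quantile(port, alpha)        # VaR at level alpha
```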
Abstract:
Convex combinations of long memory estimates obtained from the same data observed at different sampling rates can decrease the standard deviation of the estimates, at the cost of inducing a slight bias. Combining such estimates requires a preliminary correction for the bias observed at lower sampling rates, reported by Souza and Smith (2002). Through Monte Carlo simulations, we investigate the bias and the standard deviation of the combined estimates, as well as the root mean squared error (RMSE), which takes both into account. Comparing standard methods with their combined versions, the latter achieve lower RMSE for the two semi-parametric estimators under study (by about 30% on average for ARFIMA(0,d,0) series).
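The combination itself is simple arithmetic once the bias correction and the (co)variances are in hand. A sketch follows, with the minimum-variance weight from standard portfolio algebra; the inputs would come from asymptotics or a Monte Carlo study, and all names are illustrative.

```python
import numpy as np

def combine_estimates(d_full, d_low, bias_low, var_full, var_low, cov):
    """Minimum-variance convex combination of two long-memory estimates:
    d_full from the original sampling rate, d_low from a lower rate,
    bias-corrected first (as in Souza and Smith (2002))."""
    d_low_c = d_low - bias_low                     # preliminary bias correction
    w = (var_low - cov) / (var_full + var_low - 2 * cov)
    w = np.clip(w, 0.0, 1.0)                       # keep the combination convex
    return w * d_full + (1 - w) * d_low_c
```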
Abstract:
Productivity is often computed by approximating a Cobb-Douglas production function. Such estimates, however, may suffer from simultaneity and input selection bias. Olley and Pakes (1996) introduced a semi-parametric method that allows us to estimate the production function parameters consistently and thus obtain reliable productivity measures, controlling for such bias problems. This study applies the method to a firm in the sugar-ethanol sector and uses Stata's opreg command to estimate the production function, describing the economic intuition behind the results.
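Stata's opreg implements the full routine; purely as an illustration of its first stage, the sketch below recovers the labor coefficient by regressing log output on log labor plus a polynomial in log capital and log investment that proxies for unobserved productivity. Variable names and the polynomial degree are illustrative, and the second stage (with the survival correction) is omitted.

```python
import numpy as np

def olley_pakes_first_stage(log_y, log_l, log_k, log_i, degree=3):
    """First stage of the Olley-Pakes (1996) procedure (sketch):
    the polynomial in (log_k, log_i) absorbs unobserved productivity,
    so the labor coefficient is estimated consistently."""
    # Polynomial terms in (k, i) up to the chosen total degree;
    # p = q = 0 yields the constant term.
    poly = [log_k**p * log_i**q
            for p in range(degree + 1) for q in range(degree + 1 - p)]
    X = np.column_stack([log_l] + poly)
    beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
    return beta[0]                                 # labor elasticity
```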
Abstract:
This thesis consists of three essays that analyze the term structure of interest rates using different data sets and models. Chapter 1 proposes a parametric interest rate model that allows for segmentation and local shocks in the term structure. Using U.S. Treasury data, two versions of this segmented model are implemented. Based on a sequence of 142 forecasting experiments, the proposed models are compared to benchmarks and found to perform better out of sample, especially for short maturities and for the 12-month forecast horizon. Chapter 2 adds no-arbitrage restrictions to the estimation of a dynamic Gaussian polynomial term structure model for the Brazilian interest rate market. The essay proposes an important approximation of the time series of term structure risk factors, which allows the interest rate risk premium to be extracted without optimizing a fully dynamic model. This methodology has the advantage of being easy to implement and provides a good approximation of the term structure risk premium, which can be used in different applications. Chapter 3 models the joint dynamics of nominal and real rates using a no-arbitrage affine term structure model with macroeconomic variables, in order to decompose the spread between nominal and real rates into an inflation risk premium and inflation expectations in the U.S. market. Versions with and without macroeconomic variables are implemented; the estimated inflation risk premia are small and stable over the sample period, but differ across the two models.
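By way of illustration only (the thesis's segmented and polynomial specifications are richer than this), a cross-sectional polynomial fit of a yield curve takes a few lines with standard tools; the maturities and yields below are made-up placeholders, not actual Treasury quotes.

```python
import numpy as np

def fit_polynomial_curve(maturities, yields, degree=3):
    """Cross-sectional polynomial yield curve: y(tau) = sum_k a_k tau^k."""
    return np.poly1d(np.polyfit(maturities, yields, degree))

# Illustrative placeholder data.
curve = fit_polynomial_curve(np.array([0.25, 0.5, 1, 2, 5, 10]),
                             np.array([4.8, 4.9, 5.0, 5.1, 5.3, 5.5]))
print(curve(3.0))   # interpolated 3-year yield
```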
Abstract:
This paper presents a methodology to estimate and identify different kinds of economic interaction, whenever these interactions can be established in the form of spatial dependence. First, we apply the semi-parametric approach of Chen and Conley (2001) to the estimation of reaction functions. Then, the methodology is applied to the analysis of financial providers in Thailand. Based on a sample of financial institutions, we provide an economic framework to test whether the observed spatial pattern is compatible with strategic competition (local interactions) or social planning (global interactions). Our estimates suggest that the provision of commercial banks and supplier credit access is determined by spatial competition, while the Thai Bank of Agriculture and Agricultural Cooperatives is distributed as in a social planner's problem.
Abstract:
This paper deals with the estimation and testing of conditional duration models by looking at the density and baseline hazard rate functions. More precisely, we focus on the distance between the parametric density (or hazard rate) function implied by the duration process and its non-parametric estimate. Asymptotic justification is derived using the functional delta method for fixed and gamma kernels, whereas finite-sample properties are investigated through Monte Carlo simulations. Finally, we show the practical usefulness of such testing procedures by carrying out an empirical assessment of whether autoregressive conditional duration models are appropriate tools for modelling price durations of stocks traded on the New York Stock Exchange.
Abstract:
This paper deals with the testing of autoregressive conditional duration (ACD) models by gauging the distance between the parametric density and hazard rate functions implied by the duration process and their non-parametric estimates. We derive the asymptotic justification using the functional delta method for fixed and gamma kernels, and then investigate the finite-sample properties through Monte Carlo simulations. Although our tests display some size distortion, bootstrapping suffices to correct the size without compromising their excellent power. We show the practical usefulness of such testing procedures for the estimation of intraday volatility patterns.
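The logic of the test is easy to sketch: estimate the density of the standardized durations non-parametrically, measure its L2 distance from the density implied by the null (unit exponential under exponential QML), and bootstrap the null distribution of the statistic. The papers use gamma kernels, which are better suited to positive data; the Gaussian kernel below is used only to keep the sketch short, and all names are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde, expon

def density_distance_test(resid, n_boot=500, seed=0):
    """L2 distance between a kernel density of standardized ACD
    residuals and the unit exponential density, with a bootstrap
    p-value under the i.i.d. exponential null."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.01, np.quantile(resid, 0.99), 200)

    def stat(x):
        kde = gaussian_kde(x)
        return np.trapz((kde(grid) - expon.pdf(grid))**2, grid)

    t_obs = stat(resid)
    t_boot = np.array([stat(rng.exponential(size=len(resid)))
                       for _ in range(n_boot)])
    return t_obs, np.mean(t_boot >= t_obs)   # statistic and p-value
```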
Abstract:
Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, risk management, and the study of monetary policy implications. Dynamic term structure models, in turn, equipped with stronger economic structure, have mainly been adopted to price derivatives and explain empirical stylized facts. In this paper, we combine flavors of these two classes of models to test whether no-arbitrage restrictions affect forecasting. We construct cross-sectional (allowing arbitrage) and arbitrage-free versions of a parametric polynomial model and analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts: arbitrage-free versions achieve smaller overall biases and root mean square errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding-return premia indicates that the superior performance of the no-arbitrage versions is due to better identification of the bond risk premium.
Abstract:
This paper investigates which properties money-demand functions have to satisfy to be consistent with multidimensional extensions of Lucas's (2000) versions of the Sidrauski (1967) and shopping-time models. We also investigate how these classes of models relate to each other with respect to the rationalization of money demands. We conclude that money-demand functions rationalizable by the shopping-time model are always rationalizable by the Sidrauski model, but that the converse is not true. The log-log money demand with an interest-rate elasticity greater than or equal to one and the semi-log money demand are counterexamples.
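For reference, the two functional forms named as counterexamples are standard in this literature; in commonly used notation (the abstract itself does not fix symbols), with m denoting real balances and i the nominal interest rate:

```latex
\text{log-log:}\qquad m = A\, i^{-\eta}
  \iff \ln m = \ln A - \eta \ln i,
  \qquad \eta \ge 1 \ (\text{constant interest-rate elasticity})

\text{semi-log:}\qquad \ln m = \ln B - \xi\, i,
  \qquad \xi > 0 \ (\text{constant semi-elasticity})
```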
Abstract:
The main objective of this work is to identify and analyze the impact of ITIL practices on the organizational flexibility of a multinational IT company; the study is quali-quantitative and exploratory in nature. To achieve this objective, theoretical studies on bureaucracy, organizational flexibility, control, IT governance, and ITIL were carried out in order to better understand the research problem. The analysis considered a set of eleven ITIL processes (service desk, incident management, problem management, change management, configuration management, release management, service level management, availability management, capacity management, continuity management, and IT financial services management), grouped into the two core areas of service support and service delivery. A scale was then constructed and validated, based on the theoretical models of Volberda (1997), Tenório (2002), and Golden and Powell (1999), to measure the flexibility associated with each process in the ITIL core. The dimensions adopted to measure flexibility were: organization design task, managerial task, IT impact on the workforce, HR management, efficiency impact, sensitivity, versatility, and robustness. The research instrument was a semi-structured interview divided into two parts. Data were collected from ten interviewees at a multinational IT company, selected by convenience; some were managers and others users, and some were ITIL certified while others were not. Student's t test and the non-parametric Wilcoxon test were applied. The results indicate that the ITIL service support area, with its stronger operational focus, tends toward flexibility, whereas the opposite holds for the service delivery area, which has a more tactical focus. The results also suggest that the change management discipline contributed the most to flexibility inside the company, followed by the incident management discipline and the service desk function.
Abstract:
This paper provides a systematic and unified treatment of developments in the area of kernel estimation in econometrics and statistics. Both estimation and hypothesis-testing issues are discussed for nonparametric and semiparametric regression models. A discussion of the choice of window width (bandwidth) is also presented.
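The canonical example of the methods surveyed is the Nadaraya-Watson kernel regression estimator. A minimal sketch with a Gaussian kernel follows; the bandwidth plays exactly the window-width role discussed in the paper (too small overfits, too large oversmooths).

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    a locally weighted average of the training responses."""
    u = (x_eval[:, None] - x_train[None, :]) / bandwidth
    k = np.exp(-0.5 * u**2)                 # Gaussian kernel weights
    return (k @ y_train) / k.sum(axis=1)
```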
Abstract:
Life cycle general equilibrium models with heterogeneous agents have a very hard time reproducing the American wealth distribution. A common assumption in this literature is that all young adults enter the economy with no initial assets. In this article, we relax this assumption, which is not supported by the data, and evaluate the ability of an otherwise standard life cycle model to account for U.S. wealth inequality. The new feature of the model is that agents enter the economy with assets drawn from an initial distribution, which is estimated using a non-parametric method applied to data from the Survey of Consumer Finances. We find that heterogeneity with respect to initial wealth is key for this class of models to replicate the data. According to our results, American inequality can be explained almost entirely by the fact that some individuals are lucky enough to be born into wealth, while others are born with few or no assets.
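One way to implement the non-parametric ingredient is a weighted kernel density estimate of young households' assets, from which newborn agents' endowments are drawn. The sketch below shows one such choice; the names, the Gaussian kernel, and the handling of negative draws are illustrative assumptions, not necessarily the authors' method.

```python
import numpy as np
from scipy.stats import gaussian_kde

def draw_initial_assets(assets, survey_weights, n_agents, seed=0):
    """Estimate the density of initial assets non-parametrically from
    (hypothetical) SCF data on young households, then sample newborn
    agents' endowments from it."""
    kde = gaussian_kde(assets, weights=survey_weights / survey_weights.sum())
    draws = kde.resample(n_agents, seed=seed).ravel()
    return np.abs(draws)   # reflect negative draws (illustrative handling)
```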