901 results for Bayes Estimator


Relevance: 10.00%

Abstract:

In this paper, we propose a class of ACD-type models that accommodates overdispersion, intermittent dynamics, multiple regimes, and sign and size asymmetries in financial durations. In particular, our functional coefficient autoregressive conditional duration (FC-ACD) model relies on a smooth-transition autoregressive specification. The motivation lies in the fact that the latter yields a universal approximation if one lets the number of regimes grow without bound. After establishing that the sufficient conditions for strict stationarity do not exclude explosive regimes, we address model identifiability as well as the existence, consistency, and asymptotic normality of the quasi-maximum likelihood (QML) estimator for the FC-ACD model with a fixed number of regimes. In addition, we discuss how to consistently estimate, using a sieve approach, a semiparametric variant of the FC-ACD model that takes the number of regimes to infinity. An empirical illustration indicates that our functional coefficient model is flexible enough to model IBM price durations.
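
As a concrete illustration of the smooth-transition specification underlying the FC-ACD model (a minimal two-regime sketch with assumed notation, not the paper's exact formulation), write the duration as x_i = \psi_i \varepsilon_i and let the conditional expected duration switch smoothly between two ACD regimes:

\[
\psi_i = \bigl[\omega_1 + \alpha_1 x_{i-1} + \beta_1 \psi_{i-1}\bigr]\bigl(1 - G(s_{i-1})\bigr) + \bigl[\omega_2 + \alpha_2 x_{i-1} + \beta_2 \psi_{i-1}\bigr]\, G(s_{i-1}), \qquad G(s) = \frac{1}{1 + e^{-\gamma (s - c)}} .
\]

Here the logistic function G moves the coefficients smoothly between regimes as the transition variable s_{i-1} (for example, the lagged duration) crosses the location parameter c; stacking additional logistic terms adds regimes, which is the sense in which letting the number of regimes grow without bound yields a universal approximation.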

Relevance: 10.00%

Abstract:

We extend the standard price discovery analysis to estimate the information share of dual-class shares across domestic and foreign markets. By examining both common and preferred shares, we aim to extract information not only about the fundamental value of the firm, but also about the dual-class premium. In particular, our interest lies in the price discovery mechanism regulating the prices of common and preferred shares in the BM&FBovespa as well as the prices of their ADR counterparts in the NYSE and in the Arca platform. However, in the presence of contemporaneous correlation between the innovations, the standard information share measure depends heavily on the ordering we attribute to prices in the system. To remain agnostic about which are the leading share class and market, one could for instance compute some weighted average of the information shares across all possible orderings. This is extremely inconvenient given that we are dealing with 2 share prices in Brazil, 4 share prices in the US, plus the exchange rate (and hence over 5,000 permutations!). We thus develop a novel methodology to carry out price discovery analyses that does not impose any ex-ante assumption about which share class or trading platform conveys more information about shocks in the fundamental price. As such, our procedure yields a single measure of information share, which is invariant to the ordering of the variables in the system. Simulations of a simple market microstructure model show that our information share estimator works well in practice. We then employ transactions data to study price discovery in two dual-class Brazilian stocks and their ADRs. We uncover two interesting findings. First, the foreign market is at least as informative as the home market. Second, shocks in the dual-class premium entail a permanent effect in normal times, but a transitory one in periods of financial distress. We argue that the latter is consistent with the expropriation of preferred shareholders as a class.
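
To see where the ordering dependence comes from, recall the usual (Hasbrouck-type) information share construction that serves as the starting point here (a sketch with assumed notation): if \psi is the common row of the long-run impact matrix and \Omega = FF' is the covariance matrix of the price innovations, with F lower-triangular (Cholesky), then the share attributed to the j-th price is

\[
IS_j = \frac{\bigl([\psi F]_j\bigr)^2}{\psi\,\Omega\,\psi'} .
\]

Because the Cholesky factor credits contemporaneously correlated innovations to whichever variable is ordered first, permuting the seven variables in the system changes F and hence every IS_j, which is exactly the problem the ordering-invariant measure proposed here avoids.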

Relevance: 10.00%

Abstract:

This paper presents semiparametric estimators for treatment effect parameters when selection into treatment is based on observable characteristics. The parameters of interest are those that capture summarized distributional effects of the treatment. In particular, the focus is on the impact of the treatment calculated by differences in inequality measures of the potential outcomes of receiving and not receiving the treatment. These differences are called here inequality treatment effects. The estimation procedure involves a first nonparametric step in which the probability of receiving treatment given covariates, the propensity score, is estimated. In the second step, using the reweighting method to estimate parameters of the marginal distributions of the potential outcomes, weighted sample versions of inequality measures are computed. Calculations of semiparametric efficiency bounds for inequality treatment effect parameters are presented. Root-N consistency, asymptotic normality, and attainment of the semiparametric efficiency bound are shown for the proposed semiparametric estimators. A Monte Carlo exercise is performed to investigate the finite-sample behavior of the estimator derived in the paper.
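
A minimal sketch of the two-step procedure described above, under assumptions of convenience (a parametric logit replaces the nonparametric first step, and the Gini coefficient stands in for the inequality measure; all names below are placeholders):

```python
# Illustrative sketch of an inequality treatment effect via propensity-score reweighting.
import numpy as np
from sklearn.linear_model import LogisticRegression

def gini(y, w):
    """Weighted Gini coefficient of outcomes y with (possibly unnormalized) weights w."""
    order = np.argsort(y)
    y, w = y[order], w[order]
    lorenz = np.cumsum(y * w) / np.sum(y * w)      # cumulative outcome share
    prev = np.concatenate(([0.0], lorenz[:-1]))
    return 1.0 - np.sum((w / np.sum(w)) * (lorenz + prev))

def inequality_treatment_effect(y, d, X):
    """Gini of the treated potential outcome minus Gini of the untreated one."""
    # Step 1: propensity score (nonparametric in the paper; a logit here for brevity).
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    # Step 2: inverse-probability weights recover the marginal distributions of the
    # potential outcomes; weighted inequality measures are then compared.
    w1 = d / ps
    w0 = (1 - d) / (1 - ps)
    return gini(y, w1) - gini(y, w0)
```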

Relevance: 10.00%

Abstract:

In recent years, many central banks have adopted inflation targeting policies, starting an intense debate about which measure of inflation to adopt. The literature on core inflation has tried to develop indicators of inflation that respond only to "significant" changes in inflation. This paper defines a measure of core inflation as the common trend of prices in a multivariate dynamic model that has, by construction, three properties: it filters out idiosyncratic noise, it filters out transitory macroeconomic noise, and it leads the future level of headline inflation. We also show that the popular trimmed mean estimator of core inflation could be regarded as a proxy for the ideal GLS estimator for heteroskedastic data. We employ an asymmetric trimmed mean estimator to account for possible skewness of the distribution, and we obtain an unconditional measure of core inflation.
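
A minimal sketch of an asymmetric trimmed mean of the type mentioned above, on placeholder data (the trimming proportions, item price changes and CPI weights below are illustrative assumptions, not the calibrated values):

```python
# Illustrative asymmetric trimmed-mean core-inflation measure.
import numpy as np

def asymmetric_trimmed_mean(price_changes, weights, trim_lower=0.15, trim_upper=0.25):
    """Weighted mean of item price changes after trimming trim_lower of the weight
    from the left tail and trim_upper from the right tail (items straddling a cutoff
    are simply dropped, a simplification relative to partial-weight trimming)."""
    order = np.argsort(price_changes)
    x, w = price_changes[order], weights[order] / weights.sum()
    cum_w = np.cumsum(w)
    keep = (cum_w > trim_lower) & (cum_w <= 1.0 - trim_upper)
    return np.average(x[keep], weights=w[keep])

# Example: monthly item-level inflation rates and their CPI weights
items = np.array([-0.4, 0.1, 0.2, 0.3, 0.5, 1.2, 4.0])
cpi_w = np.array([0.10, 0.20, 0.15, 0.25, 0.15, 0.10, 0.05])
core = asymmetric_trimmed_mean(items, cpi_w)
print(core)
```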

Relevance: 10.00%

Abstract:

We propose models to analyze animal growth data with the aim of estimating and predicting quantities of biological and economic interest, such as the maturing rate and the asymptotic weight. We also study the effect of environmental factors of relevant influence on the growth process. The models considered in this paper are based on an extension and specialization of the dynamic hierarchical model (Gamerman & Migon, 1993) to a non-linear growth curve setting, where some of the growth curve parameters are considered exchangeable among the units. Inference for these models is based on approximate conjugate analysis using Taylor series expansions and linear Bayes procedures.
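
The non-linear growth curve setting can be illustrated (an assumption for concreteness, since the abstract does not commit to a specific curve) by a Gompertz-type mean weight

\[
W_t = A \exp\!\bigl(-b\, e^{-k t}\bigr),
\]

where A is the asymptotic weight and k governs the maturing rate; in the dynamic hierarchical extension, parameters such as A and k are allowed to evolve over time and are treated as exchangeable across animals.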

Relevance: 10.00%

Abstract:

The heteroskedasticity-consistent covariance matrix estimator proposed by White (1980), also known as HC0, is commonly used in practical applications and is implemented in a number of statistical software packages. Cribari-Neto, Ferrari & Cordeiro (2000) have developed a bias-adjustment scheme that delivers bias-corrected White estimators. There are several variants of the original White estimator that are also commonly used by practitioners. These include the HC1, HC2 and HC3 estimators, which have proven to have superior small-sample behavior relative to White's estimator. This paper defines a general bias-correction mechanism that can be applied not only to White's estimator, but also to variants of this estimator, such as HC1, HC2 and HC3. Numerical evidence on the usefulness of the proposed corrections is also presented. Overall, the results favor the sequence of improved HC2 estimators.
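
For reference, a minimal sketch of the HC0-HC3 formulas mentioned above (the standard textbook versions; the paper's bias-correction scheme itself is not reproduced here):

```python
# Standard heteroskedasticity-consistent covariance estimators for OLS coefficients.
import numpy as np

def hc_covariance(X, y, kind="HC0"):
    """Sandwich covariance of the OLS estimator under heteroskedasticity of unknown form."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta                                   # OLS residuals
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)        # leverages h_ii
    if kind == "HC0":                                  # White (1980)
        omega = e**2
    elif kind == "HC1":                                # degrees-of-freedom correction
        omega = e**2 * n / (n - k)
    elif kind == "HC2":                                # leverage correction
        omega = e**2 / (1 - h)
    elif kind == "HC3":                                # jackknife-type correction
        omega = e**2 / (1 - h)**2
    else:
        raise ValueError(kind)
    meat = X.T @ (X * omega[:, None])
    return XtX_inv @ meat @ XtX_inv
```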

Relevance: 10.00%

Abstract:

This paper develops a general method for constructing similar tests based on the conditional distribution of nonpivotal statistics in a simultaneous equations model with normal errors and known reduced-form covariance matrix. The test based on the likelihood ratio statistic is particularly simple and has good power properties. When identification is strong, the power curve of this conditional likelihood ratio test is essentially equal to the power envelope for similar tests. Monte Carlo simulations also suggest that this test dominates the Anderson-Rubin test and the score test. When the restrictive assumption of normally distributed disturbances with known covariance matrix is dropped, approximate conditional tests are found that behave well in small samples even when identification is weak.
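
In the common one-endogenous-regressor presentation of this literature (stated here with assumed notation as background, not the paper's full construction), the likelihood ratio statistic is written in terms of a pair of independent sufficient statistics S and T, with Q_S = S'S, Q_T = T'T and Q_{ST} = S'T:

\[
LR = \tfrac{1}{2}\Bigl(Q_S - Q_T + \sqrt{(Q_S + Q_T)^2 - 4\bigl(Q_S Q_T - Q_{ST}^2\bigr)}\Bigr),
\]

and the similar test rejects when LR exceeds a critical value computed from its null distribution conditional on the observed value of Q_T, which is what keeps the test valid regardless of the strength of identification.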

Relevance: 10.00%

Abstract:

This work is devoted to the empirical exercise of imposing additional restrictions on the time-series asset pricing model developed by Hansen and Singleton (JPE, 1983). The restrictions range from a simple qualitative enlargement of the set of assets studied to a theoretical extension proposed on the basis of a consistent estimator of the stochastic discount factor. The estimates obtained for the relative risk aversion of the representative agent are, in most cases, within expectations, since they reach values already found in the literature and are economically plausible. The proposed theoretical extension did not achieve the expected results, appearing to improve the estimation of the system only marginally.
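
The restrictions in question come from the standard consumption-based Euler equation that the Hansen-Singleton approach estimates by GMM; with CRRA utility (the textbook form, stated here for context), the moment condition for each asset return R_{i,t+1} is

\[
\mathbb{E}\!\left[\,\beta\left(\frac{C_{t+1}}{C_t}\right)^{-\gamma} R_{i,t+1} - 1 \;\middle|\; \mathcal{I}_t\right] = 0,
\]

where \beta is the subjective discount factor, \gamma the coefficient of relative risk aversion, and \beta (C_{t+1}/C_t)^{-\gamma} the stochastic discount factor; adding assets or instruments adds restrictions of exactly this form.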

Relevance: 10.00%

Abstract:

Fraud detection models are used to identify whether a transaction is legitimate or fraudulent based on registration and transactional information. The technique proposed in the study presented in this dissertation is Bayesian Networks (BN); its results were compared with those of Logistic Regression (LR), a technique widely used in the market. The Bayesian networks evaluated were Bayesian classifiers with the Naive Bayes structure, and their structures were obtained from real data provided by a financial institution. The database was split into development and validation samples by ten-fold cross-validation. Naive Bayes classifiers were chosen because of their simplicity and efficiency. Model performance was evaluated using the confusion matrix and the area under the ROC curve. The analyses revealed slightly better performance of logistic regression than of the Bayesian classifiers. Logistic regression was chosen as the most suitable model because it performed better in predicting fraudulent transactions, as measured by the confusion matrix. Based on the area under the ROC curve, logistic regression also showed greater ability to discriminate the transactions that are classified correctly from those that are not.
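
A minimal sketch of the comparison described above, on simulated data (GaussianNB stands in for the Naive Bayes classifier and all data below are invented placeholders; the dissertation uses proprietary bank data):

```python
# Naive Bayes vs. logistic regression with ten-fold cross-validation, AUC and confusion matrix.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score, confusion_matrix

# X: registration and transactional features, y: 1 = fraud, 0 = legitimate
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1.5).astype(int)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)   # ten-fold cross-validation
for name, model in [("Naive Bayes", GaussianNB()),
                    ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    proba = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
    auc = roc_auc_score(y, proba)                                  # area under the ROC curve
    cm = confusion_matrix(y, (proba >= 0.5).astype(int))           # confusion matrix at 0.5 cutoff
    print(name, "AUC:", round(auc, 3), "confusion matrix:", cm.tolist())
```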

Relevance: 10.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
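
The source of the heteroskedasticity is easy to formalize: under a standard DID error structure (an illustrative assumption) in which the group × time aggregate error is the average of N_{gt} individual errors with variance \sigma^2 plus a common group × time shock with variance \sigma_\eta^2,

\[
\operatorname{Var}\bigl(\bar{\varepsilon}_{gt}\bigr) = \sigma_\eta^2 + \frac{\sigma^2}{N_{gt}},
\]

so groups with more observations mechanically have less variable aggregate errors, which is why inference that treats all groups symmetrically under- or over-rejects depending on the relative size of the treated groups.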

Relevance: 10.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).

Relevance: 10.00%

Abstract:

This Master's thesis consists of one theoretical article and one empirical article in the field of microeconometrics. The first chapter, called "Synthetic Control Estimator: A Generalized Inference Procedure and Confidence Sets", contributes to the literature on inference techniques for the Synthetic Control Method. (For this chapter, we also thank useful suggestions by Marinho Bertanha, Gabriel Cepaluni, Brigham Frandsen, Dalia Ghanem, Ricardo Masini, Marcela Mello, Áureo de Paula, Cristine Pinto, Edson Severnini and seminar participants at the São Paulo School of Economics, the California Econometrics Conference 2015 and the 37th Brazilian Meeting of Econometrics.) This methodology was proposed to answer questions involving counterfactuals when only one treated unit and a few control units are observed. Although the method has been applied in many empirical works, the formal theory behind its inference procedure is still an open question. To fill this gap, we make clear the sufficient hypotheses that guarantee the adequacy of Fisher's exact hypothesis testing procedure for panel data, allowing us to test any sharp null hypothesis and, consequently, to propose a new way to estimate confidence sets for the synthetic control estimator by inverting a test statistic; these are the first confidence sets available when one only has access to finite-sample, aggregate-level data whose cross-sectional dimension may be larger than its time dimension. Moreover, we analyze the size and the power of the proposed test in a Monte Carlo experiment and find that test statistics that use the synthetic control method outperform test statistics commonly used in the evaluation literature. We also extend our framework to the cases in which we observe more than one outcome of interest (simultaneous hypothesis testing) or more than one treated unit (pooled intervention effect), and to the presence of heteroskedasticity. The second chapter, called "Free Economic Area of Manaus: An Impact Evaluation using the Synthetic Control Method", is an empirical article. We apply the synthetic control method to Brazilian city-level data covering the 20th century in order to evaluate the economic impact of the Free Economic Area of Manaus (FEAM). We find that this enterprise zone had positive and significant effects on real GDP per capita and services total production per capita, but also negative and significant effects on agriculture total production per capita. Our results suggest that this subsidy policy achieved its goal of promoting regional economic growth, even though it may have provoked misallocation of resources among economic sectors.
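
A minimal sketch of a synthetic-control placebo (permutation) test of the kind discussed above (illustrative only: the weight optimization below matches only pre-treatment outcomes, and the post/pre RMSPE-ratio statistic follows Abadie et al. (2010) rather than the thesis's generalized procedure):

```python
# Synthetic control weights plus a placebo permutation p-value.
import numpy as np
from scipy.optimize import minimize

def synth_weights(y_treated_pre, Y_donors_pre):
    """Convex donor weights that best reproduce the treated unit before treatment."""
    n_donors = Y_donors_pre.shape[1]
    loss = lambda w: np.sum((y_treated_pre - Y_donors_pre @ w) ** 2)
    res = minimize(loss, np.full(n_donors, 1.0 / n_donors),
                   bounds=[(0.0, 1.0)] * n_donors,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
                   method="SLSQP")
    return res.x

def rmspe_ratio(y, Y_donors, T0):
    """Post/pre ratio of root mean squared prediction errors for one unit."""
    w = synth_weights(y[:T0], Y_donors[:T0])
    gap = y - Y_donors @ w
    return np.sqrt(np.mean(gap[T0:] ** 2)) / np.sqrt(np.mean(gap[:T0] ** 2))

def placebo_pvalue(Y, treated_idx, T0):
    """Share of units (placebos included) with an RMSPE ratio at least as large
    as the treated unit's: the permutation p-value under the sharp null."""
    stats = []
    for j in range(Y.shape[1]):
        donors = np.delete(Y, j, axis=1)
        stats.append(rmspe_ratio(Y[:, j], donors, T0))
    stats = np.array(stats)
    return np.mean(stats >= stats[treated_idx])
```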

Relevance: 10.00%

Abstract:

Hebb proposed that synapses between neurons that fire synchronously are strengthened, forming cell assemblies and phase sequences. The former, on a shorter scale, are ensembles of synchronized cells that function transiently as a closed processing system; the latter, on a larger scale, correspond to the sequential activation of cell assemblies able to represent percepts and behaviors. Nowadays, the recording of large neuronal populations allows for the detection of multiple cell assemblies. Within Hebb's theory, the next logical step is the analysis of phase sequences. Here we detected phase sequences as consecutive assembly activation patterns, and then analyzed their graph attributes in relation to behavior. We investigated action potentials recorded from the adult rat hippocampus and neocortex before, during and after novel object exploration (experimental periods). Within assembly graphs, each assembly corresponded to a node, and each edge corresponded to the temporal sequence of consecutive node activations. The sum of all assembly activations was proportional to firing rates, but the activity of individual assemblies was not. Assembly repertoire was stable across experimental periods, suggesting that novel experience does not create new assemblies in the adult rat. Assembly graph attributes, on the other hand, varied significantly across behavioral states and experimental periods, and were separable enough to correctly classify experimental periods (Naïve Bayes classifier; maximum AUROCs ranging from 0.55 to 0.99) and behavioral states (waking, slow wave sleep, and rapid eye movement sleep; maximum AUROCs ranging from 0.64 to 0.98). Our findings agree with Hebb's view that assemblies correspond to primitive building blocks of representation, nearly unchanged in the adult, while phase sequences are labile across behavioral states and change after novel experience. The results are compatible with a role for phase sequences in behavior and cognition.
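
A minimal sketch of the assembly-graph construction described above (the activation sequence and the particular graph attributes below are placeholder assumptions, not the study's feature set):

```python
# Build a directed graph of consecutive assembly activations and extract simple attributes.
import networkx as nx

# Sequence of detected assembly activations within one experimental period
activations = ["A1", "A3", "A2", "A3", "A1", "A2", "A2", "A3"]

G = nx.DiGraph()
for src, dst in zip(activations[:-1], activations[1:]):
    # each edge records a temporal transition between consecutively active assemblies
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1
    else:
        G.add_edge(src, dst, weight=1)

# A few graph attributes that could feed a classifier of behavioral state
features = {
    "n_nodes": G.number_of_nodes(),
    "n_edges": G.number_of_edges(),
    "density": nx.density(G),
    "mean_out_degree": sum(d for _, d in G.out_degree()) / G.number_of_nodes(),
}
print(features)
```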

Relevance: 10.00%

Abstract:

INTRODUCTION: The dengue virus is transmitted by the bite of the Aedes aegypti mosquito, and the current control program does not achieve its goal of preventing transmission. This study aimed to analyze the relationship between the spatio-temporal distribution of dengue cases and larval indicators in the municipality of Tupã from January 2004 to December 2007. METHODS: Larval indicators were constructed per city block and for the municipality as a whole. The cross-lagged correlation method was used to assess the correlation between dengue cases and larval indicators, and a kernel estimator was used for the spatial analysis. RESULTS: The cross-lagged correlation between dengue cases and larval indicators was significant. Kernel estimator maps of container positivity indicate a heterogeneous distribution over the study period. In the two years with transmission, the epidemic occurred in different regions. CONCLUSIONS: No spatial relationship between larval infestation and dengue occurrence was evidenced. The incorporation of geoprocessing and spatial analysis techniques into the program, provided they are applied immediately after the field activities, can contribute to control actions by indicating the spatial clusters of highest incidence.
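
A minimal sketch of the two methods named above, cross-lagged correlation and a kernel (density) estimator, on simulated placeholder data (the monthly series and case coordinates are invented for illustration, not the Tupã data):

```python
# Cross-lagged correlation between dengue cases and a larval indicator, plus a spatial KDE.
import numpy as np
from scipy.stats import pearsonr, gaussian_kde

def cross_lagged_correlation(cases, larval_index, max_lag=6):
    """Pearson correlation of dengue cases with the larval index lagged by 0..max_lag months."""
    out = {}
    for lag in range(max_lag + 1):
        if lag == 0:
            r, p = pearsonr(cases, larval_index)
        else:
            r, p = pearsonr(cases[lag:], larval_index[:-lag])
        out[lag] = (r, p)
    return out

rng = np.random.default_rng(1)
larval = rng.poisson(5, size=48).astype(float)           # monthly larval indicator
cases = np.roll(larval, 2) * 3 + rng.normal(0, 2, 48)    # cases follow the indicator by ~2 months
print(cross_lagged_correlation(cases, larval))

# Kernel estimate of the spatial intensity of case locations (x, y coordinates)
coords = rng.normal(size=(2, 200))
kde = gaussian_kde(coords)
print(kde([[0.0], [0.0]]))                                # estimated density at the origin
```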