15 results for phylogeographical hypothesis testing

in Repositório digital da Fundação Getúlio Vargas - FGV


Relevance:

80.00%

Abstract:

The broader objective of this study can be articulated in the following specific aims: to measure consumer attitudes toward brands displayed through product placement, and to assess the recall, recognition, and purchase intentions that product placement generates. In addition, the study examines the differences and similarities between the behavior of Brazilian and American consumers under the influence of product placements. The study targeted consumer audiences in Brazil and the U.S. A range of modeling setups was used to align the study instruments and hypotheses with the research objectives. The study focused on the following hypothesized models. H1: Consumers/participants who viewed the brands/products in the movie have higher brand/product recall than those who did not. H2: U.S. consumers/participants recognize and recall brands/products appearing in the background of the movie better than Brazilian consumers/participants. H3: Consumers/participants from the U.S. are more accepting of product placements than their counterparts in Brazil. H4: There are discernible similarities in brand attitudes and purchase intentions between consumers/participants from the U.S. and Brazil despite their different countries of origin. Cronbach's alpha was used to assess the reliability of the survey instruments. Structural Equation Modeling (SEM) was used for hypothesis testing, and Confirmatory Factor Analysis (CFA), rather than Exploratory Factor Analysis (EFA) or Principal Component Analysis (PCA), was used to assess convergent and discriminant validity. This supported the subsequent use of regression, chi-square, and t-tests. Only hypothesis H3 was rejected; the rest were not. The t-tests provided insight into significant differences within specific subgroups. In the SEM estimation, the error variance for product placement attitudes was negative for both groups (a Heywood case), and this was corrected before proceeding. The researcher used both quantitative and qualitative approaches, collecting primary data through closed-ended questionnaires and interviews, respectively. The results were also tabulated. It can be concluded that product placement effects in the U.S. differ markedly from those in Brazil, based on the range of factors examined in the study, although there are elements of convergence, probably driven by convergence in technology. For product placement to become more competitive in promotional marketing, researchers will need to extend their focus beyond the traditional variables and add knowledge of marketplace factors, that is, the sell-ability of product placement technologies and strategies.
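
As a side note on the reliability measure mentioned above, here is a minimal sketch of the standard Cronbach's alpha computation, with hypothetical Likert-scale responses (the actual survey items and data are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents, 4 items
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print(f"alpha = {cronbach_alpha(responses):.3f}")
```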

Relevance:

80.00%

Abstract:

This dissertation deals with the problem of making inference when there is weak identification in instrumental variables regression models. More specifically, we are interested in one-sided hypothesis testing for the coefficient of the endogenous variable when the instruments are weak. The focus is on conditional tests based on the likelihood ratio, score, and Wald statistics. Theoretical and numerical work shows that the conditional t-test based on the two-stage least squares (2SLS) estimator performs well even when the instruments are weakly correlated with the endogenous variable. The conditional approach corrects the size uniformly, and when the population F-statistic is as small as two, its power is near the power envelopes for similar and non-similar tests. This finding is surprising considering the poor performance of the two-sided conditional t-tests found in Andrews, Moreira and Stock (2007). Given this counterintuitive result, we propose novel two-sided t-tests that are approximately unbiased and can perform as well as the conditional likelihood ratio (CLR) test of Moreira (2003).
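
To make the weak-instrument setting concrete, here is a small simulation sketch of the 2SLS t-statistic and the first-stage F-statistic (a common strength diagnostic). This illustrates the environment only, not the conditional tests developed in the dissertation; all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 4                      # sample size, number of instruments
pi = 0.05 * np.ones(k)             # weak first-stage coefficients
beta = 1.0                         # true structural coefficient

Z = rng.standard_normal((n, k))
u = rng.standard_normal(n)                 # structural error
v = 0.8 * u + 0.6 * rng.standard_normal(n) # correlated -> endogeneity
x = Z @ pi + v
y = beta * x + u

# 2SLS: project x on Z, then use the fitted values
Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
x_hat = Pz @ x
beta_2sls = (x_hat @ y) / (x_hat @ x)
resid = y - beta_2sls * x
sigma2 = resid @ resid / (n - 1)
se = np.sqrt(sigma2 / (x_hat @ x))
t_stat = (beta_2sls - beta) / se   # t-statistic evaluated at the true value

# first-stage F measures instrument strength
v_hat = x - x_hat
F = (x_hat @ x_hat / k) / (v_hat @ v_hat / (n - k))
print(f"2SLS = {beta_2sls:.3f}, t = {t_stat:.2f}, first-stage F = {F:.2f}")
```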

Relevance:

80.00%

Abstract:

This dissertation focuses on spatial stochastic processes defined on a lattice, the so-called Cliff & Ord models. My contribution consists in using Edgeworth and saddlepoint approximations to investigate the finite-sample properties of the test for detecting spatial dependence in SAR (spatial autoregressive) models, and in proposing a new class of spatial econometric models in which the parameters affecting the mean structure are distinct from the parameters in the variance structure of the process. This allows a clearer interpretation of the model parameters and generalizes the taxonomy proposed by Anselin (2003). I propose an estimator for the model parameters and derive its asymptotic distribution. The model suggested in the dissertation provides an interesting interpretation of the SARAR model, which is quite common in the literature. The investigation of the finite-sample properties of the tests extends the literature by allowing the neighborhood matrix of the spatial process to be a nonlinear function of the spatial dependence parameter. Using approximations instead of simulations (the more common approach in the literature) provides an easy way to compare the properties of the tests under different neighborhood matrices and to correct the size when comparing the power of the tests. I obtain an optimal invariant test that is also locally uniformly most powerful invariant (LUMPI). I construct the power envelope for the LUMPI test and show that it is virtually UMP, since the power of the test is very close to the envelope (for the spatial structures defined in the dissertation). I suggest a practical procedure for constructing a test with good power in a range of situations where the LUMPI test may not have good properties. I conclude that the power of the test increases with the sample size and with the spatial dependence parameter, in line with the literature. However, I dispute the consensus view that the power of the test decreases as the neighborhood matrix becomes denser. This reflects a common measurement error in the literature, since the statistical distance between the null and the alternative hypotheses varies greatly with the structure of the matrix. After making the correction, I conclude that the power of the test increases with the distance of the alternative from the null, as expected.
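
As a rough illustration of the object under test, the sketch below simulates a SAR process on a small lattice and computes Moran's I, a classical statistic for detecting spatial dependence. The dissertation's Edgeworth and saddlepoint analysis goes far beyond this; the lattice and parameter values here are hypothetical:

```python
import numpy as np

def morans_I(e: np.ndarray, W: np.ndarray) -> float:
    """Moran's I statistic for a (demeaned) vector e and weight matrix W."""
    n = len(e)
    S0 = W.sum()
    return (n / S0) * (e @ W @ e) / (e @ e)

# Hypothetical 4x4 lattice with rook (up/down/left/right) contiguity
side = 4
n = side * side
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        a = i * side + j
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ii, jj = i + di, j + dj
            if 0 <= ii < side and 0 <= jj < side:
                W[a, ii * side + jj] = 1.0
W = W / W.sum(axis=1, keepdims=True)   # row-standardize

rng = np.random.default_rng(1)
rho = 0.5                              # spatial dependence parameter
# SAR process: y = (I - rho W)^{-1} eps
eps = rng.standard_normal(n)
y = np.linalg.solve(np.eye(n) - rho * W, eps)
print(f"Moran's I = {morans_I(y - y.mean(), W):.3f}")
```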

Relevance:

80.00%

Abstract:

This paper constructs a unit root test based on partially adaptive estimation, which is shown to be robust against non-Gaussian innovations. We show that the limiting distribution of the t-statistic is a convex combination of the standard normal and Dickey-Fuller (DF) distributions. Convergence to the DF distribution is obtained when the innovations are Gaussian, implying that the traditional ADF test is a special case of the proposed test. Monte Carlo experiments indicate that, if the innovations have a heavy-tailed distribution or are contaminated by outliers, the proposed test is more powerful than the traditional ADF test. Nominal interest rates (at different maturities) are shown to be stationary according to the robust test but not according to the nonrobust ADF test. This result suggests that the failure to reject the null of a unit root in nominal interest rates may be due to the use of estimation and hypothesis testing procedures that do not account for the absence of Gaussianity in the data. Our results validate practical restrictions on the behavior of the nominal interest rate imposed by CCAPM, optimal monetary policy, and option pricing models.
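
For reference, here is a minimal sketch of the Dickey-Fuller t-statistic computed by OLS, applied to a random walk with heavy-tailed innovations; the paper's test replaces OLS with a partially adaptive estimator, which is not reproduced here:

```python
import numpy as np

def df_tstat(y: np.ndarray) -> float:
    """t-statistic on rho in the Dickey-Fuller regression
    dy_t = c + rho * y_{t-1} + e_t (no augmentation lags)."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ coef
    sigma2 = resid @ resid / (len(dy) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return coef[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(2)
# random walk with heavy-tailed (Student-t, 3 df) innovations
y = np.cumsum(rng.standard_t(3, size=300))
print(f"DF t-statistic: {df_tstat(y):.2f}")  # compare with DF critical values
```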

Relevance:

80.00%

Abstract:

This paper provides a systematic and unified treatment of developments in the area of kernel estimation in econometrics and statistics. Both estimation and hypothesis testing issues are discussed for nonparametric and semiparametric regression models. A discussion of the choice of window width is also presented.
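
As a simple instance of the kernel methods surveyed, the sketch below implements a Nadaraya-Watson regression estimator with a Gaussian kernel and shows how the window width h changes the fit; data and bandwidth values are hypothetical:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Kernel regression estimate m(x0) with a Gaussian kernel, bandwidth h."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
x = rng.uniform(0, 2 * np.pi, 200)
y = np.sin(x) + 0.3 * rng.standard_normal(200)
grid = np.linspace(0, 2 * np.pi, 5)
for h in (0.1, 0.5, 1.0):   # window width sets the bias-variance trade-off
    print(h, np.round(nadaraya_watson(grid, x, y, h), 2))
```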

Relevance:

80.00%

Abstract:

This dissertation studies inference using generalized method of moments (GMM) estimation based on instruments. The motivation for the study is that, under weak identification of the parameters, traditional inference can lead to misleading results. We therefore review the most common tests for overcoming this problem and present the frameworks proposed by Moreira (2002), Moreira & Moreira (2013), and Kleibergen (2005). The work then reconciles the statistics these authors use for inference, rewrites the score test proposed in Kleibergen (2005) using the statistics of Moreira & Moreira (2013), and obtains the optimal score test statistic using the asymptotic theory in Newey & McFadden (1984). In addition, we show the equivalence between the GMM approach and the approach that uses a system of equations and likelihood to address the weak identification problem.
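
For context, here is a sketch of the Anderson-Rubin statistic, a standard weak-identification-robust test in the linear IV case; it is a benchmark in this literature, not the optimal score test derived in the dissertation, and the simulated design is hypothetical:

```python
import numpy as np
from scipy import stats

def anderson_rubin(y, x, Z, beta0):
    """AR statistic for H0: beta = beta0 in y = x*beta + u with instruments Z.
    Its F distribution under H0 holds regardless of instrument strength."""
    n, k = Z.shape
    e = y - x * beta0
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    ar = (e @ Pz @ e / k) / (e @ (e - Pz @ e) / (n - k))
    pval = 1 - stats.f.cdf(ar, k, n - k)
    return ar, pval

rng = np.random.default_rng(4)
n, k = 400, 3
Z = rng.standard_normal((n, k))
u = rng.standard_normal(n)
x = Z @ (0.1 * np.ones(k)) + 0.7 * u + rng.standard_normal(n)  # weak, endogenous
y = 1.0 * x + u
print(anderson_rubin(y, x, Z, beta0=1.0))  # should not over-reject at the truth
```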

Relevance:

80.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when the number of observations per group varies, inference methods designed for few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups (and to under-reject when they are large). This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing when there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity, and we propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
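
The mechanism driving the size distortion can be seen in a few lines of simulation: the variance of a group × time cell mean is sigma^2 / n_g, so unequal group sizes translate directly into heteroskedasticity in the aggregate model. A minimal sketch with hypothetical group sizes:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma2 = 1.0
group_sizes = [20, 20, 2000, 2000]   # small vs. large groups

# Variance of each group x time cell mean is sigma^2 / n_g:
# small groups are noisy, large groups are precise -> heteroskedasticity
for n_g in group_sizes:
    means = [rng.standard_normal(n_g).mean() for _ in range(5000)]
    print(f"n_g = {n_g:4d}: var of cell mean = {np.var(means):.5f} "
          f"(theory: {sigma2 / n_g:.5f})")
```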

Relevance:

80.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when the number of observations per group varies, inference methods designed for few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups (and to under-reject when they are large). This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing when there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).

Relevance:

80.00%

Abstract:

This Master's thesis consists of one theoretical article and one empirical article in the field of microeconometrics. The first chapter, "Synthetic Control Estimator: A Generalized Inference Procedure and Confidence Sets", contributes to the literature on inference techniques for the Synthetic Control Method. This methodology was proposed to answer questions involving counterfactuals when only one treated unit and a few control units are observed. Although the method has been applied in many empirical works, the formal theory behind its inference procedure is still an open question. To fill this gap, we make clear the sufficient hypotheses that guarantee the adequacy of Fisher's exact hypothesis testing procedure for panel data, allowing us to test any sharp null hypothesis and, consequently, to propose a new way to estimate confidence sets for the synthetic control estimator by inverting a test statistic: the first confidence set available when we have access only to finite-sample, aggregate-level data whose cross-sectional dimension may be larger than its time dimension. Moreover, we analyze the size and power of the proposed test with a Monte Carlo experiment and find that test statistics that use the synthetic control method outperform test statistics commonly used in the evaluation literature. We also extend our framework to cases where we observe more than one outcome of interest (simultaneous hypothesis testing) or more than one treated unit (pooled intervention effect), and to cases where heteroskedasticity is present. The second chapter, "Free Economic Area of Manaus: An Impact Evaluation using the Synthetic Control Method", is an empirical article. We apply the synthetic control method to Brazilian city-level data over the 20th century to evaluate the economic impact of the Free Economic Area of Manaus (FEAM). We find that this enterprise zone had significant positive effects on real GDP per capita and total services production per capita, but significant negative effects on total agricultural production per capita. Our results suggest that this subsidy policy achieved its goal of promoting regional economic growth, even though it may have caused misallocation of resources across economic sectors.
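
A minimal sketch of the Fisher-style placebo logic behind such tests: compute a statistic (here the post/pre-treatment RMSPE ratio used by Abadie et al.) for the treated unit and for every control unit treated as a placebo, and take the treated unit's rank as the p-value. The numbers below are hypothetical; this is the generic procedure, not the thesis's generalized version:

```python
import numpy as np

def placebo_pvalue(ratios: np.ndarray, treated_idx: int) -> float:
    """Fisher-style p-value: rank of the treated unit's statistic among all
    units' placebo statistics (post/pre-treatment RMSPE ratios)."""
    r_treated = ratios[treated_idx]
    return np.mean(ratios >= r_treated)

# Hypothetical post/pre RMSPE ratios: unit 0 is treated, the rest are placebos
rng = np.random.default_rng(6)
ratios = np.concatenate([[8.5], rng.uniform(0.5, 3.0, size=19)])
print(f"p-value = {placebo_pvalue(ratios, treated_idx=0):.3f}")  # 1/20 = 0.05
```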

Relevance:

40.00%

Abstract:

The initial endogenous growth models emphasized the importance of external effects in explaining sustainable growth over time. Empirically, this hypothesis can be confirmed if the coefficient of physical capital per hour is unity in the aggregate production function. Although cross-section results concur with the theory, previous estimates using time series data rejected this hypothesis, showing a small coefficient far from unity. It seems that the problem lies not with the theory but with the techniques employed, which are unable to capture low-frequency movements in high-frequency data. This paper uses cointegration, a technique designed to capture the existence of long-run relationships in multivariate time series, to test the externalities hypothesis of endogenous growth. The results confirm the theory and conform to previous cross-section estimates. We show that there is long-run proportionality between output per hour and a measure of capital per hour. Using this result, we confirm the hypothesis that the implied Solow residual can be explained by government expenditures on infrastructure, which suggests a supply-side role for government in affecting productivity and a decrease in the extent to which the Solow residual explains the variation of output.
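
A minimal sketch of the kind of long-run proportionality test described, using a residual-based Engle-Granger cointegration test on simulated data where the true coefficient is unity; the paper's multivariate analysis is richer, and the simulated series are hypothetical stand-ins for output and capital per hour:

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(7)
T = 300
# log capital per hour: a random walk with drift
k = np.cumsum(0.01 + 0.02 * rng.standard_normal(T))
# log output per hour: cointegrated with k with a unit coefficient
y = k + 0.05 * rng.standard_normal(T)

tstat, pval, _ = coint(y, k)     # Engle-Granger residual-based test
slope = np.polyfit(k, y, 1)[0]   # long-run coefficient (super-consistent OLS)
print(f"EG t = {tstat:.2f}, p = {pval:.3f}, estimated coefficient = {slope:.3f}")
```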

Relevance:

40.00%

Abstract:

The aim of this paper is to test whether there is evidence of contagion across the various financial crises that assailed several countries in the 1990s. Data on sovereign debt bonds for Brazil, Mexico, Russia, and Argentina are used to implement the test. The contagion hypothesis is tested using multivariate volatility models: if there is evidence of a structural break in volatility that can be linked to the financial crises, the contagion hypothesis is confirmed. The results suggest that there is evidence in favor of the contagion hypothesis.
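
As a crude univariate stand-in for the multivariate volatility modeling described, the sketch below tests for a variance break at a candidate crisis date with a two-sided F-test; the break date, series, and magnitudes are hypothetical:

```python
import numpy as np
from scipy import stats

def variance_break_test(returns: np.ndarray, break_idx: int):
    """F-test for equal variance before/after a candidate crisis date.
    A crude univariate stand-in for the paper's multivariate models."""
    a, b = returns[:break_idx], returns[break_idx:]
    F = np.var(b, ddof=1) / np.var(a, ddof=1)
    df1, df2 = len(b) - 1, len(a) - 1
    pval = 2 * min(stats.f.sf(F, df1, df2), stats.f.cdf(F, df1, df2))
    return F, pval

rng = np.random.default_rng(8)
# simulated bond returns whose volatility doubles after the "crisis"
r = np.concatenate([rng.normal(0, 1.0, 250), rng.normal(0, 2.0, 250)])
print(variance_break_test(r, break_idx=250))
```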

Relevance:

40.00%

Abstract:

This paper investigates whether a multivariate cointegrated process with structural change can describe the Brazilian term structure of interest rates from 1995 to 2006. In this work, the break points and the number of cointegrating vectors are assumed to be known. The estimated model has four regimes, only three of which are statistically different. The first runs from the beginning of the sample until September 1997, the second from October 1997 until December 1998, and the third from January 1999 until the end of the sample. Monthly data are used. Models that allow for some similarities across the regimes are also estimated and tested. The models are estimated using the Generalized Reduced-Rank Regressions developed by Hansen (2003). All imposed restrictions can be tested using likelihood ratio tests with standard asymptotic chi-squared distributions. The results of the paper show evidence in favor of the long-run implications of the expectations hypothesis for Brazil.
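
Hansen's (2003) Generalized Reduced-Rank Regression with regime-specific restrictions is not, to my knowledge, available in standard packages; as the nearest off-the-shelf tool, here is a sketch of a full-sample Johansen cointegration rank test on simulated term-structure-like data (hypothetical series):

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(9)
T = 240   # e.g., 20 years of monthly observations
# two interest rates sharing one stochastic trend (term-structure style)
trend = np.cumsum(rng.standard_normal(T))
rates = np.column_stack([trend + 0.3 * rng.standard_normal(T),
                         trend + 0.5 + 0.3 * rng.standard_normal(T)])

res = coint_johansen(rates, det_order=0, k_ar_diff=1)
print("trace statistics:", np.round(res.lr1, 2))
print("95% critical values:", res.cvt[:, 1])   # columns: 90%, 95%, 99%
```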

Relevance:

30.00%

Abstract:

This paper proposes unit root tests based on partially adaptive estimation. The proposed tests provide an intermediate class of inference procedures that are more efficient than the traditional OLS-based methods and simpler than unit root tests based on fully adaptive estimation using nonparametric methods. The limiting distribution of the proposed test is a combination of the standard normal and the traditional Dickey-Fuller (DF) distributions, and it includes the traditional ADF test as a special case when the Gaussian density is used. Taking into account the well-documented heavy-tail behavior of economic and financial data, we consider unit root tests coupled with a class of partially adaptive M-estimators based on Student-t distributions, which includes the normal distribution as a limiting case. Monte Carlo experiments indicate that, in the presence of heavy-tailed distributions or innovations contaminated by outliers, the proposed test is more powerful than the traditional ADF test. We apply the proposed test to several macroeconomic time series that have heavy-tailed distributions. The unit root hypothesis is rejected for U.S. real GNP, supporting the literature on transitory shocks in output. However, evidence against unit roots is not found in the real exchange rate or the nominal interest rate, even when heavy tails are taken into account.
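
A minimal sketch of the partially adaptive idea: estimate the Dickey-Fuller regression by minimizing a Student-t negative log-likelihood rather than by OLS, so that heavy-tailed innovations are downweighted. The degrees-of-freedom choice and data below are hypothetical, and the paper's actual test statistic construction is not reproduced:

```python
import numpy as np
from scipy.optimize import minimize

def t_mestimate(y: np.ndarray, nu: float = 3.0) -> np.ndarray:
    """Estimate dy_t = c + rho * y_{t-1} + e_t by minimizing the Student-t
    negative log-likelihood of the errors (partially adaptive M-estimation)."""
    dy, ylag = np.diff(y), y[:-1]

    def nll(params):
        c, rho, log_s = params
        e = (dy - c - rho * ylag) / np.exp(log_s)
        # per-observation Student-t NLL (constants dropped) plus log scale
        return np.sum(0.5 * (nu + 1) * np.log1p(e**2 / nu) + log_s)

    res = minimize(nll, x0=np.array([0.0, 0.0, 0.0]), method="Nelder-Mead")
    return res.x

rng = np.random.default_rng(10)
y = np.cumsum(rng.standard_t(3, size=300))   # unit root, heavy-tailed shocks
c, rho, log_s = t_mestimate(y)
print(f"rho = {rho:.4f}")   # near zero under the unit-root null
```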

Relevance:

30.00%

Abstract:

In this paper, we show that widely used stationarity tests such as the KPSS test have power close to size in the presence of time-varying unconditional variance. We propose a new test as a complement to the existing tests. Monte Carlo experiments show that the proposed test possesses the following characteristics: (i) in the presence of a unit root or a structural change in the mean, the proposed test is as powerful as the KPSS and other tests; (ii) in the presence of a changing variance, the traditional tests perform badly, whereas the proposed test has high power compared to the existing tests; (iii) the proposed test has the same size as traditional stationarity tests under the null hypothesis of stationarity. An application to daily observations of the return on the US Dollar/Euro exchange rate reveals the existence of instability in the unconditional variance when the entire sample is considered, but stability is found in subsamples.
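
A minimal simulation of the design the paper describes, using statsmodels' standard KPSS implementation (the paper's proposed complementary test is not, to my knowledge, publicly packaged): a mean-stationary series with a mid-sample variance shift:

```python
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(11)
# mean-stationary series whose unconditional variance quadruples mid-sample
x = np.concatenate([rng.normal(0, 1.0, 500), rng.normal(0, 2.0, 500)])

stat, pval, lags, crit = kpss(x, regression="c", nlags="auto")
# per the paper's argument, KPSS often fails to flag this kind of instability
print(f"KPSS stat = {stat:.3f}, p ~ {pval:.3f}")
```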

Relevance:

30.00%

Abstract:

In this paper, we show that widely used stationarity tests such as the KPSS test have power close to size in the presence of time-varying unconditional variance. We propose a new test as a complement to the existing tests. Monte Carlo experiments show that the proposed test possesses the following characteristics: (i) in the presence of a unit root or a structural change in the mean, the proposed test is as powerful as the KPSS and other tests; (ii) in the presence of a changing variance, the traditional tests perform badly, whereas the proposed test has high power compared to the existing tests; (iii) the proposed test has the same size as traditional stationarity tests under the null hypothesis of covariance stationarity. An application to daily observations of the return on the US Dollar/Euro exchange rate reveals the existence of instability in the unconditional variance when the entire sample is considered, but stability is found in sub-samples.