6 results for Panel-data
at Duke University
Abstract:
Recent empirical findings suggest that the long-run dependence in U.S. stock market volatility is best described by a slowly mean-reverting fractionally integrated process. The present study complements this existing time-series-based evidence by comparing the risk-neutralized option pricing distributions from various ARCH-type formulations. Utilizing a panel data set consisting of newly created exchange-traded long-term equity anticipation securities, or LEAPS, on the Standard and Poor's 500 stock market index with times to maturity ranging up to three years, we find that the degree of mean reversion in the volatility process implicit in these prices is best described by a Fractionally Integrated EGARCH (FIEGARCH) model. © 1999 Elsevier Science S.A. All rights reserved.
Abstract:
There is a general presumption in the literature and among policymakers that immigrant remittances play the same role in economic development as foreign direct investment and other capital flows, but this is an open question. We develop a model of remittances based on the economics of the family that implies that remittances are not profit-driven, but are compensatory transfers, and should have a negative correlation with GDP growth. This is in contrast to the positive correlation of profit-driven capital flows with GDP growth. We test this implication of our model using a new panel data set on remittances and find a robust negative correlation between remittances and GDP growth. This indicates that remittances may not be intended to serve as a source of capital for economic development. © 2005 International Monetary Fund.
Abstract:
Does environmental regulation impair international competitiveness of pollution-intensive industries to the extent that they relocate to countries with less stringent regulation, turning those countries into "pollution havens"? We test this hypothesis using panel data on outward foreign direct investment (FDI) flows of various industries in the German manufacturing sector and account for several econometric issues that have been ignored in previous studies. Most importantly, we demonstrate that externalities associated with FDI agglomeration can bias estimates away from finding a pollution haven effect if omitted from the analysis. We include the stock of inward FDI as a proxy for agglomeration and employ a GMM estimator to control for endogenous time-varying determinants of FDI flows. Furthermore, we propose a difference estimator based on the least polluting industry to break the possible correlation between environmental regulatory stringency and unobservable attributes of FDI recipients in the cross-section. When accounting for these issues we find robust evidence of a pollution haven effect for the chemical industry. © 2008 Springer Science+Business Media B.V.
Abstract:
We investigate the applicability of the present-value asset pricing model to fishing quota markets by applying instrumental variable panel data estimation techniques to 15 years of market transactions from New Zealand's individual transferable quota (ITQ) market. In addition to the influence of current fishing rents, we explore the effect of market interest rates, risk, and expected changes in future rents on quota asset prices. The results indicate that quota asset prices are positively related to declines in interest rates, lower levels of risk, expected increases in future fish prices, and expected cost reductions from rationalization under the quota system. © 2007 American Agricultural Economics Association.
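The present-value model referenced above prices a quota share as the discounted stream of expected future fishing rents. A standard textbook formulation (the notation here is illustrative, not taken from the article) is:

```latex
P_t \;=\; \mathbb{E}_t\!\left[\sum_{s=1}^{\infty} \frac{R_{t+s}}{(1+r)^{s}}\right]
```

where \(P_t\) is the quota asset price, \(R_{t+s}\) the per-unit fishing rent expected \(s\) periods ahead, and \(r\) the market interest rate. The relation makes clear why lower interest rates, lower risk, and expected growth in future rents would each raise quota asset prices, consistent with the findings reported.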
Abstract:
Today, the trend toward decentralization is far-reaching. Proponents of decentralization have argued that it promotes responsive and accountable local government by shortening the distance between local representatives and their constituents. In this paper, however, I focus on a countervailing effect of decentralization on the accountability mechanism, arguing that decentralization, by increasing the number of actors eligible for policy making and implementation in governance as a whole, may blur lines of responsibility and thus weaken citizens' ability to sanction governments in elections. Using an ordinary least squares (OLS) interaction model based on historical panel data for 78 countries over the 2002–2010 period, I test the hypothesis that as the number of government tiers increases, the interaction between the number of government tiers and decentralization policies turns negative. The regression results provide empirical evidence that decentralization policies, which have a positive impact on governance under relatively simple forms of multilevel governance, no longer have statistically significant effects once the complexity of government structure exceeds a certain degree. In particular, the paper finds that the presence of intergovernmental meetings with legally binding authority has a negative impact on governance when the complexity of government structure reaches its highest level.
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could devote substantial resources to obtaining high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or the survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when analysis is based only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
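The record-augmentation step described above can be sketched in a few lines of pandas-style code. This is a minimal illustration of the idea, assuming one categorical margin and a deterministic allocation of synthetic records; the function name and interface are hypothetical, not taken from the thesis:

```python
import numpy as np
import pandas as pd

def augment_with_margin(df, var, prior_probs, n_aug):
    """Append n_aug synthetic records whose values of `var` follow the
    prior marginal probabilities; all other variables are left missing.
    A larger n_aug encodes stronger prior certainty about the margin."""
    categories = list(prior_probs)
    # Allocate synthetic records to categories in proportion to the prior margin.
    counts = np.round(np.array([prior_probs[c] for c in categories]) * n_aug).astype(int)
    values = np.repeat(categories, counts)
    # Synthetic rows: every column missing except the constrained margin.
    synth = pd.DataFrame({col: np.nan for col in df.columns}, index=range(len(values)))
    synth[var] = values
    return pd.concat([df, synth], ignore_index=True)

# Hypothetical example: prior belief about the margin of an education variable.
df = pd.DataFrame({"educ": ["hs", "ba", "hs", "ba"],
                   "emp":  ["y", "n", "y", "y"]})
out = augment_with_margin(df, "educ", {"hs": 0.5, "ba": 0.3, "ma": 0.2}, n_aug=10)
```

In a full analysis, an MCMC sampler for the latent class model would then treat the missing entries of the synthetic rows as ordinary missing data, which is what ties the augmented records back to the posterior inference described above.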
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.