13 results for ESTIMATOR

at Duke University


Relevance:

10.00%

Abstract:

A popular way to account for unobserved heterogeneity is to assume that the data are drawn from a finite mixture distribution. A barrier to using finite mixture models is that parameters that could previously be estimated in stages must now be estimated jointly: using mixture distributions destroys any additive separability of the log-likelihood function. We show, however, that an extension of the EM algorithm reintroduces additive separability, thus allowing one to estimate parameters sequentially during each maximization step. In establishing this result, we develop a broad class of estimators for mixture models. Returning to the likelihood problem, we show that, relative to full information maximum likelihood, our sequential estimator can generate large computational savings with little loss of efficiency.
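
To make the separability point concrete, here is a minimal sketch (a toy instance of the idea, not the paper's general sequential estimator): a two-component Gaussian mixture fitted by EM, where the E-step's responsibilities turn the expected complete-data log-likelihood into a sum, so the M-step can update each parameter block on its own.

```python
# Toy sketch (not the paper's estimator): EM for a two-component Gaussian
# mixture. Conditional on the E-step responsibilities, the expected
# complete-data log-likelihood is additively separable, so the M-step can
# update each parameter block sequentially.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.5, 500)])

w = np.array([0.5, 0.5])      # mixing weights
mu = np.array([-1.0, 1.0])    # component means
sd = np.array([1.0, 1.0])     # component standard deviations

for _ in range(100):
    # E-step: posterior probability that each point came from each component.
    dens = w * norm.pdf(x[:, None], mu, sd)          # shape (n, 2)
    r = dens / dens.sum(axis=1, keepdims=True)

    # Sequential M-step: each block solves its own maximization.
    w = r.mean(axis=0)                                # block 1: weights
    mu = (r * x[:, None]).sum(0) / r.sum(0)           # block 2: means
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(0) / r.sum(0))  # block 3

print(w, mu, sd)  # should recover roughly (0.5, 0.5), (-2, 3), (1, 1.5)
```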

Relevance:

10.00%

Abstract:

We exploit the distributional information contained in high-frequency intraday data in constructing a simple conditional moment estimator for stochastic volatility diffusions. The estimator is based on the analytical solutions of the first two conditional moments for the latent integrated volatility, the realization of which is effectively approximated by the sum of the squared high-frequency increments of the process. Our simulation evidence indicates that the resulting GMM estimator is highly reliable and accurate. Our empirical implementation based on high-frequency five-minute foreign exchange returns suggests the presence of multiple latent stochastic volatility factors and possible jumps.
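
A hedged sketch of the two ingredients follows; the helper names are mine, and the moment functions are left abstract since they depend on the assumed diffusion.

```python
# Sketch under stated assumptions: (1) realized variance as a proxy for the
# latent integrated variance, and (2) a GMM criterion matching the first two
# conditional moments. `model_moments` is a hypothetical placeholder for the
# model-implied moments of the assumed volatility diffusion.
import numpy as np

def realized_variance(intraday_returns):
    """Sum of squared high-frequency (e.g. five-minute) returns, day by day.
    intraday_returns: array of shape (n_days, n_intervals)."""
    return (intraday_returns ** 2).sum(axis=1)

def gmm_objective(theta, rv, model_moments, W):
    """g(theta)' W g(theta) for the two conditional moment restrictions.
    model_moments(theta, rv) must return arrays (m1, m2), aligned with
    rv[1:], giving the model-implied E[IV_t | past] and E[IV_t^2 | past]."""
    m1, m2 = model_moments(theta, rv)
    g = np.array([np.mean(rv[1:] - m1), np.mean(rv[1:] ** 2 - m2)])
    return g @ W @ g
```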

Relevance:

10.00%

Abstract:

Empirical modeling of high-frequency currency market data reveals substantial evidence for nonnormality, stochastic volatility, and other nonlinearities. This paper investigates whether an equilibrium monetary model can account for nonlinearities in weekly data. The model incorporates time-nonseparable preferences and a transaction cost technology. Simulated sample paths are generated using Marcet's parameterized expectations procedure. The paper also develops a new method for estimation of structural economic models. The method forces the model to match (under a GMM criterion) the score function of a nonparametric estimate of the conditional density of observed data. The estimation uses weekly U.S.-German currency market data, 1975-90.
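
The score-matching idea can be stated compactly; the notation below is mine, not the paper's. First fit an auxiliary conditional density f(y|x; η) to the observed data, then pick structural parameters θ so that data simulated from the model sets the auxiliary score to (approximately) zero under a GMM metric.

```latex
% Compact statement of the score-matching criterion; notation is illustrative.
\[
\hat{\eta} = \arg\max_{\eta}\ \frac{1}{T}\sum_{t=1}^{T}\log f(y_t \mid x_t;\eta)
\quad \text{(auxiliary density fit to observed data)},
\]
\[
m_N(\theta) = \frac{1}{N}\sum_{\tau=1}^{N}
  \frac{\partial}{\partial\eta}\,
  \log f\!\bigl(\hat{y}_\tau(\theta)\mid \hat{x}_\tau(\theta);\hat{\eta}\bigr),
\qquad
\hat{\theta} = \arg\min_{\theta}\ m_N(\theta)'\,\hat{W}\,m_N(\theta),
\]
% where \{\hat{y}_\tau(\theta), \hat{x}_\tau(\theta)\} is a long sample
% simulated from the structural model at parameter value \theta.
```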

Relevance:

10.00%

Abstract:

Does environmental regulation impair international competitiveness of pollution-intensive industries to the extent that they relocate to countries with less stringent regulation, turning those countries into "pollution havens"? We test this hypothesis using panel data on outward foreign direct investment (FDI) flows of various industries in the German manufacturing sector and account for several econometric issues that have been ignored in previous studies. Most importantly, we demonstrate that externalities associated with FDI agglomeration can bias estimates away from finding a pollution haven effect if omitted from the analysis. We include the stock of inward FDI as a proxy for agglomeration and employ a GMM estimator to control for endogenous time-varying determinants of FDI flows. Furthermore, we propose a difference estimator based on the least polluting industry to break the possible correlation between environmental regulatory stringency and unobservable attributes of FDI recipients in the cross-section. When accounting for these issues we find robust evidence of a pollution haven effect for the chemical industry.
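
The proposed difference estimator can be made concrete with a small sketch; the column names and the "food" baseline industry are illustrative, not the paper's.

```python
# Hedged sketch of the difference-estimator idea: within each destination
# country and year, subtract the outcome of the least polluting industry,
# differencing out unobserved destination attributes that are shared across
# industries. Column names and the baseline industry are illustrative.
import pandas as pd

def difference_vs_cleanest(df, cleanest="food"):
    """df columns: country, year, industry, fdi (e.g. log outward FDI flow)."""
    base = (df[df.industry == cleanest]
            .set_index(["country", "year"])["fdi"]
            .rename("fdi_base"))
    out = df[df.industry != cleanest].join(base, on=["country", "year"])
    out["fdi_diff"] = out["fdi"] - out["fdi_base"]
    return out  # then regress fdi_diff on regulatory stringency and controls
```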

Relevance:

10.00%

Abstract:

This paper provides a root-n consistent, asymptotically normal weighted least squares estimator of the coefficients in a truncated regression model. The distribution of the errors is unknown and permits general forms of unknown heteroskedasticity. Also provided is an instrumental variables based two-stage least squares estimator for this model, which can be used when some regressors are endogenous, mismeasured, or otherwise correlated with the errors. A simulation study indicates that the new estimators perform well in finite samples. Our limiting distribution theory includes a new asymptotic trimming result addressing the boundary bias in first-stage density estimation without knowledge of the support boundary.
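
For orientation, these are the generic shapes of the two estimators named above, in notation of my choosing; the paper's substance lies in how the weights are constructed from a first-stage density estimate with asymptotic trimming, which is not reproduced here.

```latex
% Generic weighted least squares and 2SLS forms (illustrative notation).
\[
\hat{\beta}_{\mathrm{WLS}}
  = \Bigl(\sum_{i} \hat{w}_i \, x_i x_i' \Bigr)^{-1} \sum_{i} \hat{w}_i \, x_i y_i,
\qquad
\hat{\beta}_{\mathrm{2SLS}}
  = \bigl(X' P_Z X\bigr)^{-1} X' P_Z y,
\quad P_Z = Z (Z'Z)^{-1} Z'.
\]
```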

Relevance:

10.00%

Abstract:

Motivated by recent findings in the field of consumer science, this paper evaluates the causal effect of debit cards on household consumption using population-based data from the Italian Survey on Household Income and Wealth (SHIW). Within the Rubin Causal Model, we focus on the estimand of the population average treatment effect for the treated (PATT). We consider three existing estimators, based on regression, mixed matching and regression, and propensity score weighting, and propose a new doubly robust estimator. A semiparametric specification based on power series for the potential outcomes and the propensity score is adopted. Cross-validation is used to select the order of the power series. We conduct a simulation study to compare the performance of the estimators. The key assumptions, overlap and unconfoundedness, are systematically assessed and validated in the application. Our empirical results suggest statistically significant positive effects of debit cards on monthly household spending in Italy.
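
A minimal sketch of a doubly robust estimator for the treatment effect on the treated in this spirit, with plain logistic and linear regressions standing in for the paper's power-series specifications:

```python
# Hedged sketch of a doubly robust ATT/PATT-style estimator (not the paper's
# exact estimator): consistent if either the outcome regression mu0(x) or the
# propensity score e(x) is correctly specified. Requires overlap, i.e.
# e(x) bounded away from 1 among controls.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def dr_att(X, d, y):
    """X: covariates (n, p); d: binary treatment (n,); y: outcome (n,)."""
    e = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    mu0 = LinearRegression().fit(X[d == 0], y[d == 0]).predict(X)
    resid = y - mu0
    n1 = d.sum()
    # Treated: regression-adjusted outcomes; controls: odds-weighted residuals.
    return (resid[d == 1].sum()
            - ((e / (1 - e))[d == 0] * resid[d == 0]).sum()) / n1
```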

Relevance:

10.00%

Abstract:

Four pigs, three with focal infarctions in the apical interventricular septum (IVS) and/or left ventricular free wall (LVFW), were imaged with an intracardiac echocardiography (ICE) transducer. Custom beam sequences were used to excite the myocardium with focused acoustic radiation force (ARF) impulses and image the subsequent tissue response. Tissue displacement in response to the ARF excitation was calculated with a phase-based estimator, and transverse wave magnitude and velocity were each estimated at every depth. The excitation sequence was repeated rapidly, either in the same location to generate 40 Hz M-modes at a single steering angle, or with a modulated steering angle to synthesize 2-D displacement magnitude and shear wave velocity images at 17 points in the cardiac cycle. Both types of images were acquired from various views in the right and left ventricles, in and out of infarcted regions. In all animals, acoustic radiation force impulse (ARFI) and shear wave elasticity imaging (SWEI) estimates indicated diastolic relaxation and systolic contraction in noninfarcted tissues. The M-mode sequences showed high beat-to-beat spatio-temporal repeatability of the measurements for each imaging plane. In views of noninfarcted tissue in the diseased animals, no significant elastic remodeling was indicated when compared with the control. Where available, views of infarcted tissue were compared with similar views from the control animal. In views of the LVFW, the infarcted tissue presented as stiff and non-contractile compared with the control. In one view of the IVS, no significant difference was seen between infarcted and healthy tissue, whereas in another view a heterogeneous infarction presented as non-contractile in systole.
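
Generically, a "phase-based estimator" of this kind is a Kasai-style autocorrelation estimator; here is a minimal sketch (the actual ICE processing chain surely differs in details such as kernel size, demodulation frequency, and sign convention).

```python
# Hedged sketch of a Kasai-style phase-shift displacement estimator: the
# axial displacement between two echoes is proportional to the phase of
# their complex lag-one correlation, u = c * phi / (4 * pi * f_demod).
# All parameter values are illustrative.
import numpy as np

def phase_displacement(iq_ref, iq_trk, c=1540.0, f_demod=5e6, k=8):
    """iq_ref, iq_trk: complex baseband (IQ) A-lines, shape (n_depth,).
    Returns axial displacement in meters at each depth sample (sign
    convention depends on the demodulation setup)."""
    corr = iq_trk * np.conj(iq_ref)           # sample-wise correlation
    kernel = np.ones(k) / k                   # small depth kernel for averaging
    corr_avg = np.convolve(corr, kernel, mode="same")
    phase = np.angle(corr_avg)
    return phase * c / (4.0 * np.pi * f_demod)
```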

Relevance:

10.00%

Abstract:

Social attitudes, attitudes toward financial risk, and attitudes toward deferred gratification are thought to influence many important economic decisions over the life course. In economic theory, these attitudes are key components in diverse models of behavior, including collective action, saving and investment decisions, and occupational choice. The relevance of these attitudes has been confirmed empirically. Yet the factors that influence them are not well understood. This research evaluates how these attitudes are affected by large disruptive events, namely a natural disaster and a civil conflict, and also by an individual-specific life event, namely having children.

By implementing rigorous empirical strategies that draw on rich longitudinal datasets, this research project advances our understanding of how life experiences shape these attitudes. Moreover, it provides compelling evidence that the observed changes in attitudes are likely to reflect changes in preferences, given that they are not driven merely by changes in financial circumstances. The findings therefore also contribute to the discussion of whether preferences are really fixed, a standard assumption in economics.

In the first chapter, I study how altruistic and trusting attitudes are affected by exposure to the 2004 Indian Ocean tsunami, up to ten years after the disaster occurred. Establishing a causal relationship between natural disasters and attitudes presents several challenges, as endogenous exposure and sample selection can confound the analysis. I take on these challenges by exploiting plausibly exogenous variation in exposure to the tsunami and by relying on a longitudinal dataset representative of the pre-tsunami population in two districts of Aceh, Indonesia. The sample is drawn from the Study of the Tsunami Aftermath and Recovery (STAR), a survey with data collected both before and after the disaster and designed specifically to identify the impact of the tsunami. The altruistic and trusting attitudes of the respondents are measured by their behavior in the dictator and trust games. I find that witnessing the tsunami's damage at close range without suffering severe economic damage oneself increases altruistic and trusting behavior, particularly toward individuals from tsunami-affected communities. Having suffered severe economic damage has no impact on altruistic behavior but may have increased trusting behavior. These effects do not seem to be caused by the tsunami's consequences for people's financial situation. Instead, they are consistent with experiences of loss and solidarity shaping social attitudes by affecting empathy and perceptions of who is deserving of aid and trust.

In the second chapter, co-authored with Ryan Brown, Duncan Thomas, and Andrea Velasquez, we investigate how attitudes toward financial risk are affected by the elevated levels of insecurity and uncertainty brought on by the Mexican Drug War. To conduct our analysis, we pair the Mexican Family Life Survey (MxFLS), a rich longitudinal dataset ideally suited for our purposes, with data on homicide rates at the month and municipality level. The homicide rates capture well the overall crime environment created by the drug war. The MxFLS elicits risk attitudes by asking respondents to choose between hypothetical gambles with different payoffs. Our strategy to identify a causal effect has two key components. First, we implement an individual fixed effects strategy, which allows us to control for all time-invariant heterogeneity. The remaining time-variant heterogeneity is unlikely to be correlated with changes in the local crime environment given the well-documented political origins of the Mexican Drug War; we also show supporting evidence in this regard. The second component of our identification strategy is an intent-to-treat approach that shields our estimates from endogenous migration. Our findings indicate that exposure to greater local-area violent crime results in increased risk aversion. This effect is not driven by changes in financial circumstances, but may instead be explained by heightened fear of victimization. Nonetheless, we find that having greater economic resources mitigates the impact, perhaps because individuals with greater economic resources can avoid crime by affording better transportation or security at work.

The third chapter, co-authored with Duncan Thomas, evaluates whether attitudes toward deferred gratification change after having children. For this study we also exploit the MxFLS, which elicits attitudes toward deferred gratification (commonly known as time discounting) by asking individuals to choose between hypothetical payments at different points in time. We implement a difference-in-difference estimator to control for all time-invariant heterogeneity and show that our results are robust to the inclusion of time-varying characteristics likely correlated with childbirth. We find that becoming a mother increases time discounting, especially in the first two years after childbirth and particularly for women without a spouse at home. Having additional children has no effect, and the effect for men seems to go in the opposite direction. These heterogeneous effects suggest that child rearing may affect time discounting through the stress it generates or through spending needs that were not fully anticipated.
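
As a concrete (and much simplified) illustration of the difference-in-difference design described above, here is a two-period sketch on synthetic data; the variable names are mine, and the chapter's actual specification includes further controls.

```python
# Hedged two-period difference-in-difference sketch on synthetic data: the
# coefficient on the interaction term is the DiD estimate of how becoming a
# mother between waves shifts the time-discounting measure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "person_id": np.repeat(np.arange(n), 2),        # two waves per person
    "post": np.tile([0, 1], n),                     # second-wave indicator
    "new_mother": np.repeat(rng.integers(0, 2, n), 2),
})
# synthetic outcome with a true DiD effect of 0.3
df["discount"] = 0.3 * df.new_mother * df.post + rng.normal(0, 1, 2 * n)

did = smf.ols("discount ~ new_mother * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["person_id"]})
print(did.params["new_mother:post"])  # the DiD estimate, approx. 0.3
```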

Relevance:

10.00%

Abstract:

An extremal quantile index is a quantile index that drifts to zero (or one) as the sample size increases. The three chapters of my dissertation consist of three applications of this concept to three distinct econometric problems. In Chapter 2, I use the concept of the extremal quantile index to derive new asymptotic properties and an inference method for quantile treatment effect estimators when the quantile index of interest is close to zero. In Chapter 3, I rely on the concept to achieve identification at infinity in sample selection models and propose a new inference method. Last, in Chapter 4, I use the concept to define an asymptotic trimming scheme that can be used to control the convergence rate of the estimator of the intercept in binary response models.
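
In illustrative notation (mine, not the dissertation's), the defining feature of an extremal quantile index can be written as follows.

```latex
% A quantile index sequence \tau_n is extremal when it drifts to the boundary:
\[
\tau_n \to 0 \ (\text{or } 1) \quad \text{as } n \to \infty,
\]
% with the two standard regimes
\[
n\tau_n \to \infty \ \ \text{(intermediate order)}, \qquad
n\tau_n \to k \in (0,\infty) \ \ \text{(extreme order)},
\]
% under which conventional quantile-regression asymptotics must be replaced
% by extreme value approximations.
```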

Relevance:

10.00%

Abstract:

My dissertation has three chapters that develop and apply microeconometric techniques to empirically relevant problems. All three chapters examine robustness issues (e.g., measurement error and model misspecification) in econometric analysis. The first chapter studies the identifying power of an instrumental variable in the nonparametric heterogeneous treatment effect framework when a binary treatment variable is mismeasured and endogenous. I characterize the sharp identified set for the local average treatment effect under the following two assumptions: (1) the exclusion restriction of an instrument and (2) deterministic monotonicity of the true treatment variable in the instrument. The identification strategy allows for general measurement error. Notably, (i) the measurement error is nonclassical, (ii) it can be endogenous, and (iii) no assumptions are imposed on the marginal distribution of the measurement error, so that I do not need to assume the accuracy of the measurement. Based on the partial identification result, I provide a confidence interval for the local average treatment effect with uniformly valid size control. I also show that the identification strategy can incorporate repeated measurements to narrow the identified set, even if the repeated measurements themselves are endogenous. Using the National Longitudinal Study of the High School Class of 1972, I demonstrate that my new methodology can produce nontrivial bounds on the return to college attendance when attendance is mismeasured and endogenous.

The second chapter, part of a coauthored project with Federico Bugni, considers the problem of inference in dynamic discrete choice problems when the structural model is locally misspecified. We consider two popular classes of estimators for dynamic discrete choice models: K-step maximum likelihood estimators (K-ML) and K-step minimum distance estimators (K-MD), where K denotes the number of policy iterations employed in the estimation problem. These classes include popular estimators such as Rust's (1987) nested fixed point estimator, Hotz and Miller's (1993) conditional choice probability estimator, Aguirregabiria and Mira's (2002) nested algorithm estimator, and Pesendorfer and Schmidt-Dengler's (2008) least squares estimator. We derive and compare the asymptotic distributions of K-ML and K-MD estimators when the model is arbitrarily locally misspecified, and we obtain three main results. In the absence of misspecification, Aguirregabiria and Mira (2002) show that all K-ML estimators are asymptotically equivalent regardless of the choice of K. Our first result shows that this finding extends to a locally misspecified model, regardless of the degree of local misspecification. As a second result, we show that an analogous result holds for all K-MD estimators, i.e., all K-MD estimators are asymptotically equivalent regardless of the choice of K. Our third and final result compares K-MD and K-ML estimators in terms of asymptotic mean squared error. Under local misspecification, the optimally weighted K-MD estimator depends on the unknown asymptotic bias and is no longer feasible. In turn, feasible K-MD estimators could have an asymptotic mean squared error that is higher or lower than that of the K-ML estimators. To demonstrate the relevance of our asymptotic analysis, we illustrate our findings in a simulation exercise based on a misspecified version of Rust's (1987) bus engine problem.

The last chapter investigates the causal effect of the Omnibus Budget Reconciliation Act of 1993, which caused the biggest change to the EITC in its history, on unemployment and labor force participation among single mothers. Unemployment and labor force participation are difficult to define for several reasons, for example because of marginally attached workers. Instead of searching for a unique definition for each of these two concepts, this chapter bounds unemployment and labor force participation by observable variables and, as a result, considers various competing definitions of these two concepts simultaneously. This bounding strategy leads to partial identification of the treatment effect. The inference results depend on the construction of the bounds, but they imply a positive effect on labor force participation and a negligible effect on unemployment. The results imply that the difference-in-difference result based on the BLS definition of unemployment can be misleading due to misclassification of unemployment.

Relevance:

10.00%

Abstract:

Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach to down-scaling the problem is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge lies in defining an algorithm with low communication cost, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm for solving these issues. The algorithm applies feature selection in parallel for each subset using regularized regression or a Bayesian variable selection method, calculates the "median" feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to the usual competitors.
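
A hedged sketch of the message idea, using the lasso as the per-subset selector (the actual algorithm also covers Bayesian variable selection and other refinements):

```python
# Sketch of the message pipeline described above: (1) select features in
# parallel on each row-subset, (2) keep features whose inclusion index has
# median one (i.e. a majority vote), (3) re-fit on each subset with the
# selected features and average the coefficients. Assumes at least one
# feature survives the vote.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def message(X, y, n_subsets=4):
    n, p = X.shape
    idx = np.array_split(np.random.permutation(n), n_subsets)

    # 1) feature selection in parallel on each subset
    selected = np.zeros((n_subsets, p), dtype=bool)
    for s, rows in enumerate(idx):
        coef = LassoCV(cv=5).fit(X[rows], y[rows]).coef_
        selected[s] = coef != 0

    # 2) median feature inclusion index = majority vote across subsets
    keep = selected.mean(axis=0) >= 0.5

    # 3) re-estimate on each subset with the selected features, then average
    betas = [LinearRegression().fit(X[rows][:, keep], y[rows]).coef_
             for rows in idx]
    return keep, np.mean(betas, axis=0)
```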

While sample space partitioning is useful for handling datasets with a large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
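
As I read the description above, the decorrelation step premultiplies the data by an inverse square root of the (regularized) row-space Gram matrix, so that columns become nearly orthogonal across any partition. A heavily hedged sketch, with illustrative scaling and ridge term rather than the paper's exact recipe:

```python
# Hedged sketch of a DECO-style decorrelation step (illustrative, not the
# paper's exact construction): transform (X, y) by G^{-1/2}, where G is a
# regularized n x n Gram matrix, before partitioning the columns of X
# across workers.
import numpy as np

def decorrelate(X, y, r=1.0):
    n, p = X.shape
    G = X @ X.T / p + r * np.eye(n)              # regularized Gram matrix
    vals, vecs = np.linalg.eigh(G)
    F = vecs @ np.diag(vals ** -0.5) @ vecs.T    # symmetric G^{-1/2}
    return F @ X, F @ y   # each worker then fits its own block of columns
```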

For datasets with both a large sample size and high dimensionality, I propose a new "divide-and-conquer" framework, DEME (DECO-message), that leverages both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted in a computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.

Relevance:

10.00%

Abstract:

Free energy calculations are a computational method for determining thermodynamic quantities, such as free energies of binding, via simulation. Currently, due to computational and algorithmic limitations, free energy calculations are limited in scope. In this work, we propose two methods for improving the efficiency of free energy calculations. First, we expand the state space of alchemical intermediates, and show that this expansion enables us to calculate free energies along lower variance paths. We use Q-learning, a reinforcement learning technique, to discover and optimize paths at low computational cost. Second, we reduce the cost of sampling along a given path by using sequential Monte Carlo samplers. We develop a new free energy estimator, pCrooks (pairwise Crooks), a variant on the Crooks fluctuation theorem (CFT), which enables decomposition of the variance of the free energy estimate for discrete paths, while retaining beneficial characteristics of CFT. Combining these two advancements, we show that for some test models, optimal expanded-space paths have a nearly 80% reduction in variance relative to the standard path. Additionally, our free energy estimator converges at a more consistent rate and on average 1.8 times faster when we enable path searching, even when the cost of path discovery and refinement is considered.
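
To make the objects concrete (work samples in, free energy difference out), here is the simplest estimator in the family that CFT-based estimators like pCrooks refine: a Jarzynski-style exponential-work average. It is emphatically not the pCrooks estimator, just a baseline sketch.

```python
# Hedged sketch: Jarzynski exponential-work estimator,
#   DeltaF = -(1/beta) * ln < exp(-beta * W) >,
# over forward nonequilibrium work samples, computed stably in log space.
# The Crooks fluctuation theorem refines this by relating the forward and
# reverse work distributions, P_F(W) / P_R(-W) = exp(beta * (W - DeltaF)).
import numpy as np
from scipy.special import logsumexp

def jarzynski_delta_f(work, beta=1.0):
    """work: array of forward work values (energy units matching 1/beta)."""
    w = np.asarray(work, dtype=float)
    return -(logsumexp(-beta * w) - np.log(w.size)) / beta
```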