7 results for "inverse probability weights"

in the Repositório Digital da Fundação Getúlio Vargas (FGV)


Relevance: 80.00%

Abstract:

This paper presents semiparametric estimators of changes in inequality measures of a dependent variable's distribution, taking into account possible changes in the distributions of covariates. When we do not impose parametric assumptions on the conditional distribution of the dependent variable given covariates, this problem becomes equivalent to estimating the distributional impacts of interventions (treatments) when selection into the program is based on observable characteristics. The distributional impacts of a treatment are calculated as differences in inequality measures between the potential outcomes of receiving and not receiving the treatment. These differences are called here Inequality Treatment Effects (ITE). The estimation procedure involves a first nonparametric step in which the probability of receiving treatment given covariates, the propensity score, is estimated. Using the inverse probability weighting method to estimate parameters of the marginal distributions of the potential outcomes, weighted sample versions of the inequality measures are computed in a second step. Root-N consistency, asymptotic normality, and semiparametric efficiency are shown for the proposed semiparametric estimators. A Monte Carlo exercise investigates the finite-sample behavior of the estimators derived in the paper. We also apply our method to the evaluation of a job training program.
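The two-step procedure the abstract describes can be sketched in Python. This is a minimal illustration under hypothetical assumptions: the data-generating process is invented, the propensity score is taken as known rather than estimated nonparametrically, and the Gini coefficient stands in for the paper's general class of inequality measures.

```python
import numpy as np

def ipw_weights(treated, pscore):
    # Normalized inverse-probability weights for the treated and untreated arms
    w1 = treated / pscore
    w0 = (1 - treated) / (1 - pscore)
    return w1 / w1.sum(), w0 / w0.sum()

def weighted_gini(y, w):
    # Gini coefficient of y under normalized weights w
    # (trapezoid rule on the weighted Lorenz curve)
    order = np.argsort(y)
    y, w = y[order], w[order]
    lorenz = np.cumsum(w * y) / np.sum(w * y)
    return 1.0 - float(np.sum(w * (lorenz + np.concatenate(([0.0], lorenz[:-1])))))

# Hypothetical data: treatment probability and outcome both depend on a covariate x
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
pscore = 1.0 / (1.0 + np.exp(-x))        # in the paper, estimated nonparametrically
d = rng.binomial(1, pscore)
y = np.exp(1.0 + 0.3 * d + x + rng.normal(scale=0.5, size=n))

# Inequality treatment effect: difference of weighted Ginis of potential outcomes
w1, w0 = ipw_weights(d, pscore)
ite_gini = weighted_gini(y, w1) - weighted_gini(y, w0)
```

Reweighting by the inverse propensity score recovers the marginal distribution of each potential outcome, so the weighted Gini of each arm estimates the inequality of that counterfactual distribution.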

Relevance: 80.00%

Abstract:

We use new data on cyclically adjusted primary balances for Latin America and the Caribbean to estimate the effects of fiscal consolidations on GDP and some of its components. Identification is conducted through a doubly robust estimation procedure that controls for non-randomness in the "treatment assignment" by inverse probability weighting, and impulse responses are generated by local projections. Results suggest that output contracts by more than one percent on impact, with the economy starting to recover from the second year on. Composition effects indicate that revenue-based adjustments are far more contractionary than expenditure-based ones. Disentangling effects across demand components, we find that consumption is in general less responsive to consolidations than investment, although nonlinearities associated with initial levels of debt and taxation might play an important role.
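The local-projection step can be illustrated with a stripped-down sketch: for each horizon h, the outcome at t+h is regressed on the treatment dummy at t, and the slope traces out the impulse response. All names and data here are hypothetical, and the sketch omits the controls and inverse-probability-weighting correction that the paper's doubly robust procedure adds.

```python
import numpy as np

def local_projection_irf(y, d, max_h):
    # For each horizon h, OLS of y[t+h] on a constant and the treatment dummy d[t];
    # the slope coefficient at each h is the impulse response at that horizon.
    irf = []
    for h in range(max_h + 1):
        yy = y[h:]
        dd = d[:len(d) - h] if h > 0 else d
        X = np.column_stack([np.ones_like(dd), dd])
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        irf.append(float(beta[1]))
    return irf

# Hypothetical series: a "consolidation" at t lowers output by 1 at t only
rng = np.random.default_rng(1)
T = 2000
d = rng.binomial(1, 0.2, size=T).astype(float)
y = -1.0 * d + rng.normal(scale=0.1, size=T)
irf = local_projection_irf(y, d, max_h=2)   # irf[0] near -1, later horizons near 0
```

Running one regression per horizon, rather than iterating a single dynamic model forward, is what makes local projections robust to misspecification of the dynamics.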

Relevance: 20.00%

Abstract:

This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.

Relevance: 20.00%

Abstract:

This paper presents new indices for measuring industry concentration. The proposed indices (Cn) are of a normative type because they embody (endogenous) weights matching the market shares of the individual firms to their Marshallian welfare shares. These indices belong to an enlarged class of the Performance Gradient Indexes introduced by Dansby and Willig (1979). The definition of Cn for consumers allows a new interpretation of the Hirschman-Herfindahl index (H), which can be viewed as a normative index for particular values of the demand parameters. For homogeneous-product industries, Cn equals H for every market distribution if (and only if) the market demand is linear. Whenever the inverse demand curve is convex (concave), H underestimates (overestimates) the industry concentration measured by the normative index. For these industries, H overestimates (underestimates) the concentration changes caused by market transfers among small firms if the inverse demand curve is convex (concave), and underestimates (overestimates) them when such transfers benefit a large firm, according to the convexity (or concavity) of the demand curve. For heterogeneous-product industries, an explicit normative index is obtained with a market demand derived from a quasi-linear utility function. Under symmetric preferences among the goods, the index Cn is always greater than or equal to the H-index. Under asymmetric assumptions, discrepancies between the firms' market distribution and the differentiation/substitution distributions among the goods increase the concentration but make room for some horizontal mergers to reduce it. In particular, a mean-preserving spread of the differentiation (substitution) increases (decreases) the concentration only if the smaller firms' goods become more (less) differentiated (substitutable) with respect to the other goods.
One important consequence of these results is that consumers benefit when the smaller firms produce weakly substitutable goods and the larger firms produce strongly substitutable goods or face demand curves weakly sensitive to their own prices.
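Since the abstract benchmarks the normative indices against the Hirschman-Herfindahl index, a minimal sketch of H may help fix ideas; the normative Cn itself depends on demand parameters and is not reproduced here.

```python
import numpy as np

def hhi(shares):
    # Hirschman-Herfindahl index: the sum of squared market shares,
    # after normalizing the shares to sum to one
    s = np.asarray(shares, dtype=float)
    s = s / s.sum()
    return float(np.sum(s ** 2))

equal_five = hhi([0.2] * 5)              # symmetric industry: H = 1/n = 0.2
post_merger = hhi([0.4, 0.2, 0.2, 0.2])  # merging two firms raises H to 0.28
```

H ranges from 1/n (n equal firms) to 1 (monopoly); the paper's point is that this purely share-based measure can over- or understate welfare-relevant concentration depending on the curvature of demand.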

Relevance: 20.00%

Abstract:

The well-known inverse relationship between farm size and productivity is usually explained in terms of diminishing returns with respect to land and other inputs, coupled with various types of market frictions that prevent the efficient allocation of land across farms. We show that even in the absence of diminishing returns, one can provide an alternative explanation for this phenomenon using endogenous occupational choice and heterogeneity with respect to farming skills.

Relevance: 20.00%

Abstract:

In this paper I investigate the conditions under which a convex capacity (a non-additive probability that exhibits uncertainty aversion) can be represented as a squeeze of an (additive) probability measure associated with an uncertainty aversion function. I then present two alternative formulations of the Choquet integral (and extend these formulations to Choquet expected utility) in a parametric approach that enables easy comparative-statics exercises over the uncertainty aversion function.
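A concrete special case of the "squeeze" idea is a capacity of the form ν(A) = g(P(A)) for a convex distortion g, under which the Choquet integral is straightforward to compute on a finite outcome space. This sketch is an illustration of that parametric family, not the paper's general representation; the distortion g(t) = t² is an arbitrary choice.

```python
import numpy as np

def choquet_integral(values, prob, distortion):
    # Choquet integral of a finite-outcome act w.r.t. the capacity nu(A) = g(P(A)),
    # where g is a distortion with g(0) = 0 and g(1) = 1
    # (convex g squeezes the probability and models uncertainty aversion)
    order = np.argsort(values)[::-1]              # rank outcomes from best to worst
    v = np.asarray(values, dtype=float)[order]
    p = np.asarray(prob, dtype=float)[order]
    cum = np.concatenate(([0.0], np.cumsum(p)))   # P of the nested upper sets
    nu = distortion(cum)                          # capacity of those upper sets
    return float(np.sum(v * np.diff(nu)))

convex_g = lambda t: t ** 2                       # an arbitrary convex squeeze
values = np.array([10.0, 0.0])
prob = np.array([0.5, 0.5])
cv = choquet_integral(values, prob, convex_g)     # 10 * 0.25 = 2.5
ev = choquet_integral(values, prob, lambda t: t)  # identity recovers E[v] = 5
```

With a convex distortion the Choquet value (2.5) falls below the expected value (5), which is the sense in which the convex capacity expresses aversion to uncertainty.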

Relevance: 20.00%

Abstract:

The synthetic control (SC) method has recently been proposed as an alternative method to estimate treatment effects in comparative case studies. Abadie et al. [2010] and Abadie et al. [2015] argue that one of the advantages of the SC method is that it imposes a data-driven process to select the comparison units, providing more transparency and less discretionary power to the researcher. However, an important limitation of the SC method is that it does not provide clear guidance on the choice of predictor variables used to estimate the SC weights. We show that such lack of specific guidance provides significant opportunities for the researcher to search for specifications with statistically significant results, undermining one of the main advantages of the method. Considering six alternative specifications commonly used in SC applications, we calculate in Monte Carlo simulations the probability of finding a statistically significant result at 5% in at least one specification. We find that this probability can be as high as 13% (23% for a 10% significance test) when there are 12 pre-intervention periods, and that it decays slowly with the number of pre-intervention periods. With 230 pre-intervention periods, this probability is still around 10% (18% for a 10% significance test). We show that the specification that uses the average pre-treatment outcome values to estimate the weights performed particularly badly in our simulations. However, the specification-searching problem remains relevant even when we do not consider this specification. We also show that this specification-searching problem is relevant in simulations with real datasets looking at placebo interventions in the Current Population Survey (CPS). In order to mitigate this problem, we propose a criterion to select among different SC specifications based on the prediction error of each specification in placebo estimations.
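The specification-searching mechanism can be illustrated with a toy Monte Carlo that is entirely hypothetical and much simpler than the paper's SC simulations: K specifications yield correlated test statistics (a common factor with correlation rho), and the researcher reports a finding if any one of them is significant at 5%, so the effective false-positive rate exceeds the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(1)
K, reps, rho = 6, 20000, 0.8     # 6 specifications, strongly correlated statistics

# Each row is one placebo study: a common factor plus idiosyncratic noise
# gives K standard-normal test statistics with pairwise correlation rho.
common = rng.normal(size=(reps, 1))
z = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * rng.normal(size=(reps, K))

# Probability that at least one of the K two-sided 5% tests rejects
reject_any = float((np.abs(z) > 1.96).any(axis=1).mean())

# Benchmarks: exactly 5% if the specifications were perfectly correlated,
# and 1 - 0.95**K (about 26.5%) if they were independent.
```

Because the SC specifications reuse the same data, their test statistics are correlated, which is why the paper's measured probabilities (around 10-13%) sit between the perfectly correlated and independent benchmarks.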