854 results for nonparametric bounds
Abstract:
This paper applies the theoretical literature on nonparametric bounds on treatment effects to the estimation of how limited English proficiency (LEP) affects wages and employment opportunities for Hispanic workers in the United States. I analyze the identifying power of several weak assumptions on treatment response and selection, and stress the interactions between LEP and education, occupation and immigration status. I show that the combination of two weak but credible assumptions provides informative upper bounds on the returns to language skills for certain subgroups of the population. Adding age at arrival as a monotone instrumental variable also provides informative lower bounds.
Abstract:
Consider a nonparametric regression model Y=mu*(X) + e, where the explanatory variables X are endogenous and e satisfies the conditional moment restriction E[e|W]=0 w.p.1 for instrumental variables W. It is well known that in these models the structural parameter mu* is 'ill-posed' in the sense that the function mapping the data to mu* is not continuous. In this paper, we derive the efficiency bounds for estimating linear functionals E[p(X)mu*(X)] and int_{supp(X)}p(x)mu*(x)dx, where p is a known weight function and supp(X) the support of X, without assuming mu* to be well-posed or even identified.
Abstract:
This paper discusses how to identify individual-specific causal effects of an ordered discrete endogenous variable. The counterfactual heterogeneous causal information is recovered by identifying the partial differences of a structural relation. The proposed refutable nonparametric local restrictions exploit the fact that the pattern of endogeneity may vary across the level of the unobserved variable. The restrictions adopted in this paper impose a sense of order on an unordered binary endogenous variable. This allows for a unified structural approach to studying various treatment effects when self-selection on unobservables is present. The usefulness of the identification results is illustrated using data on Vietnam-era veterans. The empirical findings reveal that, when other observable characteristics are identical, military service had positive impacts for individuals with low (unobservable) earnings potential, while it had negative impacts for those with high earnings potential. This heterogeneity would not be detected by average effects, which would underestimate the actual effects because effects of opposite signs cancel out. This partial identification result can be used to test homogeneity in response. When homogeneity is rejected, many parameters based on averages may deliver misleading information.
Abstract:
Small sample properties are of fundamental interest when only limited data is available. Exact inference is limited by constraints imposed by specific nonrandomized tests and of course also by the lack of more data. These effects can be separated, as we propose to evaluate a test by comparing its type II error to the minimal type II error among all tests for the given sample. Game theory is used to establish this minimal type II error; the associated randomized test is characterized as part of a Nash equilibrium of a fictitious game against nature. We use this method to investigate sequential tests for the difference between two means when outcomes are constrained to belong to a given bounded set. Tests of inequality and of noninferiority are included. We find that inference in terms of type II error based on a balanced sample cannot be improved by sequential sampling, or even by observing counterfactual evidence, provided there is a reasonable gap between the hypotheses.
Abstract:
We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted average algorithm.
Abstract:
We consider the problem of testing whether the observations X1, ..., Xn of a time series are independent with unspecified (possibly nonidentical) distributions symmetric about a common known median. Various bounds on the distributions of serial correlation coefficients are proposed: exponential bounds, Eaton-type bounds, Chebyshev bounds and Berry-Esséen-Zolotarev bounds. The bounds are exact in finite samples, distribution-free and easy to compute. The performance of the bounds is evaluated and compared with traditional serial dependence tests in a simulation experiment. The procedures proposed are applied to U.S. data on interest rates (commercial paper rate).
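As a point of reference for the statistic these bounds concern, a lag-k serial correlation coefficient for a series centered at a known median can be sketched as follows (a textbook definition under the assumption of a zero median, not necessarily the paper's exact statistic):

```python
import numpy as np

def serial_correlation(x, lag=1):
    """Lag-`lag` serial correlation coefficient for a series whose
    known common median is 0 (one standard definition)."""
    x = np.asarray(x, dtype=float)
    return np.sum(x[:-lag] * x[lag:]) / np.sum(x * x)

# Under independence the coefficient concentrates near zero,
# which is what the distribution-free bounds make precise.
rng = np.random.default_rng(0)
print(serial_correlation(rng.normal(size=1000)))
```

For an independent sample of size n the coefficient has standard deviation of order 1/sqrt(n), so values far from zero signal serial dependence.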
Abstract:
Bounds on the exchange-correlation energy of many-electron systems are derived and tested. By using universal scaling properties of the electron-electron interaction, we obtain the exponent of the bounds in three, two, one, and quasi-one dimensions. From the properties of the electron gas in the dilute regime, the tightest estimate to date is given for the numerical prefactor of the bound, which is crucial in practical applications. Numerical tests on various low-dimensional systems are in line with the bounds obtained and give evidence of an interesting dimensional crossover between two and one dimensions.
Abstract:
We show that commutative group spherical codes in R^n, as introduced by D. Slepian, are directly related to flat tori and quotients of lattices. As a consequence of this view, we derive new results on the geometry of these codes and an upper bound on their cardinality in terms of the minimum distance and the maximum center density of lattices and general spherical packings in half the dimension of the code. This bound is tight in the sense that it can be arbitrarily approached in any dimension. Examples of this approach and a comparison of this bound with the Union and Rankin bounds for general spherical codes are also presented.
Abstract:
This paper addresses the single machine scheduling problem with a common due date, aiming to minimize earliness and tardiness penalties. Due to its complexity, most of the previous studies in the literature deal with this problem using heuristic and metaheuristic approaches. With the intention of contributing to the study of this problem, a branch-and-bound algorithm is proposed. Lower bounds and pruning rules that exploit properties of the problem are introduced. The proposed approach is examined through a computational comparative study with 280 problems involving different due date scenarios. In addition, the values of optimal solutions for small problems from a known benchmark are provided.
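For concreteness, the objective this problem minimizes can be written as a tiny helper (the names `p`, `d`, `alpha`, `beta` are hypothetical illustrations; the paper's lower bounds and pruning rules are not reproduced here):

```python
def et_cost(sequence, p, d, alpha, beta):
    """Total weighted earliness/tardiness of a job sequence on one
    machine with a common due date d. p[j] is the processing time of
    job j; alpha[j], beta[j] its earliness/tardiness penalty weights."""
    cost, t = 0, 0
    for j in sequence:
        t += p[j]  # completion time of job j
        cost += alpha[j] * max(d - t, 0) + beta[j] * max(t - d, 0)
    return cost

# Jobs with processing times 3, 2, 4 and unit penalties, due date 5:
# completions are 3, 5, 9, giving earliness 2 + tardiness 4 = 6.
print(et_cost([0, 1, 2], [3, 2, 4], 5, [1, 1, 1], [1, 1, 1]))  # → 6
```

A branch-and-bound search would enumerate partial sequences and prune any branch whose lower bound on this cost exceeds the best complete schedule found so far.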
Abstract:
This article considers alternative methods to calculate the fair premium rate of crop insurance contracts based on county yields. The premium rate was calculated using parametric and nonparametric approaches to estimate the conditional agricultural yield density. These methods were applied to a county yield data set provided by the Brazilian Institute of Geography and Statistics (IBGE) for the period 1990 through 2002, for soybean, corn and wheat in the State of Paraná. In this article, we propose methodological alternatives for pricing crop insurance contracts, resulting in more accurate premium rates in a situation of limited data.
Abstract:
We present a novel nonparametric density estimator and a new data-driven bandwidth selection method with excellent properties. The approach is inspired by the principles of the generalized cross entropy method. The proposed density estimation procedure has numerous advantages over the traditional kernel density estimator methods. Firstly, for the first time in the nonparametric literature, the proposed estimator allows for a genuine incorporation of prior information in the density estimation procedure. Secondly, the approach provides the first data-driven bandwidth selection method that is guaranteed to provide a unique bandwidth for any data. Lastly, simulation examples suggest the proposed approach outperforms the current state of the art in nonparametric density estimation in terms of accuracy and reliability.
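For contrast with the proposed method, the traditional baseline the abstract refers to, a Gaussian kernel estimator with Silverman's rule-of-thumb bandwidth, can be sketched as (a minimal illustration, not the paper's estimator):

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb: a classical, but not unique or
    data-adaptive, bandwidth choice."""
    x = np.asarray(x, dtype=float)
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

def gaussian_kde(x, grid, h):
    """Traditional Gaussian kernel density estimate evaluated on a grid."""
    x = np.asarray(x, dtype=float)
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
grid = np.linspace(-4, 4, 201)
f = gaussian_kde(x, grid, silverman_bandwidth(x))
# A valid density estimate integrates to (approximately) 1 over the grid.
print(np.trapz(f, grid))
```

Cross-validated bandwidth selectors can fail to have a unique minimizer, which is the shortcoming the abstract's guaranteed-unique bandwidth addresses.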
Abstract:
The integral of the Wigner function over a subregion of the phase space of a quantum system may be less than zero or greater than one. It is shown that for systems with 1 degree of freedom, the problem of determining the best possible upper and lower bounds on such an integral, over all possible states, reduces to the problem of finding the greatest and least eigenvalues of a Hermitian operator corresponding to the subregion. The problem is solved exactly in the case of an arbitrary elliptical region. These bounds provide checks on experimentally measured quasiprobability distributions.
Abstract:
Extended gcd computation is interesting in itself. It also plays a fundamental role in other calculations. We present a new algorithm for solving the extended gcd problem. This algorithm has a particularly simple description and is practical. It also provides refined bounds on the size of the multipliers obtained.
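The abstract's algorithm is not given here, but the classical extended Euclidean algorithm it refines can be sketched as:

```python
def extended_gcd(a, b):
    """Classical extended Euclidean algorithm: returns (g, x, y) with
    a*x + b*y == g == gcd(a, b). The paper's contribution concerns
    tighter bounds on the size of the multipliers x and y; this is
    only the textbook iterative version."""
    x0, x1, y0, y1 = 1, 0, 0, 1
    while b != 0:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

g, x, y = extended_gcd(240, 46)
print(g, x, y)  # → 2 -9 47, since 240*(-9) + 46*47 == 2 == gcd(240, 46)
```

The size of the returned multipliers is exactly what "refined bounds on the size of the multipliers" refers to: different variants of the algorithm can return valid (x, y) pairs of very different magnitudes.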
Abstract:
Widely used "purchasing power parity" comparisons of per capita GDP are not true quantity indexes and are subject to systematic substitution bias. This bias may distort measurement of convergence and divergence. Extending Varian's nonparametric construction of a true index gives the set of true indexes, including the new Ideal Afriat Index. These indexes are utility-consistent and independent of arbitrary reference price vectors. We establish bounds on the dispersion of true multilateral indexes, hence bounds on convergence. International price indexes understate both true GDP dispersion and, where prices are converging over time, the rate of true quantity convergence.