Abstract:
Selected configuration interaction (SCI) for atomic and molecular electronic structure calculations is reformulated in a general framework encompassing all CI methods. The linked cluster expansion is used as an intermediate device to approximate the CI coefficients $B_K$ of disconnected configurations (those that can be expressed as products of combinations of singly and doubly excited ones) in terms of CI coefficients of lower-excited configurations, where each $K$ is a linear combination of configuration state functions (CSFs) over all degenerate elements of $K$. Disconnected configurations up to sextuply excited ones are selected by Brown's energy formula, $\Delta E_K = (E - H_{KK})B_K^2/(1 - B_K^2)$, with $B_K$ determined from the coefficients of singly and doubly excited configurations. The truncation energy error from disconnected configurations, $\Delta E_{dis}$, is approximated by the sum of the $\Delta E_K$'s of all discarded $K$'s. The remaining (connected) configurations are selected by thresholds based on natural orbital concepts. Given a model CI space $M$, a usual upper bound $E_S$ is computed by CI in a selected space $S$, and $E_M = E_S + \Delta E_{dis} + \delta E$, where $\delta E$ is a residual error which can be calculated by well-defined sensitivity analyses. An SCI calculation on the Ne ground state featuring 1077 orbitals is presented. Convergence to within near-spectroscopic accuracy (0.5 cm$^{-1}$) is achieved in a model space $M$ of $1.4 \times 10^9$ CSFs ($1.1 \times 10^{12}$ determinants) containing up to quadruply excited CSFs. Accurate energy contributions of quintuples and sextuples in a model space of $6.5 \times 10^{12}$ CSFs are obtained. The impact of SCI on various orbital methods is discussed. Since $\Delta E_{dis}$ can readily be calculated for very large basis sets without the need of a CI calculation, it can be used to estimate the orbital basis incompleteness error. A method for precise and efficient evaluation of $E_S$ is taken up in a companion paper.
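The selection step described by Brown's formula lends itself to a compact implementation. Below is a minimal sketch (not the authors' code) of selecting disconnected configurations; `e_current`, `h_diag`, and `b_coef` are hypothetical arrays holding the current energy $E$, the diagonal elements $H_{KK}$, and the estimated coefficients $B_K$.

```python
# A minimal sketch of configuration selection via Brown's energy formula;
# all array names and the threshold value are illustrative assumptions.
import numpy as np

def select_disconnected(e_current, h_diag, b_coef, threshold=1.0e-8):
    """Estimate Delta E_K = (E - H_KK) * B_K**2 / (1 - B_K**2) for each
    disconnected configuration, keep those above `threshold`, and return
    the kept indices together with Delta E_dis, the summed contribution
    of everything discarded."""
    b2 = np.asarray(b_coef) ** 2
    delta_e = (e_current - np.asarray(h_diag)) * b2 / (1.0 - b2)
    keep = np.abs(delta_e) >= threshold
    return np.nonzero(keep)[0], float(delta_e[~keep].sum())
```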
Abstract:
Revenue management practices often include overbooking capacity to account for customers who make reservations but do not show up. In this paper, we consider the network revenue management problem with no-shows and overbooking, where the show-up probabilities are specific to each product. No-show rates differ significantly by product (for instance, each itinerary and fare combination for an airline) as sale restrictions and demand characteristics vary by product. However, models that consider no-show rates for each individual product are difficult to handle, as the state space in dynamic programming formulations (or the variable space in approximations) increases significantly. We propose a randomized linear program to jointly make the capacity control and overbooking decisions with product-specific no-shows. We establish that our formulation gives an upper bound on the optimal expected total profit and that our upper bound is tighter than a deterministic linear programming upper bound that appears in the existing literature. Furthermore, we show that our upper bound is asymptotically tight in a regime where the leg capacities and the expected demand are scaled linearly at the same rate. We also describe how the randomized linear program can be used to obtain a bid price control policy. Computational experiments indicate that our approach is quite fast, is able to scale to industrial problems, and can provide significant improvements over standard benchmarks.
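As a rough illustration of the randomized linear programming idea (the paper's exact formulation is not reproduced here), the sketch below samples demand, solves a sampled LP that trades fares against denied-boarding penalties with product-specific show-up probabilities, and averages the optimal values and leg-capacity duals. All parameter values and the network are placeholders.

```python
# A hedged sketch of a randomized LP for capacity control and overbooking
# with product-specific show-up probabilities; structure and data are
# illustrative, not the paper's formulation.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

A = np.array([[1, 0, 1],           # leg-product incidence matrix
              [0, 1, 1]])
cap = np.array([100.0, 120.0])      # physical leg capacities
fare = np.array([100.0, 150.0, 220.0])
show_p = np.array([0.9, 0.8, 0.95])   # product-specific show-up probabilities
penalty = np.array([300.0, 300.0])    # per-unit denied-boarding cost per leg
mean_demand = np.array([80.0, 60.0, 50.0])

n_legs, n_prod = A.shape
vals, duals = [], []
for _ in range(100):
    d = rng.poisson(mean_demand)      # one demand sample
    # Variables x = (y_1..y_J, v_1..v_I): accepted reservations per product
    # and expected overflow above capacity per leg (penalized).
    c = np.concatenate([-fare, penalty])             # linprog minimizes
    A_ub = np.hstack([A * show_p, -np.eye(n_legs)])  # expected show-ups - v <= cap
    bounds = [(0, di) for di in d] + [(0, None)] * n_legs
    res = linprog(c, A_ub=A_ub, b_ub=cap, bounds=bounds, method="highs")
    vals.append(-res.fun)
    # Negated marginals of the <= constraints act as leg shadow prices
    # (sign convention for a minimization problem).
    duals.append(-res.ineqlin.marginals)

print("randomized LP upper-bound estimate:", np.mean(vals))
print("bid prices (average leg duals):", np.mean(duals, axis=0))
```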
Abstract:
Models incorporating more realistic customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, when the segment consideration sets overlap, the CDLP is difficult to solve: column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach, called SDCP, to solving the CDLP based on segments and their consideration sets. SDCP is a relaxation of CDLP and hence forms a looser upper bound on the dynamic program, but it coincides with CDLP for the case of non-overlapping segments. If the number of elements in a consideration set for a segment is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound (i) by simulation, in a method we call randomized concave programming (RCP), and (ii) by adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). The latter approach turns out to be very effective, essentially attaining the CDLP value and excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why CDLP is easy for the MNL with non-overlapping consideration sets and why generalizations of MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets in the literature.
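For concreteness, here is a minimal sketch of the segment-level multinomial logit (MNL) choice probabilities that formulations such as CDLP and SDCP build on: each segment considers only products in its consideration set, and buys product $j$ from offer set $S$ with probability $v_j / (v_0 + \sum_{k \in S} v_k)$. The utilities and consideration sets below are illustrative assumptions.

```python
# A minimal sketch of segment-level MNL choice, with consideration sets;
# all weights and sets are illustrative.
import numpy as np

def mnl_choice_probs(offer_set, consideration_set, v, v0=1.0):
    """Probability each offered product is purchased by a segment with
    the given consideration set, MNL weights `v`, and no-purchase
    weight `v0`."""
    effective = [j for j in offer_set if j in consideration_set]
    denom = v0 + sum(v[j] for j in effective)
    return {j: v[j] / denom for j in effective}

# Two overlapping segments, sharing product 1 in their consideration sets:
v = np.array([2.0, 1.5, 1.0])
print(mnl_choice_probs({0, 1}, {0, 1}, v))   # segment A considers {0, 1}
print(mnl_choice_probs({0, 1}, {1, 2}, v))   # segment B considers {1, 2}
```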
Abstract:
We study how restrictions on firm entry affect intersectoral factor reallocation when open economies experience global economic shocks. In our theoretical framework, countries trade freely in a range of differentiated sectors that are subject to country-specific and global shocks. Entry restrictions are modeled as an upper bound on the introduction of new differentiated goods following shocks. Prices and quantities adjust to clear international goods markets, and wages adjust to clear national labor markets. We show that in general equilibrium, countries with tighter entry restrictions see less factor reallocation compared to the frictionless benchmark. In our empirical work, we compare sectoral employment reallocation across countries in the 1980s and 1990s with proxies for frictionless-benchmark reallocation. Our results indicate that the gap between actual and frictionless reallocation is greater in countries where it takes longer to start a firm.
Abstract:
Does the labor market place wage premia on jobs that involve physical strain, job insecurity, or bad regulation of hours? This paper derives bounds on the monetary returns to these job disamenities in the West German labor market. We show that in a market with dispersion in both job characteristics and wages, the average wage change of workers who switch jobs voluntarily and opt for consuming more (less) disamenities provides an upper (lower) bound on the market return to the disamenity. Using longitudinal information on workers in the German Socio-Economic Panel, we estimate an upper bound of 5% and a lower bound of 3.5% for the market return to work strain in a job.
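A hedged sketch of the bound construction on hypothetical panel data follows; the column names and the sign convention (both bounds reported as positive returns) are assumptions of this illustration, not the paper's code.

```python
# A hedged sketch: among voluntary job switchers, the mean wage change of
# those taking on more of a disamenity bounds the market return from above,
# and the mean wage loss of those shedding it bounds it from below.
import pandas as pd

def disamenity_bounds(switches: pd.DataFrame) -> tuple:
    """`switches` has one row per voluntary job switch, with columns
    `dlog_wage` (log-wage change) and `d_disamenity` (+1 if the new job
    carries more of the disamenity, -1 if less); names are invented."""
    upper = switches.loc[switches["d_disamenity"] > 0, "dlog_wage"].mean()
    lower = -switches.loc[switches["d_disamenity"] < 0, "dlog_wage"].mean()
    return lower, upper
```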
Abstract:
Several studies have reported high performance of simple decision heuristics in multi-attribute decision making. In this paper, we focus on situations where attributes are binary and analyze the performance of Deterministic-Elimination-By-Aspects (DEBA) and similar decision heuristics. We consider non-increasing weights and two probabilistic models for the attribute values: one where attribute values are independent Bernoulli random variables, and one where they are binary random variables with positive inter-attribute correlations. Using these models, we show that the good performance of DEBA is explained by the presence of cumulative as opposed to simple dominance. We therefore introduce the concepts of cumulative dominance compliance and fully cumulative dominance compliance and show that DEBA satisfies both properties. We derive a lower bound on the probability with which cumulative dominance compliant heuristics choose a best alternative and show that, even with many attributes, this probability is not small. We also derive an upper bound on the expected loss of fully cumulative dominance compliant heuristics and show that this is moderate even when the number of attributes is large. Both bounds are independent of the values of the weights.
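The DEBA heuristic itself is simple to state; a minimal sketch for binary attributes follows, with attribute columns assumed pre-sorted by non-increasing weight.

```python
# A minimal sketch of Deterministic-Elimination-By-Aspects (DEBA) for
# binary attributes: scan attributes in decreasing weight order and, at
# each step, keep only the surviving alternatives that have a 1 on that
# attribute (if any do), until a single alternative survives.
import numpy as np

def deba(attrs):
    """`attrs` is an (alternatives x attributes) 0/1 array whose columns
    are ordered by non-increasing weight.  Returns the index of the
    chosen alternative (ties broken by lowest index)."""
    surviving = np.arange(attrs.shape[0])
    for k in range(attrs.shape[1]):
        has_aspect = surviving[attrs[surviving, k] == 1]
        if len(has_aspect) > 0:
            surviving = has_aspect
        if len(surviving) == 1:
            break
    return surviving[0]

choices = np.array([[1, 0, 1],
                    [1, 1, 0],
                    [0, 1, 1]])
print(deba(choices))  # -> 1: survives attribute 0, then wins attribute 1
```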
Abstract:
This paper studies the transaction cost savings of moving from a multi-currency exchange system to a single-currency one. The analysis concentrates exclusively on the transaction and precautionary demand for money and abstracts from any other motives to hold currency. A continuous-time, stochastic Baumol-like model similar to that in Frenkel and Jovanovic (1980) is generalized to include several currencies and calibrated to fit European data. The analysis implies an upper bound for the savings associated with reductions of transaction costs derived from the European Monetary Union of approximately 0.6% of the Community GDP. Additionally, the magnitudes of the brokerage fee and the volatility of transactions, whose estimation has traditionally been difficult to address empirically, are approximated for Europe.
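As a back-of-the-envelope illustration only (the paper's model is continuous-time and stochastic, not the textbook version), the classic Baumol square-root formula already shows how pooling currencies reduces transaction costs; all parameter values below are placeholders.

```python
# A hedged Baumol-style sketch: holding cost b*T/M + i*M/2 is minimized at
# M* = sqrt(2*b*T/i), giving minimized cost sqrt(2*b*T*i).  Pooling the
# transaction volumes of several currencies into one lowers the total.
from math import sqrt

def baumol_cost(b, T, i):
    """Minimized transaction + opportunity cost for transaction volume T,
    brokerage fee b, and interest rate i."""
    return sqrt(2.0 * b * T * i)

b, i = 5.0, 0.05
volumes = [1000.0, 800.0, 600.0]   # volume settled in each currency

multi = sum(baumol_cost(b, T, i) for T in volumes)   # separate balances
single = baumol_cost(b, sum(volumes), i)             # one pooled currency
print("illustrative transaction cost savings:", multi - single)
```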
Abstract:
We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean squared distortion of a vector quantizer designed from $n$ i.i.d. data points using any design algorithm is at least $\Omega(n^{-1/2})$ away from the optimal distortion for some distribution on a bounded subset of ${\cal R}^d$. Together with existing upper bounds, this result shows that the minimax distortion redundancy for empirical quantizer design, as a function of the size of the training data, is asymptotically on the order of $n^{-1/2}$. We also derive a new upper bound for the performance of the empirically optimal quantizer.
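A small sketch of what "empirically designed" means here, using plain k-means as the design algorithm (one of many) on a bounded support; the decay of the test distortion toward the optimum as $n$ grows is what the $n^{-1/2}$ minimax rate constrains.

```python
# A minimal sketch of empirical quantizer design: fit a k-point quantizer
# on n training points and measure its mean squared distortion on held-out
# data; parameters and the source distribution are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
test = rng.uniform(-1, 1, size=(50_000, 2))   # bounded support, as in the result

for n in (100, 1_000, 10_000):
    train = rng.uniform(-1, 1, size=(n, 2))
    km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(train)
    # squared distance from each test point to its nearest codepoint
    distortion = (km.transform(test).min(axis=1) ** 2).mean()
    print(n, distortion)
```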
Abstract:
Given $n$ independent replicates of a jointly distributed pair $(X,Y) \in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
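A highly simplified sketch of the two-part selection rule follows (the paper's empirical covers and penalty calibration are more delicate); all names, and the square-root penalty form, are illustrative assumptions.

```python
# A simplified sketch of complexity penalized selection: one candidate per
# model class is assumed already chosen from that class's empirical cover
# on the first half of the data; the final choice minimizes second-half
# empirical risk plus a complexity term grown from the cover size.
import numpy as np

def select_model(candidates, cover_sizes, x2, y2):
    """`candidates[k]` is a fitted rule (a callable) from class k's cover;
    `cover_sizes[k]` is that empirical cover's size; (x2, y2) is the
    second half of the sample."""
    n2 = len(x2)
    best_k, best_score = None, np.inf
    for k, f in enumerate(candidates):
        risk = np.mean((f(x2) - y2) ** 2)               # empirical risk
        penalty = np.sqrt(np.log(cover_sizes[k]) / n2)  # complexity estimate
        if risk + penalty < best_score:
            best_k, best_score = k, risk + penalty
    return best_k
```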
Abstract:
We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self-information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted average algorithm.
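The Bayesian weighted-average algorithm mentioned in the last sentence can be sketched for a finite class of experts: each expert is reweighted by the probability it assigned to the past sequence, and the mixture's regret under log loss against the best expert is at most $\log K$ for $K$ experts. The array layout below is an illustrative assumption.

```python
# A minimal sketch of the Bayesian weighted-average (mixture) predictor
# under log loss for a finite expert class.
import numpy as np

def bayes_mixture_loss(expert_probs):
    """`expert_probs[k, t]` is expert k's probability for the symbol that
    actually occurred at time t.  Returns the mixture's total log loss."""
    K, T = expert_probs.shape
    log_w = np.full(K, -np.log(K))        # uniform prior over experts
    total_loss = 0.0
    for t in range(T):
        # mixture probability of the observed symbol
        log_p = np.logaddexp.reduce(log_w + np.log(expert_probs[:, t]))
        total_loss += -log_p
        # posterior update: reweight each expert by its predictive probability
        log_w = log_w + np.log(expert_probs[:, t])
        log_w -= np.logaddexp.reduce(log_w)
    return total_loss
```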
Abstract:
Upper bounds for the Betti numbers of generalized Cohen-Macaulay ideals are given. In particular, for the case of non-degenerate, reduced and irreducible projective curves we get an upper bound which only depends on their degree.
Abstract:
The purpose of this paper is twofold. First, we give an upper bound on the order of a multisecant line to an integral arithmetically Cohen-Macaulay subscheme in $\mathbb{P}^n$ of codimension two in terms of the Hilbert function. Secondly, we give an explicit description of the singular locus of the blow-up of an arbitrary local ring at a complete intersection ideal. This description is used to refine the standard linking theorems. These results are tied together by the construction of sharp examples for the bound, which uses the linking theorems.
Abstract:
Using the experimental values of the chemical potentials of liquid $^4$He and of a $^3$He impurity in liquid $^4$He, we derive a model-independent lower (upper) bound to the kinetic (potential) energy per particle at zero temperature. The values of the bounds at the experimental saturation density are 13.42 K for the kinetic energy and -20.59 K for the potential energy. All the theoretical calculations based on the Lennard-Jones potential violate the upper-bound condition for the potential energy.
Abstract:
A new coding technique to be used in steganography is evaluated. The performance of this new technique is computed, and comparisons with the well-known theoretical upper bound, the Hamming upper bound, and basic LSB are established.
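One standard benchmark in this space (offered here only as an illustration of the Hamming-bound comparison, not as the paper's technique) is Hamming matrix embedding, which hides $p$ message bits in $2^p - 1$ cover bits while changing at most one of them.

```python
# A minimal sketch of Hamming matrix embedding with p = 3: 3 message bits
# are hidden in 7 cover bits by flipping at most one bit.
import numpy as np

p = 3
n = 2 ** p - 1
# parity-check matrix whose columns are the numbers 1..n in binary
H = np.array([[(j >> k) & 1 for j in range(1, n + 1)] for k in range(p)])

def embed(cover, message):
    """Flip at most one bit of `cover` so its syndrome H @ cover mod 2
    equals the p-bit `message`."""
    syndrome = (H @ cover - message) % 2
    col = int("".join(map(str, syndrome[::-1])), 2)  # 1-based column index
    stego = cover.copy()
    if col:                     # all-zero syndrome means no change needed
        stego[col - 1] ^= 1
    return stego

def extract(stego):
    return (H @ stego) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([1, 0, 1])
stego = embed(cover, msg)
assert (extract(stego) == msg).all() and (stego != cover).sum() <= 1
```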
Abstract:
By modifying a domain first suggested by Ruth Goodman in 1935 and by exploiting the explicit solution by Fedorov of the Pólya-Chebotarev problem in the case of four symmetrically placed points, an improved upper bound for the univalent Bloch-Landau constant is obtained. The domain that leads to this improved bound takes the form of a disk from which some arcs are removed in such a way that the resulting simply connected domain is harmonically symmetric in each arc with respect to the origin. The existence of domains of this type is established using techniques from conformal welding, and some general properties of harmonically symmetric arcs in this setting are derived.