911 results for Bounds


Relevance:

10.00%

Publisher:

Abstract:

Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity-penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near-optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
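To fix ideas, here is a minimal Python sketch of the two-part procedure described above. The cover construction, the squared-error loss, and the log-cardinality penalty are illustrative placeholders (the paper's actual complexity penalty and covering-radius optimization are not reproduced), and model classes are assumed to be finite lists of candidate functions.

```python
import numpy as np

def empirical_cover(model_class, X1, radius):
    """Greedily thin a finite list of candidate functions to a radius-net
    under the empirical L1 distance on the first data half X1."""
    cover, cover_preds = [], []
    for f in model_class:
        p = np.array([f(x) for x in X1])
        if all(np.mean(np.abs(p - q)) > radius for q in cover_preds):
            cover.append(f)
            cover_preds.append(p)
    return cover

def select_rule(model_classes, radii, X1, X2, y2):
    """Two-part procedure: cover each class on the first half of the data,
    pick the empirical-risk minimizer from each cover on the second half,
    then minimize (complexity + empirical risk) over the candidates."""
    candidates = []
    for cls, r in zip(model_classes, radii):
        cover = empirical_cover(cls, X1, r)
        risks = [np.mean((np.array([f(x) for x in X2]) - y2) ** 2)
                 for f in cover]
        best = int(np.argmin(risks))
        complexity = np.log(len(cover) + 1) / len(y2)  # illustrative penalty
        candidates.append((risks[best] + complexity, cover[best]))
    return min(candidates, key=lambda t: t[0])[1]
```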

Relevance:

10.00%

Publisher:

Abstract:

We derive a new inequality for uniform deviations of averages from their means. The inequality is a common generalization of previous results of Vapnik and Chervonenkis (1974) and Pollard (1986). Using the new inequality we obtain tight bounds for empirical loss minimization learning.
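For orientation, the classical inequality of this type is the Vapnik–Chervonenkis bound (one of the prior results being generalized here, not the paper's new inequality):

```latex
% Classical VC uniform deviation bound, stated for orientation only.
\[
  \Pr\left\{ \sup_{f \in \mathcal{F}}
  \left| \frac{1}{n} \sum_{i=1}^{n} f(X_i) - \mathbb{E}\, f(X) \right|
  > \epsilon \right\}
  \;\le\; 8\, S_{\mathcal{F}}(n)\, e^{-n\epsilon^2/32},
\]
% for a class \mathcal{F} of indicator functions with shatter
% coefficient S_{\mathcal{F}}(n).
```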

Relevance:

10.00%

Publisher:

Abstract:

We address the performance optimization problem in a single-station multiclass queueing network with changeover times by means of the achievable region approach. This approach seeks to obtain performance bounds and scheduling policies from the solution of a mathematical program over a relaxation of the system's performance region. Relaxed formulations (including linear, convex, nonconvex and positive semidefinite constraints) of this region are developed by formulating equilibrium relations satisfied by the system, with the help of Palm calculus. Our contributions include: (1) new constraints formulating equilibrium relations on server dynamics; (2) a flow conservation interpretation of the constraints previously derived by the potential function method; (3) new positive semidefinite constraints; (4) new work decomposition laws for single-station multiclass queueing networks, which yield new convex constraints; (5) a unified buffer occupancy method of performance analysis obtained from the constraints; (6) heuristic scheduling policies from the solution of the relaxations.
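As a small illustration of the achievable region approach, in the simpler setting of a multiclass M/G/1 queue without changeover times, the sketch below computes a lower bound on a weighted mean waiting time by linear programming over Coffman–Mitrani-type conservation-law constraints. All numbers are made up, and this is not the paper's formulation.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Illustrative two-class M/G/1 data (not from the paper)
lam = np.array([0.3, 0.4])   # arrival rates
ES  = np.array([1.0, 0.8])   # mean service times
ES2 = np.array([2.0, 1.5])   # second moments of service times
cost = np.array([3.0, 1.0])  # holding costs on mean waiting times
rho = lam * ES
n = len(lam)

# Conservation-law constraints: for each proper subset S of classes,
# sum_{i in S} rho_i W_i >= rho(S) * W0(S) / (1 - rho(S)),
# with equality over the full set (work conservation).
A_ub, b_ub = [], []
for k in range(1, n):
    for S in itertools.combinations(range(n), k):
        S = list(S)
        rho_S = rho[S].sum()
        w0_S = (lam[S] * ES2[S]).sum() / 2
        row = np.zeros(n)
        row[S] = rho[S]
        A_ub.append(-row)                      # flip sign for <= form
        b_ub.append(-rho_S * w0_S / (1 - rho_S))
rho_N, w0_N = rho.sum(), (lam * ES2).sum() / 2
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              A_eq=[rho], b_eq=[rho_N * w0_N / (1 - rho_N)])
print("LP lower bound on the weighted mean waiting time:", res.fun)
```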

Relevance:

10.00%

Publisher:

Abstract:

We develop a mathematical programming approach for the classical PSPACE-hard restless bandit problem in stochastic optimization. We introduce a hierarchy of n (where n is the number of bandits) increasingly stronger linear programming relaxations, the last of which is exact and corresponds to the (exponential size) formulation of the problem as a Markov decision chain, while the other relaxations provide bounds and are efficiently computed. We also propose a priority-index heuristic scheduling policy from the solution to the first-order relaxation, where the indices are defined in terms of optimal dual variables. In this way we propose a policy and a suboptimality guarantee. We report results of computational experiments that suggest that the proposed heuristic policy is nearly optimal. Moreover, the second-order relaxation is found to provide strong bounds on the optimal value.
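The priority-index policy itself is simple to state once indices are available. A toy sketch follows; the index values are arbitrary, and computing them from the dual variables of the first-order relaxation is the substantive step not shown here.

```python
import numpy as np

def priority_index_policy(indices, states, m):
    """Activate the m projects whose current states carry the largest index.
    indices[i][s] is a precomputed index for project i in state s -- assumed
    given here, e.g. read off the optimal dual variables of the first-order
    LP relaxation (that computation is not shown)."""
    current = np.array([indices[i][s] for i, s in enumerate(states)])
    return np.argsort(current)[-m:]  # the m projects to set active

# Toy usage: 4 projects with 2 states each, activate m = 2 per period.
idx = [{0: 0.1, 1: 0.7}, {0: 0.4, 1: 0.2}, {0: 0.9, 1: 0.3}, {0: 0.0, 1: 0.5}]
print(priority_index_policy(idx, [1, 0, 0, 1], m=2))  # -> [0 2]
```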

Relevance:

10.00%

Publisher:

Abstract:

The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on minimizing the empirical risk over a carefully selected "skeleton" of the class of regression functions. The skeleton is a covering of the class based on a data-dependent metric, especially fitted for classification. A new scale-sensitive dimension is introduced which is more useful for the studied classification problem than other, previously defined, dimension measures. This fact is demonstrated by performance bounds for the skeleton estimate in terms of the new dimension.
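A loose sketch of the skeleton idea in Python; the data-dependent metric is taken to be the empirical L1 distance and the candidate class is assumed finite, both simplifications relative to the paper.

```python
import numpy as np

def skeleton_erm(candidates, X, y, radius):
    """Minimize empirical 0-1 risk over a data-dependent 'skeleton':
    greedily thin a (finite, assumed) list of regression-function
    candidates to a radius-net under the empirical L1 metric
    d(f, g) = mean_i |f(X_i) - g(X_i)|, then return the net element
    whose thresholded predictions misclassify the fewest labels."""
    preds = [np.array([f(x) for x in X]) for f in candidates]
    keep = []
    for i, p in enumerate(preds):
        if all(np.mean(np.abs(p - preds[j])) > radius for j in keep):
            keep.append(i)
    risks = [np.mean((preds[i] >= 0.5).astype(int) != y) for i in keep]
    return candidates[keep[int(np.argmin(risks))]]
```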

Relevance:

10.00%

Publisher:

Abstract:

Precise estimation of propagation parameters in precipitation media is of interest to improve the performance of communications systems and in remote sensing applications. In this paper, we present maximum-likelihood estimators of specific attenuation and specific differential phase in rain. The model used for obtaining the cited estimators assumes coherent propagation, reflection symmetry of the medium, and Gaussian statistics of the scattering matrix measurements. No assumptions about the microphysical properties of the medium are needed. The performance of the estimators is evaluated through simulated data. Results show negligible estimator bias and variances close to Cramér–Rao bounds.
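As one concrete piece of the setting: under Gaussian statistics, a differential phase can be estimated as the argument of the sample cross-correlation of the co-polar channels. The sketch below illustrates this standard estimator and is not claimed to be the paper's exact ML estimator.

```python
import numpy as np

def differential_phase_estimate(s_hh, s_vv):
    """Estimate differential propagation phase from co-polar scattering
    measurements as the argument of the sample cross-correlation.
    A standard estimator under Gaussian statistics, shown here only to
    illustrate the setting, not taken from the paper itself."""
    corr = np.mean(s_vv * np.conj(s_hh))
    return np.angle(corr)  # radians

# Toy check against simulated data with a known 0.2 rad differential phase.
rng = np.random.default_rng(0)
z = rng.normal(size=1000) + 1j * rng.normal(size=1000)
noise = 0.1 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
print(differential_phase_estimate(z, z * np.exp(1j * 0.2) + noise))
```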

Relevance:

10.00%

Publisher:

Abstract:

Projective homography sits at the heart of many problems in image registration. In addition to many methods for estimating the homography parameters (R.I. Hartley and A. Zisserman, 2000), analytical expressions to assess the accuracy of the transformation parameters have been proposed (A. Criminisi et al., 1999). We show that these expressions provide less accurate bounds than those based on the earlier results of Weng et al. (1989). The discrepancy becomes more critical in applications involving the integration of frame-to-frame homographies and their uncertainties, as in the reconstruction of terrain mosaics and the camera trajectory from flyover imagery. We demonstrate these issues through selected examples.
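The integration of frame-to-frame homographies mentioned above rests on first-order covariance propagation through composition. A generic numerical sketch follows; the 9-vector parameterization and function names are assumptions, not taken from the cited papers.

```python
import numpy as np

def compose_covariance(H1, S1, H2, S2, eps=1e-6):
    """First-order propagation of 9x9 homography-parameter covariances
    through the composition H = H2 @ H1, via numerical Jacobians.
    Illustrative sketch only: row-major 9-vector parameterization,
    no projective normalization."""
    def compose(h1, h2):
        return (h2.reshape(3, 3) @ h1.reshape(3, 3)).ravel()
    h1, h2 = H1.ravel(), H2.ravel()
    J1, J2 = np.zeros((9, 9)), np.zeros((9, 9))
    for k in range(9):
        d = np.zeros(9)
        d[k] = eps
        J1[:, k] = (compose(h1 + d, h2) - compose(h1 - d, h2)) / (2 * eps)
        J2[:, k] = (compose(h1, h2 + d) - compose(h1, h2 - d)) / (2 * eps)
    return J1 @ S1 @ J1.T + J2 @ S2 @ J2.T  # covariance of the composition
```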

Relevance:

10.00%

Publisher:

Abstract:

In the context of cooperative TU-games, and given an order of players, we consider the problem of distributing the worth of the grand coalition as a sequential decision problem. In each step of the process, upper and lower bounds for the payoff of the players are required, related to successive reduced games. Sequentially compatible payoffs are defined as those allocation vectors that meet these recursive bounds. The core of the game is reinterpreted as a set of sequentially compatible payoffs when the Davis-Maschler reduced game is considered (Th. 1). Independently of the reduction, the core turns out to be the intersection of the family of the sets of sequentially compatible payoffs corresponding to the different possible orderings (Th. 2), so it is in some sense order-independent. Finally, we analyze advantageous properties for the first player.
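For concreteness, here is a definition-based core membership check in Python. The paper's sequential-bounds characterization is not reproduced; the game below is a made-up 3-player example.

```python
from itertools import combinations

def in_core(v, x, tol=1e-9):
    """Check whether allocation x lies in the core of a TU-game.
    v maps frozensets of players (0..n-1) to coalition worth."""
    n = len(x)
    if abs(sum(x) - v[frozenset(range(n))]) > tol:  # efficiency
        return False
    for k in range(1, n):
        for S in combinations(range(n), k):         # coalitional rationality
            if sum(x[i] for i in S) < v[frozenset(S)] - tol:
                return False
    return True

# Illustrative 3-player game: pairs are worth 2, the grand coalition 4.
v = {frozenset(s): w for s, w in
     [((), 0), ((0,), 0), ((1,), 0), ((2,), 0),
      ((0, 1), 2), ((0, 2), 2), ((1, 2), 2), ((0, 1, 2), 4)]}
print(in_core(v, [4 / 3, 4 / 3, 4 / 3]))  # True: the equal split is in the core
```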

Relevance:

10.00%

Publisher:

Abstract:

The As Pontes basin (12 km²), NW Iberian Peninsula, is bounded by a double restraining bend of a dextral strike-slip fault, which is related to the western onshore end of the Pyrenean belt. Surface and subsurface data obtained from intensive coal exploration and mining in the basin since the 1960s together with additional structural and stratigraphic sequence analysis allowed us to determine the geometric relationships between tectonic structures and stratigraphic markers. The small size of the basin and the large amount of quality data make the As Pontes basin a unique natural laboratory for improving our understanding of the origin and evolution of restraining bends. The double restraining bend is the end stage of the structural evolution of a compressive underlapping stepover, where the basin was formed. During the first stage (stepover stage), which began ca. 30 Ma ago (latest Rupelian) and lasted 3.4 My, two small isolated basins bounded by thrusts and normal faults were formed. For 1.3 My, the strike-slip faults, which defined the stepover, grew towards each other until joining and forming the double restraining bend, which bounds a single large As Pontes basin (transition stage). The history of the basin was controlled by the activity of the double restraining bend for a further 3.4 My (restraining bend stage) and ended in mid-Aquitanian times (ca. 22 Ma).

Relevance:

10.00%

Publisher:

Abstract:

In the quest to completely describe entanglement in the general case of a finite number of parties sharing a physical system of finite-dimensional Hilbert space, an entanglement magnitude is introduced for its pure and mixed states: robustness. It corresponds to the minimal amount of mixing with locally prepared states which washes out all entanglement. It quantifies in a sense the endurance of entanglement against noise and jamming. Its properties are studied comprehensively. Analytical expressions for the robustness are given for pure states of two-party systems, and analytical bounds for mixed states of two-party systems. Specific results are obtained mainly for the qubit-qubit system (qubit denotes quantum bit). As by-products, local pseudomixtures are generalized, a lower bound for the relative volume of separable states is deduced, and arguments for considering convexity a necessary condition of any entanglement measure are put forward.
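In its standard formulation (stated here for orientation, with $\mathcal{S}$ the set of separable states), the robustness is the least amount of separable mixing that renders the state separable:

```latex
% Robustness of entanglement, standard form: the minimal separable
% noise that washes out all entanglement of \rho.
\[
  R(\rho) \;=\; \min_{\sigma \in \mathcal{S}}\;
  \min\left\{ s \ge 0 \;:\;
  \frac{\rho + s\,\sigma}{1 + s} \in \mathcal{S} \right\}
\]
```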

Relevance:

10.00%

Publisher:

Abstract:

This report presents systematic empirical annotation of transcript products from 399 annotated protein-coding loci across the 1% of the human genome targeted by the Encyclopedia of DNA Elements (ENCODE) pilot project using a combination of 5' rapid amplification of cDNA ends (RACE) and high-density resolution tiling arrays. We identified previously unannotated and often tissue- or cell-line-specific transcribed fragments (RACEfrags), both 5' distal to the annotated 5' terminus and internal to the annotated gene bounds, for the vast majority (81.5%) of the tested genes. Half of the distal RACEfrags span large segments of genomic sequences away from the main portion of the coding transcript and often overlap with the upstream-annotated gene(s). Notably, at least 20% of the resultant novel transcripts have changes in their open reading frames (ORFs), most of them fusing ORFs of adjacent transcripts. A significant fraction of distal RACEfrags show expression levels comparable to those of known exons of the same locus, suggesting that they are not part of very minority splice forms. These results have significant implications concerning (1) our current understanding of the architecture of protein-coding genes; (2) our views on locations of regulatory regions in the genome; and (3) the interpretation of sequence polymorphisms mapping to regions hitherto considered to be "noncoding," ultimately relating to the identification of disease-related sequence alterations.

Relevance:

10.00%

Publisher:

Abstract:

We estimate the attainable limits on the coupling of a nonstandard Higgs boson to two photons taking into account the data collected by the Fermilab collaborations on diphoton events. We base our analysis on a general set of dimension-6 effective operators that give rise to anomalous couplings in the bosonic sector of the standard model. If the coefficients of all blind operators have the same magnitude, indirect bounds on the anomalous triple vector-boson couplings can also be inferred, provided there is no large cancellation in the Higgs-gamma-gamma coupling.
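The general structure of such an analysis can be indicated schematically; the form below is an illustrative effective Lagrangian, not the paper's specific operator basis.

```latex
% Schematic dimension-6 effective Lagrangian; \Lambda is the
% new-physics scale and the O_i are SU(2)_L x U(1)_Y invariant
% operators built from the Higgs doublet and gauge field strengths.
\[
  \mathcal{L}_{\text{eff}} \;=\; \mathcal{L}_{\text{SM}}
  \;+\; \sum_i \frac{f_i}{\Lambda^2}\, \mathcal{O}_i
\]
```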

Relevance:

10.00%

Publisher:

Abstract:

We estimate the attainable limits on the coefficients of dimension-6 operators from the analysis of Higgs boson phenomenology, in the framework of an $SU(2)_L \times U(1)_Y$ gauge-invariant effective Lagrangian. Our results, based on the data sample already collected by the collaborations at the Fermilab Tevatron, show that the coefficients of Higgs-vector boson couplings can be determined with unprecedented accuracy. Assuming that the coefficients of all blind operators are of the same magnitude, we are also able to impose more restrictive bounds on the anomalous vector-boson triple couplings than the present limit from double gauge boson production at the Tevatron collider.

Relevance:

10.00%

Publisher:

Abstract:

Executive Summary

The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broad scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds.

Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one.

Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than the realized returns of portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov–Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the proposed way of aggregating performance measures leads to portfolio realized returns that first-order stochastically dominate those resulting from optimization with respect to only, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e., the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio returns distribution that second-order stochastically dominates those of virtually all individual performance measures considered.

Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
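
A compact sketch of the second-order stochastic dominance check described above; the quantile grid and function names are illustrative.

```python
import numpy as np

def absolute_lorenz(returns, quantiles):
    """Absolute Lorenz curve: L(p) = cumulative sum of the worst
    p-fraction of returns, divided by the sample size (an empirical
    version of the integrated quantile function)."""
    r = np.sort(np.asarray(returns))
    n = len(r)
    return np.array([r[: max(int(np.ceil(p * n)), 1)].sum() / n
                     for p in quantiles])

def second_order_dominates(ret_a, ret_b, grid=np.linspace(0.01, 1.0, 100)):
    """Pointwise test: A second-order stochastically dominates B when A's
    absolute Lorenz curve lies (weakly) above B's on the whole grid."""
    return bool(np.all(absolute_lorenz(ret_a, grid)
                       >= absolute_lorenz(ret_b, grid)))
```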

Relevance:

10.00%

Publisher:

Abstract:

We evaluate the probability that a loop of string that has spontaneously nucleated during inflation will form a black hole upon collapse, after the end of inflation. We then use the observational bounds on the density of primordial black holes to put constraints on the parameters of the model. Combining these constraints with current upper limits on the expansion rate during inflation, we conclude that the density of black holes formed from nucleating strings is too low to be observed. Also, constraints on domain wall nucleation and monopole pair production during inflation are briefly discussed.