866 results for Lower bounds
Abstract:
The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
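A minimal illustration of the guessing strategy described in this abstract (exact brute force on a toy first-order model, not the paper's approximation scheme; the alphabet, letter probabilities, and word length below are made-up values):

```python
from itertools import product

# Toy first-order model: i.i.d. letters with assumed (hypothetical) probabilities.
letter_probs = {"a": 0.5, "b": 0.3, "c": 0.2}
word_length = 4

# Probability of a word is the product of its letter probabilities.
words = []
for letters in product(letter_probs, repeat=word_length):
    p = 1.0
    for ch in letters:
        p *= letter_probs[ch]
    words.append(("".join(letters), p))

# Guess words in decreasing order of probability (the strategy in the abstract).
words.sort(key=lambda wp: wp[1], reverse=True)

# Average number of guesses = sum over words of (rank * probability of that word).
avg_guesses = sum(rank * p for rank, (_, p) in enumerate(words, start=1))
print(f"{len(words)} words, average number of guesses: {avg_guesses:.3f}")
```

For realistic alphabet and word sizes this enumeration is infeasible, which is exactly why the abstract discusses approximations and asymptotic expansions.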
Abstract:
We develop a framework for proving approximation limits of polynomial size linear programs (LPs) from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any LP as opposed to only programs generated by hierarchies. Using our framework, we prove that $O(n^{1/2-\epsilon})$-approximations for CLIQUE require LPs of size $2^{n^{\Omega(\epsilon)}}$. This lower bound applies to LPs using a certain encoding of CLIQUE as a linear optimization problem. Moreover, we establish a similar result for approximations of semidefinite programs by LPs. Our main technical ingredient is a quantitative improvement of Razborov's [38] rectangle corruption lemma for the high error regime, which gives strong lower bounds on the nonnegative rank of shifts of the unique disjointness matrix.
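For background on the quantity driving these lower bounds, here are the standard definitions of nonnegative rank and its link to extension complexity via Yannakakis' factorization theorem; this is textbook context, not the paper's specific framework:

```latex
% Nonnegative rank of a matrix M \in \mathbb{R}_{\ge 0}^{m \times n}:
% the smallest r admitting a factorization into nonnegative factors.
\[
  \operatorname{rank}_+(M) \;=\; \min \{\, r : M = UV,\;
     U \in \mathbb{R}_{\ge 0}^{m \times r},\; V \in \mathbb{R}_{\ge 0}^{r \times n} \,\}.
\]
% Yannakakis (1991): for a polytope P with slack matrix S_P, the smallest number
% of inequalities in any extended formulation of P equals rank_+(S_P).
\[
  \operatorname{xc}(P) \;=\; \operatorname{rank}_+(S_P).
\]
```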
Abstract:
We develop a posteriori error estimation for interior penalty discontinuous Galerkin discretizations of H(curl)-elliptic problems that arise in eddy current models. Computable upper and lower bounds on the error, measured in terms of a natural (mesh-dependent) energy norm, are derived. The proposed a posteriori error estimator is validated by numerical experiments, illustrating its reliability and efficiency for a range of test problems.
Abstract:
We develop the a posteriori error analysis of hp-version interior-penalty discontinuous Galerkin finite element methods for a class of second-order quasilinear elliptic partial differential equations. Computable upper and lower bounds on the error are derived in terms of a natural (mesh-dependent) energy norm. The bounds are explicit in the local mesh size and the local degree of the approximating polynomial. The performance of the proposed estimators within an automatic hp-adaptive refinement procedure is studied through numerical experiments.
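Both discontinuous Galerkin abstracts above refer to computable upper and lower error bounds in an energy norm; as generic context (notation assumed here, not taken from either paper), such reliability and efficiency estimates typically take the two-sided form:

```latex
% Generic a posteriori reliability (upper) and efficiency (lower) bounds:
% u is the exact solution, u_h the discrete solution, \eta the computable estimator
% assembled from local indicators, \Theta a data-oscillation term, and
% C_1, C_2 constants independent of the mesh size.
\[
  C_1 \,\eta \;-\; \Theta
  \;\le\;
  \| u - u_h \|_{E}
  \;\le\;
  C_2 \,\eta \;+\; \Theta ,
  \qquad
  \eta^2 = \sum_{K \in \mathcal{T}_h} \eta_K^2 .
\]
```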
Abstract:
Process systems design, operation and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. These problems, with a scenario-based formulation, lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), where Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. This method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD, which includes multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, called multicolumn-multicut CD, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD at the first level and DWD at a second level are used to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or over solving the monolithic problem with an MILP solver.

The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality, without the need for explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved in a single framework. The relaxed master problem, obtained from a reformulation of the original problem, is solved only when necessary. A convexification of the relaxed master problem and a domain reduction procedure are integrated into the decomposition framework to improve solution efficiency. Case studies taken from renewable-resource and fossil-fuel based applications in process systems engineering show that these novel decomposition approaches have significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
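To make the alternating upper/lower bounding pattern concrete, here is a minimal self-contained Benders-style loop on a made-up toy problem with one binary first-stage decision and a closed-form recourse; the data and function names are illustrative assumptions, not taken from the thesis:

```python
# Toy two-stage data (illustrative assumptions): one binary first-stage variable x
# with fixed cost f, and a recourse problem min{q*y : y >= d - a*x, y >= 0}.
f, q, d, a = 1.0, 4.0, 3.0, 2.0

def recourse(x):
    """Solve the recourse subproblem in closed form.
    Returns its optimal value and an optimal dual multiplier for the Benders cut."""
    slack = d - a * x
    if slack > 0:
        return q * slack, q   # constraint y >= d - a*x is tight, dual = q
    return 0.0, 0.0           # constraint is loose, dual = 0

cuts = []                 # dual multipliers lam defining cuts: theta >= lam*(d - a*x)
upper = float("inf")

for it in range(10):
    # Relaxed master problem: enumerate x in {0, 1}; theta is bounded below
    # by 0 and by every Benders cut collected so far.
    candidates = []
    for x in (0, 1):
        theta = max([0.0] + [lam * (d - a * x) for lam in cuts])
        candidates.append((f * x + theta, x))
    lower, x_hat = min(candidates)           # master optimum is a lower bound

    sub_val, lam = recourse(x_hat)
    upper = min(upper, f * x_hat + sub_val)  # feasible first-stage choice gives an upper bound
    print(f"iteration {it}: x = {x_hat}, lower = {lower:.3f}, upper = {upper:.3f}")
    if upper - lower < 1e-9:
        break
    cuts.append(lam)
```

The loop terminates when the upper and lower bound sequences meet, which mirrors the convergence criterion used by the decomposition methods described above.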
Abstract:
Justification Logic studies epistemic and provability phenomena by introducing justifications/proofs into the language in the form of justification terms. Pure justification logics serve as counterparts of traditional modal epistemic logics, and hybrid logics combine epistemic modalities with justification terms. The computational complexity of pure justification logics is typically lower than that of the corresponding modal logics. Moreover, the so-called reflected fragments, which still contain complete information about the respective justification logics, are known to be in NP for a wide range of justification logics, pure and hybrid alike. This paper shows that, under reasonable additional restrictions, these reflected fragments are NP-complete, thereby proving a matching lower bound. The proof method is then extended to provide a uniform proof that the corresponding full pure justification logics are $\Pi^p_2$-hard, reproving and generalizing an earlier result by Milnikel.
Abstract:
Let $G = (V, E)$ be a finite, simple and undirected graph. For $S \subseteq V$, let $\delta(S, G) = \{(u, v) \in E : u \in S \text{ and } v \in V - S\}$ be the edge boundary of $S$. Given an integer $i$, $1 \le i \le |V|$, let the edge isoperimetric value of $G$ at $i$ be defined as $b_e(i, G) = \min_{S \subseteq V : |S| = i} |\delta(S, G)|$. The edge isoperimetric peak of $G$ is defined as $b_e(G) = \max_{1 \le j \le |V|} b_e(j, G)$. Let $b_v(G)$ denote the vertex isoperimetric peak defined in a corresponding way. The problem of determining a lower bound for the vertex isoperimetric peak in complete $t$-ary trees was recently considered in [Y. Otachi, K. Yamazaki, A lower bound for the vertex boundary-width of complete $k$-ary trees, Discrete Mathematics, in press (doi:10.1016/j.disc.2007.05.014)]. In this paper we provide bounds which improve those in the above cited paper. Our results can be generalized to arbitrary (rooted) trees. The depth $d$ of a tree is the number of nodes on the longest path starting from the root and ending at a leaf. In this paper we show that for a complete binary tree of depth $d$ (denoted as $T_d^2$), $c_1 d \le b_e(T_d^2) \le d$ and $c_2 d \le b_v(T_d^2) \le d$, where $c_1, c_2$ are constants. For a complete $t$-ary tree of depth $d$ (denoted as $T_d^t$) and $d \ge c \log t$, where $c$ is a constant, we show that $c_1 \sqrt{t}\, d \le b_e(T_d^t) \le t d$ and $c_2 d / \sqrt{t} \le b_v(T_d^t) \le d$, where $c_1, c_2$ are constants. At the heart of our proof we have the following theorem, which works for an arbitrary rooted tree and not just for a complete $t$-ary tree. Let $T = (V, E, r)$ be a finite, connected and rooted tree, the root being the vertex $r$. Define a weight function $w : V \to \mathbb{N}$, where the weight $w(u)$ of a vertex $u$ is the number of its successors (including itself), and let the weight index $\eta(T)$ be defined as the number of distinct weights in the tree, i.e. $\eta(T) = |\{w(u) : u \in V\}|$. For a positive integer $k$, let $\ell(k) = |\{i \in \mathbb{N} : 1 \le i \le |V|, b_e(i, G) \le k\}|$. We show that $\ell(k) \le 2\binom{2\eta + k}{k}$.
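A small brute-force illustration of the quantities defined in this abstract (a toy check, not the paper's proof technique): the snippet below computes $b_e(i, G)$ for every $i$ and the edge isoperimetric peak of a complete binary tree of depth 3 by enumerating vertex subsets.

```python
from itertools import combinations

# Complete binary tree of depth 3 (7 vertices), built as an explicit edge list.
# Depth counts the nodes on the root-to-leaf path, as in the abstract.
depth = 3
vertices = list(range(2 ** depth - 1))
edges = ([(v, 2 * v + 1) for v in vertices if 2 * v + 1 < len(vertices)]
         + [(v, 2 * v + 2) for v in vertices if 2 * v + 2 < len(vertices)])

def edge_boundary(S):
    """Number of edges with exactly one endpoint in the vertex set S."""
    S = set(S)
    return sum((u in S) != (v in S) for u, v in edges)

# b_e(i, G): minimum edge boundary over all vertex subsets of size i.
b_e = {i: min(edge_boundary(S) for S in combinations(vertices, i))
       for i in range(1, len(vertices) + 1)}

# Edge isoperimetric peak b_e(G): maximum of b_e(i, G) over all i.
print("b_e(i, G):", b_e)
print("edge isoperimetric peak:", max(b_e.values()))
```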
Abstract:
We obtain stringent bounds in the $\langle r^2 \rangle^{K\pi}_S$-$c$ plane, where these are the scalar radius and the curvature parameters of the scalar $K\pi$ form factor, respectively, using analyticity and dispersion relation constraints, and the knowledge of the form factor at the well-known Callan-Treiman point $m_K^2 - m_\pi^2$, as well as at $m_\pi^2 - m_K^2$, which we call the second Callan-Treiman point. The central values of these parameters from a recent determination are accommodated in the allowed region provided the higher loop corrections to the value of the form factor at the second Callan-Treiman point reduce the one-loop result by about 3% with $F_K/F_\pi = 1.21$. Such a variation in magnitude at the second Callan-Treiman point yields $0.12\ \mathrm{fm}^2 \lesssim \langle r^2 \rangle^{K\pi}_S \lesssim 0.21\ \mathrm{fm}^2$ and $0.56\ \mathrm{GeV}^{-4} \lesssim c \lesssim 1.47\ \mathrm{GeV}^{-4}$, and a strong correlation between them. A smaller value of $F_K/F_\pi$ shifts both bounds to lower values.
Abstract:
The max-coloring problem is to compute a legal coloring of the vertices of a graph $G = (V, E)$ with a non-negative weight function $w$ on $V$ such that $\sum_{i=1}^{k} \max_{v \in C_i} w(v)$ is minimized, where $C_1, \ldots, C_k$ are the various color classes. Max-coloring general graphs is as hard as the classical vertex coloring problem, a special case where vertices have unit weight. In fact, in some cases it can even be harder: for example, no polynomial time algorithm is known for max-coloring trees. In this paper we consider the problem of max-coloring paths and its generalization, max-coloring a broad class of trees, and show it can be solved in time $O(|V| + \text{time for sorting the vertex weights})$. When vertex weights belong to $\mathbb{R}$, we show a matching lower bound of $\Omega(|V| \log |V|)$ in the algebraic computation tree model.
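To illustrate the objective being minimized (a brute-force check on a toy weighted path, not the near-linear-time algorithm of the paper; the weights below are made-up values):

```python
from itertools import product

# Toy instance: a path v0 - v1 - v2 - v3 - v4 with assumed vertex weights.
weights = [5, 1, 4, 1, 3]
n = len(weights)
edges = [(i, i + 1) for i in range(n - 1)]
max_colors = 3  # a path is 2-colorable, but extra color classes can sometimes lower the cost

best = None
for coloring in product(range(max_colors), repeat=n):
    # Keep only proper colorings: adjacent vertices get different colors.
    if any(coloring[u] == coloring[v] for u, v in edges):
        continue
    # Max-coloring objective: sum over used classes of the heaviest vertex in that class.
    used = set(coloring)
    cost = sum(max(weights[v] for v in range(n) if coloring[v] == c) for c in used)
    if best is None or cost < best[0]:
        best = (cost, coloring)

print("minimum max-coloring cost:", best[0], "with coloring", best[1])
```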
Abstract:
Transfer function coefficients (TFC) are widely used to test linear analog circuits for parametric and catastrophic faults. This paper presents closed-form expressions for an upper bound on the defect level (DL) and a lower bound on the fault coverage (FC) achievable in the TFC-based test method. The computed bounds have been tested and validated on several benchmark circuits. Further, application of these bounds to scalable RC ladder networks reveals a number of interesting characteristics. The approach adopted here is general and can be extended to find bounds on the DL and FC of other parametric test methods for linear and non-linear circuits.
Abstract:
Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. For the first area we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. Regarding that aspect, we study the network codes constructed with finite groups, and in particular show that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. For the second area, we study the impact of memory on the channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each time the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. In this work we use techniques from channels with side information and finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system we study the pairwise error probabilities of the input sequences.
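For reference, the Ingleton inequality mentioned in this abstract, in its standard entropy form for four jointly distributed random variables (the standard statement, not the dissertation's group-theoretic reformulation):

```latex
% Ingleton inequality for four jointly distributed random variables X_1, ..., X_4,
% written with mutual informations:
\[
  I(X_1; X_2) \;\le\; I(X_1; X_2 \mid X_3) + I(X_1; X_2 \mid X_4) + I(X_3; X_4).
\]
% Entropy vectors induced by linear network codes satisfy this inequality,
% while general entropy vectors need not.
```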
Abstract:
Let $A$ and $B$ be nonsingular M-matrices. A lower bound on the minimum eigenvalue $q(B \circ A^{-1})$ for the Hadamard product of $A^{-1}$ and $B$, and a lower bound on the minimum eigenvalue $q(A \star B)$ for the Fan product of $A$ and $B$ are given. In addition, an upper bound on the spectral radius $\rho(A \circ B)$ of nonnegative matrices $A$ and $B$ is also obtained. These bounds improve several existing results in some cases, and the estimating formulas are easier to calculate since they depend only on the entries of the matrices $A$ and $B$.
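A quick numerical illustration of the kind of bound being sharpened here, checking the classical inequality $\rho(A \circ B) \le \rho(A)\,\rho(B)$ for entrywise nonnegative matrices (a known textbook bound, not the improved bound derived in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_radius(M):
    """Largest absolute value of an eigenvalue of M."""
    return max(abs(np.linalg.eigvals(M)))

# Classical bound: for entrywise nonnegative A and B,
# rho(A o B) <= rho(A) * rho(B), where "o" is the Hadamard (entrywise) product.
for _ in range(5):
    A = rng.random((4, 4))
    B = rng.random((4, 4))
    lhs = spectral_radius(A * B)                  # Hadamard product in NumPy is "*"
    rhs = spectral_radius(A) * spectral_radius(B)
    print(f"rho(A o B) = {lhs:.4f}  <=  rho(A)*rho(B) = {rhs:.4f}: {lhs <= rhs}")
```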
Abstract:
This paper elaborates on the ergodic capacity of fixed-gain amplify-and-forward (AF) dual-hop systems, which have recently attracted considerable research and industry interest. In particular, two novel capacity bounds that allow for fast and efficient computation and apply for nonidentically distributed hops are derived. More importantly, they are generic since they apply to a wide range of popular fading channel models. Specifically, the proposed upper bound applies to Nakagami-m, Weibull, and generalized-K fading channels, whereas the proposed lower bound is more general and applies to Rician fading channels. Moreover, it is explicitly demonstrated that the proposed lower and upper bounds become asymptotically exact in the high signal-to-noise ratio (SNR) regime. Based on our analytical expressions and numerical results, we gain valuable insights into the impact of model parameters on the capacity of fixed-gain AF dual-hop relaying systems.
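For context, the quantity being bounded is typically the expectation of the log of one plus the end-to-end SNR of the fixed-gain amplify-and-forward link; the notation below is the standard textbook form and is assumed rather than taken from the paper:

```latex
% End-to-end SNR of a fixed-gain amplify-and-forward dual-hop link,
% with per-hop SNRs gamma_1, gamma_2 and a constant C set by the fixed relay gain:
\[
  \gamma_{\mathrm{end}} \;=\; \frac{\gamma_1 \gamma_2}{\gamma_2 + C},
  \qquad
  \bar{C}_{\mathrm{erg}} \;=\; \mathbb{E}\!\left[ \log_2\!\left( 1 + \gamma_{\mathrm{end}} \right) \right].
\]
```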
Abstract:
CO, O3, and H2O data in the upper troposphere/lower stratosphere (UTLS) measured by the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) on Canada’s SCISAT-1 satellite are validated using aircraft and ozonesonde measurements. In the UTLS, validation of chemical trace gas measurements is a challenging task due to small-scale variability in the tracer fields, strong gradients of the tracers across the tropopause, and scarcity of measurements suitable for validation purposes. Validation based on coincidences therefore suffers from geophysical noise. Two alternative methods for the validation of satellite data are introduced, which avoid the usual need for coincident measurements: tracer-tracer correlations, and vertical tracer profiles relative to tropopause height. Both are increasingly being used for model validation as they strongly suppress geophysical variability and thereby provide an “instantaneous climatology”. This allows comparison of measurements between non-coincident data sets, which yields information about the precision and a statistically meaningful error assessment of the ACE-FTS satellite data in the UTLS. By defining a trade-off factor, we show that the measurement errors can be reduced by including more measurements obtained over a wider longitude range into the comparison, despite the increased geophysical variability. Applying the methods then yields the following upper bounds to the relative differences in the mean found between the ACE-FTS and SPURT aircraft measurements in the upper troposphere (UT) and lower stratosphere (LS), respectively: for CO ±9% and ±12%, for H2O ±30% and ±18%, and for O3 ±25% and ±19%. The relative differences for O3 can be narrowed down by using a larger dataset obtained from ozonesondes, yielding a high bias in the ACE-FTS measurements of 18% in the UT and relative differences of ±8% for measurements in the LS. When taking into account the smearing effect of the vertically limited spacing between measurements of the ACE-FTS instrument, the relative differences decrease by 5–15% around the tropopause, suggesting a vertical resolution of the ACE-FTS in the UTLS of around 1 km. The ACE-FTS hence offers unprecedented precision and vertical resolution for a satellite instrument, which will allow a new global perspective on UTLS tracer distributions.