904 results for Lipschitzian bounds
Abstract:
We consider bounds for the capacity region of the Gaussian X channel (XC), a system consisting of two transmit-receive pairs, where each transmitter communicates with both receivers. We first classify the XC into two classes, the strong XC and the mixed XC. In the strong XC, either both direct channels are stronger than the corresponding cross channels or vice versa, whereas in the mixed XC, one of the direct channels is stronger than the corresponding cross channel and the other is weaker. After this classification, we give outer bounds on the capacity region for each of the two classes. This is based on the idea that when one of the messages is eliminated from the XC, the rate region of the remaining three messages is enlarged. We make use of the Z channel, a system obtained by eliminating one message and its corresponding channel from the X channel, to bound the rate region of the remaining messages. The outer bound on the rate region of the remaining messages defines a subspace in R_+^4 and forms an outer bound on the capacity region of the XC. Thus, the outer bound on the capacity region of the XC is obtained as the intersection of the outer bounds on the four combinations of rate triplets of the XC. Using these outer bounds on the capacity region of the XC, we derive new sum-rate outer bounds for both strong and mixed Gaussian XCs and compare them with those existing in the literature. We show that the sum-rate outer bound for the strong XC gives the sum-rate capacity in three out of the four sub-regions of the strong Gaussian XC capacity region. In the case of the mixed Gaussian XC, we recover the recent results in [11], which showed that the sum-rate capacity is achieved in two out of the three sub-regions of the mixed XC capacity region, and give a simple alternate proof of the same.
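As background, a scalar Gaussian XC of the kind classified above can be sketched as follows; the channel gains h_kl and the unit noise normalization are our illustrative assumptions, not values from the abstract:

```latex
% A minimal sketch of the scalar Gaussian X channel (notation assumed).
% Transmitter l sends x_l, which encodes one message for each receiver.
\begin{align*}
  y_1 &= h_{11} x_1 + h_{12} x_2 + z_1, \\
  y_2 &= h_{21} x_1 + h_{22} x_2 + z_2, \qquad z_k \sim \mathcal{N}(0,1).
\end{align*}
% Direct channels: h_{11}, h_{22}; cross channels: h_{12}, h_{21}.
% "Strong XC": both direct gains dominate the corresponding cross gains
% (or both are dominated); "mixed XC": exactly one direct gain dominates.
```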
Abstract:
We consider the MIMO X channel (XC), a system consisting of two transmit-receive pairs, where each transmitter communicates with both receivers. Both the transmitters and the receivers are equipped with multiple antennas. First, we derive an upper bound on the sum-rate capacity of the MIMO XC under an individual power constraint at each transmitter. The sum-rate capacity of the two-user multiple access channel (MAC) that results when receiver cooperation is assumed forms an upper bound on the sum-rate capacity of the MIMO XC. We tighten this bound by considering noise correlation between the receivers and deriving the worst-case noise covariance matrix. It is shown that the worst-case noise covariance matrix is a saddle point of a zero-sum, two-player convex-concave game, which is solved through a primal-dual interior point method that solves the maximization and the minimization parts of the problem simultaneously. Next, we propose an achievable scheme that employs dirty paper coding at the transmitters and successive decoding at the receivers. We show that the derived upper bound is close to the achievable region of the proposed scheme at low to medium SNRs.
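A schematic form of the cooperative-receiver bound described above; all symbols (stacked channel H, input and noise covariances S_x and S_z) are introduced here for illustration only:

```latex
% A sketch of the cooperative-receiver upper bound with worst-case noise
% (notation assumed: stacked channel H, block-diagonal input covariance
% S_x = diag(S_{x_1}, S_{x_2}) with tr(S_{x_i}) <= P_i, and noise covariance
% S_z whose diagonal blocks are fixed, leaving only the cross-correlation free).
\[
  C_{\mathrm{sum}} \;\le\; \min_{S_z} \; \max_{S_x}
  \; \tfrac{1}{2} \log \frac{\det\!\big( H S_x H^{\top} + S_z \big)}{\det S_z}
\]
% The objective is concave in S_x and convex in S_z, so the worst-case noise
% covariance is a saddle point of this zero-sum game, as stated in the abstract.
```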
Abstract:
Estimating program worst case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that has no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling and also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built on these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy of WCET improves to 159% when CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
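A minimal sketch of the Chebyshev step described above, assuming CPI samples have been collected per phase; the function name and sample data are illustrative, and sample statistics stand in for the true mean and standard deviation:

```python
import math
import statistics

def chebyshev_cpi_bound(cpi_samples, p):
    """Upper-bound the CPI of a phase with probability at least p, using
    Chebyshev's inequality: P(|X - mu| >= k*sigma) <= 1/k^2.
    Choosing k = 1/sqrt(1 - p) gives P(X <= mu + k*sigma) >= p for ANY
    distribution of CPI samples (no normality assumption needed)."""
    mu = statistics.mean(cpi_samples)      # sample mean as estimate of mu
    sigma = statistics.stdev(cpi_samples)  # sample stddev as estimate of sigma
    k = 1.0 / math.sqrt(1.0 - p)
    return mu + k * sigma

# Illustrative profile of one phase (hypothetical CPI samples).
samples = [1.10, 1.15, 1.08, 1.40, 1.12, 1.35, 1.09, 1.22]
for p in (0.90, 0.95, 0.99):
    print(f"p={p}: CPI <= {chebyshev_cpi_bound(samples, p):.3f}")
# A phase with high CPI variance (large sigma) yields a pessimistic bound,
# which motivates refining it into lower-variance sub-phases by PC signature.
```

The per-phase WCET contribution would then scale such a CPI bound by the phase's instruction count, in line with the phase-based model the abstract builds on.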
Abstract:
An n-length block code C is said to be r-query locally correctable if, for any codeword x ∈ C, one can probabilistically recover any one of the n coordinates of the codeword x by querying at most r coordinates of a possibly corrupted version of x. It is known that linear codes whose duals contain 2-designs are locally correctable. In this article, we consider linear codes whose duals contain t-designs for larger t. It is shown here that for such codes, for a given number of queries r, under linear decoding, one can, in general, handle a larger number of corrupted bits. We exhibit, to our knowledge for the first time, a finite-length code whose dual contains 4-designs, which can tolerate a fraction of up to 0.567/r corrupted symbols, as against a maximum of 0.5/r in prior constructions. We also present an upper bound showing that 0.567 is the best possible for this code length and query complexity over this symbol alphabet, thereby establishing the optimality of this code in this respect. A second result in the article is a finite-length bound which relates the number of queries r and the fraction of errors that can be tolerated, for a locally correctable code that employs a randomized algorithm in which each instance of the algorithm involves t-error correction.
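For orientation, the mechanism by which dual codewords enable local correction is the standard parity-check argument sketched below; the notation (dual codeword c, query set Q) is ours, not the abstract's:

```latex
% Local correction from the dual code (standard observation, notation assumed).
% If c is a dual codeword with c_i \ne 0 and support \{i\} \cup Q, |Q| = r,
% then every codeword x satisfies the parity check \sum_j c_j x_j = 0, so
\[
  x_i \;=\; -\,c_i^{-1} \sum_{j \in Q} c_j \, x_j ,
\]
% i.e., x_i is recovered from the r queried positions in Q. A t-design in the
% dual supplies many such query sets Q covering any fixed coordinate i evenly,
% so a randomly chosen check is likely to avoid the corrupted positions.
```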
Abstract:
We use the recently measured accurate BaBar data on the modulus of the pion electromagnetic form factor, Fπ(t), up to an energy of 3 GeV, the I = 1 P-wave phase of the ππ scattering amplitude up to the ω−π threshold, the pion charge radius known from Chiral Perturbation Theory, and the recently measured JLab value of Fπ in the spacelike region at t = −2.45 GeV² as inputs in a formalism that leads to bounds on Fπ in the intermediate spacelike region. We compare our constraints with experimental data and with perturbative QCD, along with the results of several theoretical models for the non-perturbative contributions proposed in the literature.
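For orientation, the charge-radius input enters through the standard low-energy expansion of the form factor; this is a textbook relation, not a formula from the abstract:

```latex
% Normalization and charge-radius expansion of the pion form factor near t = 0.
\[
  F_\pi(t) \;=\; 1 \;+\; \frac{1}{6}\,\langle r_\pi^2 \rangle\, t
  \;+\; \mathcal{O}(t^2), \qquad F_\pi(0) = 1 ,
\]
% so the ChPT value of the charge radius fixes the slope at the origin, which
% the bounds then propagate into the intermediate spacelike region t < 0.
```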
Abstract:
In this paper, we revisit the combinatorial error model of Mazumdar et al. that models errors in high-density magnetic recording caused by lack of knowledge of grain boundaries in the recording medium. We present new upper bounds on the cardinality/rate of binary block codes that correct errors within this model. All our bounds, except for one, are obtained using combinatorial arguments based on hypergraph fractional coverings. The exception is a bound derived via an information-theoretic argument. Our bounds significantly improve upon existing bounds from the prior literature.
Abstract:
The problem of bipartite ranking, where instances are labeled positive or negative and the goal is to learn a scoring function that minimizes the probability of mis-ranking a pair of positive and negative instances (or equivalently, that maximizes the area under the ROC curve), has been widely studied in recent years. A dominant theoretical and algorithmic framework for the problem has been to reduce bipartite ranking to pairwise classification; in particular, it is well known that the bipartite ranking regret can be formulated as a pairwise classification regret, which in turn can be upper bounded using usual regret bounds for classification problems. Recently, Kotlowski et al. (2011) showed regret bounds for bipartite ranking in terms of the regret associated with balanced versions of the standard (non-pairwise) logistic and exponential losses. In this paper, we show that such (non-pairwise) surrogate regret bounds for bipartite ranking can be obtained in terms of a broad class of proper (composite) losses that we term strongly proper. Our proof technique is much simpler than that of Kotlowski et al. (2011), and relies on properties of proper (composite) losses as elucidated recently by Reid and Williamson (2010, 2011) and others. Our result yields explicit surrogate bounds (with no hidden balancing terms) in terms of a variety of strongly proper losses, including for example logistic, exponential, squared and squared hinge losses as special cases. An important consequence is that standard algorithms minimizing a (non-pairwise) strongly proper loss, such as logistic regression and boosting algorithms (assuming a universal function class and appropriate regularization), are in fact consistent for bipartite ranking; moreover, our results allow us to quantify the bipartite ranking regret in terms of the corresponding surrogate regret. We also obtain tighter surrogate bounds under certain low-noise conditions via a recent result of Clemencon and Robbiano (2011).
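As a concrete illustration of the consequence mentioned above, here is a minimal numpy sketch under toy assumptions (the data, step size, and iteration count are made up): minimize the plain, non-pairwise logistic loss, then rank by the fitted scores and measure the pairwise mis-ranking probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bipartite-ranking data (hypothetical): negatives and positives drawn
# from shifted Gaussians in R^2; labels y in {0, 1}.
n = 500
X = np.vstack([rng.normal(0.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Minimize the plain (non-pairwise) logistic loss by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid scores
    g = p - y                                 # gradient of the logistic loss
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

# Rank by the learned real-valued score s(x) and estimate the bipartite
# ranking risk: the probability of mis-ranking a (positive, negative) pair,
# which equals 1 - AUC (ties counted as half an error).
s = X @ w + b
pos, neg = s[y == 1], s[y == 0]
misrank = (pos[:, None] < neg[None, :]).mean() \
        + 0.5 * (pos[:, None] == neg[None, :]).mean()
print(f"pairwise mis-ranking probability (1 - AUC): {misrank:.3f}")
```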
Abstract:
An axis-parallel b-dimensional box is a Cartesian product R_1 × R_2 × ... × R_b, where each R_i is a closed interval of the form [a_i, b_i] on the real line. For a graph G, its boxicity box(G) is the minimum dimension b such that G is representable as the intersection graph of boxes in b-dimensional space. Although boxicity was introduced in 1969 and has been studied extensively, there are no significant results on lower bounds for boxicity. In this paper, we develop two general methods for deriving lower bounds. Applying these methods we give several results, some of which are listed below:
1. The boxicity of a graph on n vertices with no universal vertices and minimum degree δ is at least n/(2(n − δ − 1)).
2. Consider the G(n, p) model of random graphs. Let p ≤ 1 − 40 log n / n². Then, with high probability, box(G) = Ω(np(1 − p)). On setting p = 1/2 we immediately infer that almost all graphs have boxicity Ω(n). Another consequence of this result is as follows: for any positive constant c < 1, almost all graphs on n vertices with m ≤ c(n choose 2) edges have boxicity Ω(m/n).
3. Let G be a connected k-regular graph on n vertices, and let λ be the second largest eigenvalue in absolute value of the adjacency matrix of G. Then the boxicity of G is at least ((k²/λ²)/log(1 + k²/λ²)) · ((n − k − 1)/(2n)).
4. For any positive constant c < 1, almost all balanced bipartite graphs on 2n vertices with m ≤ cn² edges have boxicity Ω(m/n).
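A quick sanity check of bound 1 on a classical extremal example; the example is ours, not from the abstract:

```latex
% Example (assumed here for illustration): the complete multipartite graph
% K_{2,2,\dots,2} on n vertices (n/2 parts of size 2) has minimum degree
% \delta = n - 2 and no universal vertex, so bound 1 gives
\[
  \mathrm{box}(G) \;\ge\; \frac{n}{2\,(n - \delta - 1)}
  \;=\; \frac{n}{2\,(n - (n-2) - 1)} \;=\; \frac{n}{2},
\]
% matching Roberts' classical result that this graph has boxicity exactly n/2,
% so the bound is tight on this family.
```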
Abstract:
Given a Boolean function f : F_2^n → {0, 1}, we say a triple (x, y, x + y) is a triangle in f if f(x) = f(y) = f(x + y) = 1. A triangle-free function contains no triangle. If f differs from every triangle-free function on at least ε2^n points, then f is said to be ε-far from triangle-free. In this work, we analyze the query complexity of testers that, with constant probability, distinguish triangle-free functions from those ε-far from triangle-free. Let the canonical tester for triangle-freeness denote the algorithm that repeatedly picks x and y uniformly and independently at random from F_2^n, queries f(x), f(y) and f(x + y), and checks whether f(x) = f(y) = f(x + y) = 1. Green showed that the canonical tester rejects functions ε-far from triangle-free with constant probability if its query complexity is a tower of 2's whose height is polynomial in 1/ε. Fox later improved the height of the tower in Green's upper bound. A trivial lower bound of Ω(1/ε) on the query complexity is immediate. In this paper, we give the first non-trivial lower bound for the number of queries needed. We show that, for every small enough ε, there exists an integer n_ε such that for all n ≥ n_ε there exists a function f depending on all n variables which is ε-far from being triangle-free and requires a number of canonical-tester queries super-linear in 1/ε. We also show that the query complexity of any general (possibly adaptive) one-sided tester for triangle-freeness is at least the square root of the query complexity of the corresponding canonical tester. Consequently, the lower bound for the canonical tester carries over, up to a square root, to any one-sided tester for triangle-freeness.
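The canonical tester described above is simple enough to state in code. This sketch is ours: elements of F_2^n are encoded as n-bit integers (so x + y over F_2 is bitwise XOR), and the toy f is an arbitrary illustration:

```python
import random

def canonical_tester(f, n, queries):
    """Canonical tester for triangle-freeness of f: F_2^n -> {0, 1}.
    Repeatedly samples x, y uniformly at random and rejects on finding a
    triangle f(x) = f(y) = f(x ^ y) = 1."""
    for _ in range(queries // 3):      # each round spends 3 queries
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) and f(y) and f(x ^ y):
            return "reject"            # found a triangle in f
    return "accept"                    # one-sided: never rejects a triangle-free f

# Toy f (hypothetical): 1 on every nonzero input. It is rich in triangles
# (any x != y with x, y, x ^ y all nonzero), so the tester rejects quickly.
f = lambda x: 1 if x != 0 else 0
print(canonical_tester(f, n=8, queries=300))
```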
Abstract:
We consider Ricci flow invariant cones C in the space of curvature operators lying between the cones "nonnegative Ricci curvature" and "nonnegative curvature operator". Assuming some mild control on the scalar curvature of the Ricci flow, we show that if a solution to the Ricci flow has a curvature operator satisfying R + εI ∈ C at the initial time, then it satisfies R + εI ∈ C on some time interval depending only on the scalar curvature control. This allows us to link Gromov-Hausdorff convergence and Ricci flow convergence when the limit is smooth and R + I ∈ C along the sequence of initial conditions. Another application is a stability result for manifolds whose curvature operator is almost in C. Finally, we study the case where C is contained in the cone of operators whose sectional curvature is nonnegative. This allows us to weaken the assumptions of the previously mentioned applications. In particular, we construct a Ricci flow for a class of (not too) singular Alexandrov spaces.
Abstract:
This paper derives outer bounds for the 2-user symmetric linear deterministic interference channel (SLDIC) with limited-rate transmitter cooperation and perfect secrecy constraints at the receivers. Five outer bounds are derived, under different assumptions of providing side information to receivers and partitioning the encoded message/output depending on the relative strength of the signal and the interference. The usefulness of these outer bounds is shown by comparing the bounds with the inner bound on the achievable secrecy rate derived by the authors in a previous work. Also, the outer bounds help to establish that sharing random bits through the cooperative link can achieve the optimal rate in the very high interference regime.
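As background, the 2-user SLDIC is usually written in the Avestimehr-Diggavi-Tse deterministic form; the rendering below, including the parameters m (direct links) and n (cross links), is our assumption for illustration and is not taken from the abstract:

```latex
% Sketch of the 2-user symmetric linear deterministic IC (standard ADT model,
% notation assumed). S is the q x q down-shift matrix, q = max(m, n),
% x_i \in F_2^q, and all arithmetic is over F_2.
\[
  y_1 = S^{\,q-m} x_1 \oplus S^{\,q-n} x_2, \qquad
  y_2 = S^{\,q-m} x_2 \oplus S^{\,q-n} x_1 ,
\]
% m models the direct-link strength and n the cross-link (interference)
% strength; the interference regimes in the abstract correspond to the
% ratio n/m, with the very high interference regime at large n/m.
```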
Abstract:
This paper derives outer bounds on the sum rate of the K-user MIMO Gaussian interference channel (GIC). Three outer bounds are derived, under different assumptions of cooperation and providing side information to receivers. The novelty in the derivation lies in the careful selection of side information, which results in the cancellation of the negative differential entropy terms containing signal components, leading to a tractable outer bound. The overall outer bound is obtained by taking the minimum of the three outer bounds. The derived bounds are simplified for the MIMO Gaussian symmetric IC to obtain outer bounds on the generalized degrees of freedom (GDOF). The relative performance of the bounds yields insight into the performance limits of multiuser MIMO GICs and the relative merits of different schemes for interference management. These insights are confirmed by establishing the optimality of the bounds in specific cases using an inner bound on the GDOF derived by the authors in a previous work. It is also shown that many of the existing results on the GDOF of the GIC can be obtained as special cases of the bounds, e.g., by setting K = 2 or the number of antennas at each user to 1.
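For reference, the GDOF metric mentioned above is conventionally defined as follows; this is the standard definition, not a formula reproduced from the abstract:

```latex
% Generalized degrees of freedom (standard definition, notation assumed).
% With INR = SNR^\alpha capturing the interference-to-signal strength ratio,
\[
  d(\alpha) \;=\; \lim_{\mathrm{SNR} \to \infty}
  \frac{C_{\mathrm{sum}}\!\left(\mathrm{SNR},\, \mathrm{SNR}^{\alpha}\right)}
       {\log \mathrm{SNR}} ,
\]
% so an outer bound on the sum rate C_sum directly yields an outer bound
% on the GDOF curve d(\alpha).
```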
Abstract:
The set of all subspaces of F_q^n is denoted by P_q(n). The subspace distance d_S(X, Y) = dim(X) + dim(Y) − 2 dim(X ∩ Y) defined on P_q(n) turns it into a natural coding space for error correction in random network coding. A subset of P_q(n) is called a code, and the subspaces that belong to the code are called codewords. Motivated by classical coding theory, a linear coding structure can be imposed on a subset of P_q(n). Braun et al. conjectured that the largest cardinality of a linear code that contains F_q^n is 2^n. In this paper, we prove this conjecture and characterize the maximal linear codes that contain F_q^n.
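A tiny worked instance of the subspace distance, with an example of our own choosing:

```latex
% Worked example of the subspace distance in F_2^3 (illustrative).
\[
  X = \langle e_1 \rangle, \quad Y = \langle e_1, e_2 \rangle:
  \qquad d_S(X, Y) = \dim X + \dim Y - 2\dim(X \cap Y)
  = 1 + 2 - 2 \cdot 1 = 1 .
\]
% d_S is a metric on P_q(n); codewords at large pairwise distance tolerate
% more erasures and injections of dimensions in random network coding.
```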
Abstract:
The optimal power-delay tradeoff is studied for a time-slotted point-to-point link with independent and identically distributed fading, with perfect channel state information at both the transmitter and the receiver, and with random packet arrivals to the transmitter queue. It is assumed that the transmitter can control the number of packets served in a slot by controlling the transmit power. The optimal tradeoff between average power and average delay is analyzed for stationary and monotone transmitter policies. For such policies, an asymptotic lower bound on the minimum average delay of the packets is obtained as the average transmitter power approaches the minimum average power required for transmitter queue stability. The asymptotic lower bound on the minimum average delay is obtained from geometric upper bounds on the stationary distribution of the queue length. This approach, which uses geometric upper bounds, also leads to an intuitive explanation of the asymptotic behavior of average delay. The asymptotic lower bounds, along with previously known asymptotic upper bounds, are used to identify three new cases where the order of the asymptotic behavior differs from that obtained from a previously considered approximate model, in which the transmit power is a strictly convex function of the real-valued service batch size for every fade state.
Abstract:
We consider the problem of representing a univariate polynomial f(x) as a sum of powers of low-degree polynomials. We prove a lower bound of Ω(√(d/t)) on the number of summands needed to write an explicit univariate degree-d polynomial f(x) as a sum of powers of degree-t polynomials.
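Spelled out, the model reads as follows; this rendering, including the symbol s for the number of summands, is ours:

```latex
% The sum-of-powers-of-low-degree-polynomials model (notation assumed).
\[
  f(x) \;=\; \sum_{i=1}^{s} Q_i(x)^{e_i}, \qquad \deg Q_i \le t ,
\]
% and the result states that some explicit degree-d polynomial f forces
% s = \Omega\!\left(\sqrt{d/t}\right) in any such representation.
```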