245 results for Lagrangian bounds
Abstract:
Given two independent Poisson point processes Phi^(1), Phi^(2) in R^d, the AB Poisson Boolean model is the graph with the points of Phi^(1) as vertices and with edges between any pair of points for which the intersection of balls of radius 2r centered at these points contains at least one point of Phi^(2). This is a generalization of the AB percolation model on discrete lattices. We show the existence of percolation for all d >= 2 and derive bounds for a critical intensity. We also provide a characterization for this critical intensity when d = 2. To study the connectivity problem, we consider independent Poisson point processes of intensities n and tau n in the unit cube. The AB random geometric graph is defined as above but with balls of radius r. We derive a weak law result for the largest nearest-neighbor distance and almost-sure asymptotic bounds for the connectivity threshold.
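As a rough illustration of the AB random geometric graph defined in this abstract (balls of radius r in the unit square, intensities n and tau*n), the sketch below simulates one realization; the parameter values are assumptions chosen for illustration only.

```python
# Minimal simulation sketch: AB random geometric graph on the unit square.
# Two Phi^(1) points are joined whenever some Phi^(2) point lies within
# distance r of both, i.e. in the intersection of the two radius-r balls.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def ab_rgg(n, tau, r):
    N1, N2 = rng.poisson(n), rng.poisson(tau * n)
    X = rng.random((N1, 2))        # Phi^(1): the vertices
    Y = rng.random((N2, 2))        # Phi^(2): the "witness" points
    tree = cKDTree(Y)
    near = [set(tree.query_ball_point(x, r)) for x in X]   # witnesses within r of each vertex
    edges = [(i, j) for i in range(N1) for j in range(i + 1, N1)
             if near[i] & near[j]]                          # common witness => edge
    return X, Y, edges

X, Y, E = ab_rgg(n=200, tau=1.0, r=0.08)
print(len(X), "vertices,", len(E), "edges")
```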
Abstract:
We present two online algorithms for maintaining a topological order of a directed n-vertex acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm handles m arc additions in O(m^(3/2)) time. For sparse graphs (m/n = O(1)), this bound improves the best previous bound by a logarithmic factor, and is tight to within a constant factor among algorithms satisfying a natural locality property. Our second algorithm handles an arbitrary sequence of arc additions in O(n^(5/2)) time. For sufficiently dense graphs, this bound improves the best previous bound by a polynomial factor. Our bound may be far from tight: we show that the algorithm can take Omega(n^2 * 2^(sqrt(2 lg n))) time by relating its performance to a generalization of the k-levels problem of combinatorial geometry. A completely different algorithm running in Theta(n^2 log n) time was given recently by Bender, Fineman, and Gilbert. We extend both of our algorithms to the maintenance of strong components, without affecting the asymptotic time bounds.
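For readers unfamiliar with the incremental setting, the sketch below shows a simple Pearce-Kelly-style maintainer of a topological order under arc insertions, with cycle detection. It only illustrates the problem being solved; it is not either of the algorithms analyzed in the abstract.

```python
# Illustrative sketch only (not either algorithm from the abstract): a simple
# Pearce-Kelly-style incremental topological order.  When a new arc u -> v
# violates the current order, only the "affected region" between pos[v] and
# pos[u] is searched; a cycle is reported if u is reachable from v inside it,
# otherwise that region is locally reordered.
class IncrementalTopo:
    def __init__(self, n):
        self.adj = [[] for _ in range(n)]     # out-neighbours
        self.radj = [[] for _ in range(n)]    # in-neighbours
        self.pos = list(range(n))             # pos[v] = position of v in the order

    def add_arc(self, u, v):
        """Insert arc u -> v; return False (and change nothing) if it creates a cycle."""
        if u == v:
            return False
        if self.pos[u] > self.pos[v]:         # order violated: try to repair it
            lo, hi = self.pos[v], self.pos[u]
            fwd, bwd = [], []
            if not self._forward(v, u, lo, hi, fwd, set()):
                return False                  # u reachable from v => cycle
            self._backward(u, lo, hi, bwd, set())
            self._reorder(fwd, bwd)
        self.adj[u].append(v)
        self.radj[v].append(u)
        return True

    def _forward(self, x, target, lo, hi, out, seen):
        seen.add(x)
        out.append(x)
        for y in self.adj[x]:
            if y == target:
                return False
            if lo <= self.pos[y] <= hi and y not in seen:
                if not self._forward(y, target, lo, hi, out, seen):
                    return False
        return True

    def _backward(self, x, lo, hi, out, seen):
        seen.add(x)
        out.append(x)
        for y in self.radj[x]:
            if lo <= self.pos[y] <= hi and y not in seen:
                self._backward(y, lo, hi, out, seen)

    def _reorder(self, fwd, bwd):
        # Vertices that reach u go first, then vertices reachable from v,
        # reassigned to the sorted pool of their old positions.
        moved = sorted(bwd, key=lambda w: self.pos[w]) + \
                sorted(fwd, key=lambda w: self.pos[w])
        slots = sorted(self.pos[w] for w in moved)
        for w, p in zip(moved, slots):
            self.pos[w] = p

dag = IncrementalTopo(4)
print(dag.add_arc(2, 0), dag.add_arc(0, 1), dag.add_arc(1, 2))  # True True False (cycle)
```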
Abstract:
The factorization theorem for exclusive processes in perturbative QCD predicts the behavior of the pion electromagnetic form factor F(t) at asymptotic spacelike momenta t (= -Q^2) < 0. We address the question of the onset energy using a suitable mathematical framework of analytic continuation, which uses as input the phase of the form factor below the first inelastic threshold, known with great precision through the Fermi-Watson theorem from ππ elastic scattering, and the modulus measured from threshold up to 3 GeV by the BABAR Collaboration. The method leads to almost model-independent upper and lower bounds on the spacelike form factor. Further inclusion of the value of the charge radius and the experimental value at -2.45 GeV^2 measured at JLab considerably increases the strength of the bounds in the region Q^2 ≲ 10 GeV^2, excluding the onset of the asymptotic perturbative QCD regime for Q^2 < 7 GeV^2. We also compare the bounds with available experimental data and with several theoretical models proposed for the low and intermediate spacelike region.
Abstract:
Analyticity and unitarity techniques are employed to obtain bounds on the shape parameters of the scalar and vector form factors of semileptonic K_l3 decays. For this purpose we use vector and scalar correlators evaluated in pQCD, a low-energy theorem for the scalar form factor, lattice results for the ratio of kaon and pion decay constants, chiral perturbation theory calculations for the scalar form factor at the Callan-Treiman point, and experimental information on the phase and modulus of Kπ form factors up to an energy t_in = 1 GeV^2. We further derive regions on the real axis and in the complex-energy plane where the form factors cannot have zeros.
Abstract:
The rainbow connection number of a connected graph is the minimum number of colors needed to color its edges so that every pair of its vertices is connected by at least one path in which no two edges are colored the same. In this article we show that for every connected graph on n vertices with minimum degree delta, the rainbow connection number is upper bounded by 3n/(delta + 1) + 3. This solves an open problem from Schiermeyer (Combinatorial Algorithms, Springer, Berlin/Heidelberg, 2009, pp. 432-437), improving the previously best known bound of 20n/delta (J Graph Theory 63 (2010), 185-191). This bound is tight up to additive factors by a construction mentioned in Caro et al. (Electron J Combin 15 (2008), #R57). As an intermediate step we obtain an upper bound of 3n/(delta + 1) - 2 on the size of a connected two-step dominating set in a connected graph of order n and minimum degree delta. This bound is tight up to an additive constant of 2. This result may be of independent interest. We also show that for every connected graph G with minimum degree at least 2, the rainbow connection number rc(G) is upper bounded by gamma_c(G) + 2, where gamma_c(G) is the connected domination number of G. Bounds of the form diameter(G) <= rc(G) <= diameter(G) + c, 1 <= c <= 4, for many special graph classes follow as easy corollaries from this result. This includes interval graphs, asteroidal triple-free graphs, circular-arc graphs, threshold graphs, and chain graphs, all connected and with minimum degree at least 2. We also show that every bridgeless chordal graph G has rc(G) <= 3 * radius(G). In most of these cases, we also demonstrate the tightness of the bounds.
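To make the definition concrete, the brute-force check below verifies, for small graphs only, that a given edge coloring makes the graph rainbow connected; the example graph and coloring are illustrative, not taken from the article.

```python
# Brute-force illustration of the definition above (small graphs only): a graph
# with the given edge colouring is rainbow connected iff every vertex pair is
# joined by some path whose edges all carry distinct colours.
from itertools import combinations

def rainbow_connected(n, colour):
    """colour: dict {(u, v): c} with u < v, one entry per edge."""
    adj = {v: set() for v in range(n)}
    for u, v in colour:
        adj[u].add(v)
        adj[v].add(u)

    def rainbow_path(s, t):
        # DFS over simple paths, pruning whenever a colour would repeat.
        stack = [(s, {s}, frozenset())]
        while stack:
            x, visited, used = stack.pop()
            if x == t:
                return True
            for y in adj[x]:
                if y in visited:
                    continue
                c = colour[(min(x, y), max(x, y))]
                if c not in used:
                    stack.append((y, visited | {y}, used | {c}))
        return False

    return all(rainbow_path(s, t) for s, t in combinations(range(n), 2))

# The 4-cycle 0-1-2-3-0 with two colours is rainbow connected (rc(C4) = 2).
print(rainbow_connected(4, {(0, 1): 0, (1, 2): 1, (2, 3): 0, (0, 3): 1}))
```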
Abstract:
Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes; and that some nodes are designated as anchors with known locations. First, we obtain high probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms present in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
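A small simulation along the lines of the first analysis above can be sketched as follows: build a dense random geometric graph on the unit square, compute hop counts from an anchor by BFS, and inspect the range of Euclidean distances seen at each hop count. The node count, radius, and anchor choice are assumptions for illustration.

```python
# Simulation sketch: hop counts from an anchor in a dense random geometric
# graph versus Euclidean distance from that anchor.
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

rng = np.random.default_rng(1)
n, r = 2000, 0.05
pts = rng.random((n, 2))                      # uniform deployment in the unit square
nbrs = [[] for _ in range(n)]
for u, v in cKDTree(pts).query_pairs(r):      # edges of the random geometric graph
    nbrs[u].append(v)
    nbrs[v].append(u)

hops = np.full(n, -1)                         # BFS hop count from anchor node 0
hops[0] = 0
queue = deque([0])
while queue:
    x = queue.popleft()
    for y in nbrs[x]:
        if hops[y] < 0:
            hops[y] = hops[x] + 1
            queue.append(y)

dist = np.linalg.norm(pts - pts[0], axis=1)
for h in range(1, 6):
    d = dist[hops == h]
    if d.size:
        print(f"h = {h}: Euclidean distance in [{d.min():.3f}, {d.max():.3f}]")
```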
Abstract:
Unlike in zero-sum stochastic games, obtaining verifiable conditions for Nash equilibria in general-sum stochastic games is a difficult problem. We show in this paper that by splitting an associated non-linear optimization problem into several sub-problems, a characterization of Nash equilibria in general-sum discounted stochastic games is possible. Using the aforementioned sub-problems, we in fact derive a set of necessary and sufficient verifiable conditions (termed KKT-SP conditions) for a strategy-pair to result in a Nash equilibrium. Also, we show that any algorithm which tracks the zero of the gradient of the Lagrangian of every sub-problem provides a Nash strategy-pair.
Abstract:
The Kπ form factors are investigated at low energies by the method of unitarity bounds adapted so as to include information on the phase and modulus along the elastic region of the unitarity cut. Using as input the values of the form factors at t = 0, and at the Callan-Treiman point in the scalar case, stringent constraints are obtained on the slope and curvature parameters of the Taylor expansion at the origin.
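For context, the slope and curvature parameters referred to above are conventionally defined through the Taylor expansion of the normalized form factors at t = 0; a standard convention is shown below, and the paper's exact normalization is assumed rather than quoted.

```latex
% Common K_{l3} parameterization (assumed convention): slope lambda' and
% curvature lambda'' of the normalized vector (+) and scalar (0) form factors.
\bar f_{+,0}(t) \;=\; \frac{f_{+,0}(t)}{f_{+,0}(0)}
\;=\; 1 \;+\; \lambda'_{+,0}\,\frac{t}{M_{\pi}^{2}}
\;+\; \frac{\lambda''_{+,0}}{2}\left(\frac{t}{M_{\pi}^{2}}\right)^{2} \;+\; \cdots
```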
Abstract:
Two models for amplify-and-forward (AF) relaying, namely fixed-gain and fixed-power relaying, have been extensively studied in the literature given their ability to harness spatial diversity. In fixed-gain relaying, the relay gain is fixed but its transmit power varies as a function of the source-relay channel gain. In fixed-power relaying, the relay transmit power is fixed, but its gain varies. We revisit and generalize the fundamental two-hop AF relaying model. We present an optimal scheme in which an average-power-constrained AF relay adapts its gain and transmit power to minimize the symbol error probability (SEP) at the destination. Also derived are insightful and practically amenable closed-form bounds for the optimal relay gain. We then analyze the SEP of MPSK, derive tight bounds for it, and characterize the diversity order for Rayleigh fading. Also derived is an SEP approximation that is accurate to within 0.1 dB. Extensive results show that the scheme yields significant energy savings of 2.0-7.7 dB at the source and relay. Optimal relay placement for the proposed scheme is also characterized, and differs from that for fixed-gain or fixed-power relaying. Generalizations to MQAM and other fading distributions are also discussed.
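As a baseline point of reference (not the optimal adaptive scheme of the abstract), the Monte Carlo sketch below evaluates the classical fixed-power, variable-gain two-hop AF link over Rayleigh fading, using the standard end-to-end SNR g1*g2/(g1 + g2 + 1) and the BPSK error probability Q(sqrt(2*SNR)); the per-hop average SNRs are assumed values.

```python
# Monte-Carlo sketch of the classical fixed-power (variable-gain) two-hop AF
# baseline: end-to-end SNR g1*g2/(g1 + g2 + 1), BPSK SEP Q(sqrt(2*SNR)),
# Rayleigh fading on both hops.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def af_sep_bpsk(snr1_db, snr2_db, trials=200_000):
    g1 = rng.exponential(10 ** (snr1_db / 10), trials)   # source-relay instantaneous SNR
    g2 = rng.exponential(10 ** (snr2_db / 10), trials)   # relay-destination instantaneous SNR
    gamma = g1 * g2 / (g1 + g2 + 1)                      # end-to-end SNR
    return norm.sf(np.sqrt(2 * gamma)).mean()            # average of Q(sqrt(2*gamma))

for snr in (5, 10, 15, 20):
    print(f"{snr} dB per hop: SEP ~ {af_sep_bpsk(snr, snr):.2e}")
```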
Abstract:
We characterise higher order Riesz transforms on the Heisenberg group and also show that they satisfy dimension-free bounds under some assumptions on the multipliers. Using transference theorems, we deduce boundedness theorems for Riesz transforms on the reduced Heisenberg group and hence also for the Riesz transforms associated to multiple Hermite and Laguerre expansions.
Abstract:
A unit cube in R^k (or a k-cube in short) is defined as the Cartesian product R_1 x R_2 x ... x R_k, where R_i (for 1 <= i <= k) is a closed interval of the form [a_i, a_i + 1] on the real line. A k-cube representation of a graph G is a mapping of the vertices of G to k-cubes such that two vertices in G are adjacent if and only if their corresponding k-cubes have a non-empty intersection. The cubicity of G is the minimum k such that G has a k-cube representation. From a geometric embedding point of view, a k-cube representation of G = (V, E) yields an embedding f: V -> R^k such that for any two vertices u and v, ||f(u) - f(v)||_inf <= 1 if and only if uv is in E. We first present a randomized algorithm that constructs the cube representation of any graph on n vertices with maximum degree Delta in O(Delta ln n) dimensions. This algorithm is then derandomized to obtain a polynomial-time deterministic algorithm that also produces the cube representation of the input graph in the same number of dimensions. The bandwidth ordering of the graph is studied next, and it is shown that our algorithm can be improved to produce a cube representation of the input graph G in O(Delta ln b) dimensions, where b is the bandwidth of G, given a bandwidth ordering of G. Note that b <= n and b is much smaller than n for many well-known graph classes. Another upper bound of b + 1 on the cubicity of any graph with bandwidth b is also shown. Together, these results imply that for any graph G with maximum degree Delta and bandwidth b, the cubicity is O(min{b, Delta ln b}). The upper bound of b + 1 is used to derive upper bounds for the cubicity of circular-arc graphs, cocomparability graphs and AT-free graphs in terms of the maximum degree Delta.
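The defining property of a k-cube representation can be checked directly from the embedding view above; the toy verifier below does exactly that. The example graph and anchor points are illustrative: K4 minus an edge is an interval graph, so a single dimension suffices.

```python
# Toy verifier for the embedding view of cubicity: f is a valid k-cube
# representation of G = (V, E) iff ||f(u) - f(v)||_inf <= 1 exactly when uv
# is an edge.  Example: K4 minus an edge with a 1-dimensional representation.
import numpy as np
from itertools import combinations

def is_cube_representation(edges, f):
    """edges: set of frozensets {u, v}; f: dict mapping each vertex to its anchor in R^k."""
    for u, v in combinations(f, 2):
        gap = np.max(np.abs(np.asarray(f[u], float) - np.asarray(f[v], float)))
        if (gap <= 1) != (frozenset((u, v)) in edges):
            return False
    return True

edges = {frozenset(e) for e in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]}
anchors = {0: [0.5], 1: [0.9], 2: [0.0], 3: [1.1]}   # unit intervals [a_i, a_i + 1]
print(is_cube_representation(edges, anchors))         # True
```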
Abstract:
We introduce the defect sequence for a contractive tuple of Hilbert space operators and investigate its properties. The defect sequence is a sequence of numbers, called defect dimensions, associated with a contractive tuple. We show that there are upper bounds for the defect dimensions. The tuples for which these upper bounds are attained are called maximal contractive tuples. The upper bounds are different in the non-commutative and in the commutative case. We show that the creation operators on the full Fock space and the coordinate multipliers on the Drury-Arveson space are maximal. We also study pure tuples and see how the defect dimensions play a role in their irreducibility.
Abstract:
Motivated by applications to distributed storage, Gopalan et al. recently introduced the interesting notion of information-symbol locality in a linear code. By this it is meant that each message symbol appears in a parity-check equation of small Hamming weight, thereby enabling recovery of the message symbol by examining a small number of other code symbols. This notion is expanded to the case when all code symbols, not just the message symbols, are covered by such "local" parity. In this paper, we extend the results of Gopalan et al. so as to permit recovery of an erased code symbol even in the presence of errors in local parity symbols. We present tight bounds on the minimum distance of such codes and exhibit codes that are optimal with respect to the local error-correction property. As a corollary, we obtain an upper bound on the minimum distance of a concatenated code.
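The basic locality notion (before the error-resilience extension studied in the paper) can be illustrated with a toy binary code in which every symbol lies in one XOR parity group, so an erased symbol is repaired from the r other symbols of its group. The group size and message below are assumptions for illustration, not the paper's optimal constructions.

```python
# Toy illustration of all-symbol locality with binary XOR parities: symbols are
# partitioned into groups of size r + 1 whose XOR is zero, so any single erased
# symbol is recovered from the r other symbols of its group.
def encode(message_bits, r):
    """Append one XOR parity per group of r message bits."""
    codeword = []
    for i in range(0, len(message_bits), r):
        group = message_bits[i:i + r]
        parity = 0
        for b in group:
            parity ^= b
        codeword.extend(group + [parity])
    return codeword

def repair(codeword, erased_idx, r):
    """Recover codeword[erased_idx] by XOR-ing the rest of its local group."""
    start = (erased_idx // (r + 1)) * (r + 1)
    value = 0
    for i in range(start, start + r + 1):
        if i != erased_idx:
            value ^= codeword[i]
    return value

cw = encode([1, 0, 1, 1, 0, 0], r=3)           # two local groups of size 4
assert repair(cw, erased_idx=2, r=3) == cw[2]  # erased symbol recovered locally
print(cw)
```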
Abstract:
Our work is motivated by geographical forwarding of sporadic alarm packets to a base station in a wireless sensor network (WSN), where the nodes are sleep-wake cycling periodically and asynchronously. We seek to develop local forwarding algorithms that can be tuned so as to trade off the end-to-end delay against a total cost, such as the hop count or total energy. Our approach is to solve, at each forwarding node en route to the sink, the local forwarding problem of minimizing the one-hop waiting delay subject to a lower bound constraint on a suitable reward offered by the next-hop relay; the constraint serves to tune the tradeoff. The reward metric used for the local problem is based on the end-to-end total cost objective (for instance, when the total cost is hop count, we choose to use the progress toward the sink made by a relay as the reward). The forwarding node, to begin with, is uncertain about the number of relays, their wake-up times, and the reward values, but knows the probability distributions of these quantities. At each relay wake-up instant, when a relay reveals its reward value, the forwarding node's problem is to forward the packet or to wait for further relays to wake up. In terms of the operations research literature, our work can be considered a variant of the asset selling problem. We formulate our local forwarding problem as a partially observable Markov decision process (POMDP) and obtain inner and outer bounds for the optimal policy. Motivated by the computational complexity involved in the policies derived from these bounds, we formulate an alternate simplified model, the optimal policy for which is a simple threshold rule. We provide simulation results to compare the performance of the inner and outer bound policies against the simple policy, and also against the optimal policy when the source knows the exact number of relays. Observing the good performance and ease of implementation of the simple policy, we apply it to our motivating problem, i.e., local geographical routing of sporadic alarm packets in a large WSN. We compare the end-to-end performance (i.e., average total delay and average total cost) obtained by the simple policy, when used for local geographical forwarding, against that obtained by the globally optimal forwarding algorithm proposed by Kim et al. [1].
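The flavor of the simple threshold rule mentioned above can be conveyed with a toy simulation: relays wake up at random instants with random rewards, and the forwarder transmits to the first relay whose revealed reward exceeds a threshold, otherwise settling for the last relay. All distributions, the relay count, and the thresholds below are assumptions, not the paper's model.

```python
# Toy simulation of a threshold forwarding rule (asset-selling flavour):
# raising the threshold lowers the achieved reward less often but increases
# the mean one-hop waiting delay.
import numpy as np

rng = np.random.default_rng(3)

def simulate(threshold, n_relays=5, period=1.0, trials=50_000):
    delays, rewards = [], []
    for _ in range(trials):
        wake = np.sort(rng.uniform(0, period, n_relays))   # relay wake-up instants
        rew = rng.uniform(0, 1, n_relays)                   # rewards revealed on wake-up
        for t, r in zip(wake, rew):
            if r >= threshold:
                delays.append(t)
                rewards.append(r)
                break
        else:                                               # no relay met the threshold
            delays.append(wake[-1])
            rewards.append(rew[-1])
    return np.mean(delays), np.mean(rewards)

for th in (0.0, 0.5, 0.8):
    d, r = simulate(th)
    print(f"threshold {th:.1f}: mean delay {d:.3f}, mean reward {r:.3f}")
```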
Abstract:
We study the tradeoff between the average error probability and the average queueing delay of messages that arrive randomly at the transmitter of a point-to-point discrete memoryless channel using variable-rate, fixed-codeword-length random coding. Bounds on the exponential decay rate of the average error probability with average queueing delay, in the regime of large average delay, are obtained. Upper and lower bounds on the optimal average delay for a given average error probability constraint are presented. We then formulate a constrained Markov decision problem for characterizing the rate of transmission as a function of queue size given an average error probability constraint. Using a Lagrange multiplier, the constrained Markov decision problem is converted to a problem of minimizing the average cost for a Markov decision problem. A simple heuristic policy is proposed which approximately achieves the optimal average cost.
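A hedged sketch of the Lagrangian step mentioned above, with assumed notation (queue length Q_t standing in for delay via Little's law, rate policy r(.), per-codeword error probability p_e(.), and error constraint epsilon): the constrained average-cost problem and its relaxation read

```latex
% Assumed notation, not taken from the paper: constrained average-cost MDP and
% its Lagrangian relaxation with multiplier lambda >= 0.
\min_{r(\cdot)}\ \limsup_{T\to\infty}\frac{1}{T}\,\mathbb{E}\Big[\sum_{t=1}^{T} Q_t\Big]
\quad\text{s.t.}\quad
\limsup_{T\to\infty}\frac{1}{T}\,\mathbb{E}\Big[\sum_{t=1}^{T} p_e\big(r(Q_t)\big)\Big]\le\epsilon
\;\;\longrightarrow\;\;
\min_{r(\cdot)}\ \limsup_{T\to\infty}\frac{1}{T}\,
\mathbb{E}\Big[\sum_{t=1}^{T}\Big(Q_t+\lambda\,p_e\big(r(Q_t)\big)\Big)\Big],
\qquad \lambda\ge 0,
```

and sweeping the multiplier lambda traces out the delay-versus-error tradeoff.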