190 results for gap, minproblem, algoritmi, esatti, lower, bound, posta


Relevance: 100.00%

Abstract:

A common trick for designing faster quantum adiabatic algorithms is to apply the adiabaticity condition locally at every instant. However, it is often difficult to determine the instantaneous gap between the lowest two eigenvalues, which is an essential ingredient in the adiabaticity condition. In this paper we present a simple linear-algebraic technique for obtaining a lower bound on the instantaneous gap even in such a situation. As an illustration, we investigate the adiabatic unordered search of van Dam et al. [17] and Roland and Cerf [15] when the non-zero entries of the diagonal final Hamiltonian are perturbed by an amount polynomial in log N, where N is the length of the unordered list. We use our technique to derive a bound on the running time of a local adiabatic schedule in terms of the minimum gap between the lowest two eigenvalues.
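For context, a minimal numerical sketch of the running-time integral behind a local adiabatic schedule, T = (1/eps) * integral_0^1 ds / g(s)^2. The gap expression used below is the standard Roland-Cerf one for the *unperturbed* search Hamiltonian; the perturbed case studied in the paper changes g(s), and eps and the grid size are illustrative choices:

```python
import numpy as np

def gap(s, N):
    # Instantaneous gap between the two lowest eigenvalues for the standard
    # (unperturbed) adiabatic unordered-search Hamiltonian of Roland and Cerf.
    return np.sqrt(1.0 - 4.0 * (1.0 - 1.0 / N) * s * (1.0 - s))

def local_schedule_time(N, eps=0.1, steps=200001):
    # Local adiabaticity: |ds/dt| <= eps * g(s)^2, so the total running time
    # is T = (1/eps) * integral_0^1 ds / g(s)^2 (trapezoidal rule below).
    s = np.linspace(0.0, 1.0, steps)
    y = 1.0 / gap(s, N) ** 2
    return np.sum((y[:-1] + y[1:]) / 2) * (s[1] - s[0]) / eps

for N in (2**10, 2**14, 2**18):
    T = local_schedule_time(N)
    # T / sqrt(N) approaches pi/(2*eps): the Grover-like sqrt(N) speed-up.
    print(f"N = {N:7d}: T = {T:12.1f}, T/sqrt(N) = {T / np.sqrt(N):6.2f}")
```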

Relevance: 100.00%

Abstract:

Motivated by the viscosity bound in gauge/gravity duality, we consider the ratio of shear viscosity (η) to entropy density (s) in black hole accretion flows. We use both an ideal-gas equation of state and the QCD equation of state obtained from the lattice for the fluid accreting onto a Kerr black hole. The QCD equation of state is considered since the temperature of accreting matter is expected to approach 10^12 K in certain hot flows. We find that in both cases η/s is small only for primordial black holes, and is several orders of magnitude larger than for any known fluid for stellar and supermassive black holes. We show that a lower bound on the mass of primordial black holes leads to a lower bound on η/s, and vice versa. Finally, we speculate that the Shakura-Sunyaev viscosity parameter should decrease with increasing density and/or temperature.

Relevance: 100.00%

Abstract:

String theory and gauge/gravity duality suggest that the lower bound of shear viscosity (η) to entropy density (s) for any matter is μħ/(4π k_B), where ħ and k_B are the reduced Planck and Boltzmann constants respectively and μ ≤ 1. Motivated by this, we explore η/s in black hole accretion flows, in order to understand whether such exotic flows could be a natural site for the lowest η/s. Accretion flow plays an important role in black hole physics in identifying the existence of the underlying black hole. It is a rotating shear flow with insignificant molecular viscosity, which could nevertheless have a significant turbulent viscosity, generating transport, heat and hence entropy in the flow. However, in the presence of a strong magnetic field, magnetic stresses can help transport matter independently of viscosity, via the celebrated Blandford-Payne mechanism. In such cases, energy, and then entropy, is produced via Ohmic dissipation. In addition, certain optically thin, hot accretion flows, with temperature ≳ 10^9 K, may be favourable for nuclear burning, which could generate or absorb huge energy, much higher than that in a star. We find that η/s in accretion flows appears to be close to the lower bound suggested by theory if the flows are embedded in a strong magnetic field or produce nuclear energy, i.e., when the source of energy is not viscous effects. A lower bound on η/s also leads to an upper bound on the Reynolds number of the flow.
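For reference, the bound quoted above with μ = 1 (the Kovtun-Son-Starinets value) can be evaluated numerically; a minimal sketch:

```python
import math

# Numerical value of the conjectured bound eta/s >= hbar / (4*pi*k_B),
# the mu = 1 (KSS) case quoted in the abstract above.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
k_B  = 1.380649e-23      # Boltzmann constant, J/K
bound = hbar / (4 * math.pi * k_B)
print(f"eta/s >= {bound:.3e} K*s")   # ~6.08e-13 K*s
```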

Relevance: 100.00%

Abstract:

In the present paper, based on the principles of gauge/gravity duality, we analytically compute the shear viscosity to entropy density (η/s) ratio corresponding to the superfluid phase in Einstein-Gauss-Bonnet gravity. From our analysis we note that the ratio indeed receives a finite-temperature correction below a certain critical temperature (T < T_c). This proves the non-universality of the η/s ratio in higher-derivative theories of gravity. We also compute the upper bound for the Gauss-Bonnet coupling (λ) corresponding to the symmetry-broken phase and note that the upper bound on the coupling does not seem to change as long as we are close to the critical point of the phase diagram. However, the corresponding lower bound of the η/s ratio does seem to get modified due to the finite-temperature effects.

Relevance: 100.00%

Abstract:

Given a Boolean function f: F_2^n → {0,1}, we say a triple (x, y, x + y) is a triangle in f if f(x) = f(y) = f(x + y) = 1. A triangle-free function contains no triangle. If f differs from every triangle-free function on at least ε·2^n points, then f is said to be ε-far from triangle-free. In this work, we analyze the query complexity of testers that, with constant probability, distinguish triangle-free functions from those ε-far from triangle-free. The canonical tester for triangle-freeness denotes the algorithm that repeatedly picks x and y uniformly and independently at random from F_2^n, queries f(x), f(y) and f(x + y), and checks whether f(x) = f(y) = f(x + y) = 1. Green showed that the canonical tester rejects functions ε-far from triangle-free with constant probability if its query complexity is a tower of 2's whose height is polynomial in 1/ε. Fox later improved the height of the tower in Green's upper bound to O(log(1/ε)). A trivial lower bound of Ω(1/ε) on the query complexity is immediate. In this paper, we give the first non-trivial lower bound on the number of queries needed. We show that, for every small enough ε, there exists an integer n_0 such that for all n ≥ n_0 there exists a function f: F_2^n → {0,1} depending on all n variables which is ε-far from being triangle-free and requires (1/ε)^4.847 queries for the canonical tester. We also show that the query complexity of any general (possibly adaptive) one-sided tester for triangle-freeness is at least the square root of the query complexity of the corresponding canonical tester. Consequently, any one-sided tester for triangle-freeness must make at least (1/ε)^2.423 queries.
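The canonical tester is simple enough to state directly in code. A minimal sketch, representing f as a predicate on n-bit integers with bitwise XOR playing the role of addition in F_2^n (the example functions and query budget are illustrative):

```python
import random

def canonical_tester(f, n, trials):
    """One-sided canonical tester for triangle-freeness of f: F_2^n -> {0,1}.

    Each trial makes three queries: f(x), f(y), f(x ^ y), where ^ (bitwise
    XOR) is addition in F_2^n.
    """
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) and f(y) and f(x ^ y):
            return False          # triangle found: certainly not triangle-free
    return True                   # never rejects a triangle-free function

# f supported only on {1} is triangle-free (x = y = 1 forces x ^ y = 0),
# while the all-ones function makes every (x, y) pair a triangle.
print(canonical_tester(lambda z: z == 1, n=10, trials=1000))   # True
print(canonical_tester(lambda z: True,  n=10, trials=1000))    # False
```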

Relevance: 100.00%

Abstract:

A novel design for the geometric configuration of honeycombs using a seamless combination of auxetic and conventional cores, i.e., elements with negative and positive Poisson's ratios respectively, is presented. The proposed design is shown to generate a superior band-gap property while retaining all major advantages of a purely conventional or purely auxetic honeycomb structure. The seamless combination ensures that joint cardinality is also retained. Several configurations involving different degrees of auxeticity and different proportions of auxetic and conventional elements have been analyzed. It is shown that the preferred configurations open up wide and clean band gaps at significantly lower frequency ranges compared to their pure counterparts. Since the existence of band gaps is a desired feature for phononic applications, the reported results may be appealing. Such a design may also enable superior vibration control. The proposed configurations can be made iso-volumic and iso-weight, giving designers greater freedom to apply them without significantly changing size and weight criteria.

Relevance: 100.00%

Abstract:

We consider the problem of representing a univariate polynomial f(x) as a sum of powers of low-degree polynomials. We prove a lower bound of Ω(√(d/t)) on the number of summands needed to write an explicit univariate degree-d polynomial f(x) as a sum of powers of degree-t polynomials.

Relevance: 100.00%

Abstract:

Let G = (V, E) be a finite, simple, undirected graph. For S ⊆ V, let δ(S, G) = {(u, v) ∈ E : u ∈ S and v ∈ V − S} be the edge boundary of S. Given an integer i, 1 ≤ i ≤ |V|, let the edge isoperimetric value of G at i be defined as b_e(i, G) = min over S ⊆ V with |S| = i of |δ(S, G)|. The edge isoperimetric peak of G is defined as b_e(G) = max over 1 ≤ j ≤ |V| of b_e(j, G). Let b_v(G) denote the vertex isoperimetric peak, defined in a corresponding way. The problem of determining a lower bound for the vertex isoperimetric peak in complete t-ary trees was recently considered in [Y. Otachi, K. Yamazaki, A lower bound for the vertex boundary-width of complete k-ary trees, Discrete Mathematics, in press (doi: 10.1016/j.disc.2007.05.014)]. In this paper we provide bounds which improve those in the above-cited paper. Our results can be generalized to arbitrary (rooted) trees. The depth d of a tree is the number of nodes on the longest path starting from the root and ending at a leaf. We show that for a complete binary tree of depth d (denoted T_d^2), c_1·d ≤ b_e(T_d^2) ≤ d and c_2·d ≤ b_v(T_d^2) ≤ d, where c_1, c_2 are constants. For a complete t-ary tree of depth d (denoted T_d^t) with d ≥ c·log t, where c is a constant, we show that c_1·√t·d ≤ b_e(T_d^t) ≤ t·d and c_2·d/√t ≤ b_v(T_d^t) ≤ d, where c_1, c_2 are constants. At the heart of our proof is the following theorem, which works for an arbitrary rooted tree and not just for a complete t-ary tree. Let T = (V, E, r) be a finite, connected, rooted tree, the root being the vertex r. Define a weight function w : V → N, where the weight w(u) of a vertex u is the number of its successors (including itself), and let the weight index η(T) be defined as the number of distinct weights in the tree, i.e., η(T) = |{w(u) : u ∈ V}|. For a positive integer k, let ℓ(k) = |{i ∈ N : 1 ≤ i ≤ |V|, b_e(i, G) ≤ k}|. We show that ℓ(k) ≤ 2·C(2η + k, k), where C(·,·) denotes the binomial coefficient.
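To make the definitions concrete, here is a brute-force sketch (exponential in |V|, so only for tiny trees) that computes the profile b_e(i, G) and the peak b_e(G) for the complete binary tree of depth 3; the node numbering is an illustrative choice:

```python
from itertools import combinations

def edge_boundary(S, edges):
    # |delta(S, G)|: edges with exactly one endpoint inside S
    return sum(1 for u, v in edges if (u in S) != (v in S))

def isoperimetric_profile(vertices, edges):
    # b_e(i, G) = min over all S with |S| = i of |delta(S, G)|
    return [min(edge_boundary(set(S), edges)
                for S in combinations(vertices, i))
            for i in range(1, len(vertices) + 1)]

# Complete binary tree of depth 3 (7 nodes; node i has children 2i and 2i+1)
V = list(range(1, 8))
E = [(i, 2 * i) for i in (1, 2, 3)] + [(i, 2 * i + 1) for i in (1, 2, 3)]
prof = isoperimetric_profile(V, E)
print(prof, "peak b_e =", max(prof))   # the peak is max_i b_e(i, G)
```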

Relevance: 100.00%

Abstract:

To detect errors in decision tables one needs to decide whether a given set of constraints is feasible. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in non-numeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision-table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that at least satisfy the simple constraints; this is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable which has not yet been assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The present algorithm also incorporates an idea by which it can be checked whether or not an (m − 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
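A minimal sketch of the pruning idea described above, assuming the simple constraints take the difference form x_j ≥ x_i + w stored as weighted edges; the domains, constraints and variable ordering are hypothetical, and the paper's incremental bound updates and (m − 2)-extension check are omitted:

```python
def feasible(domains, simple, general):
    """Backtracking feasibility search over bounded integer variables.

    domains : list of (lo, hi) inclusive bounds, one per variable
    simple  : list of (i, j, w) meaning x_j >= x_i + w (weighted edge i -> j)
    general : list of predicates over a complete assignment
    """
    n = len(domains)

    def lower_bound(k, partial):
        # Tightest lower bound for x_k implied by simple constraints whose
        # source variable is already assigned.
        lb = domains[k][0]
        for i, j, w in simple:
            if j == k and i < len(partial):
                lb = max(lb, partial[i] + w)
        return lb

    def extend(partial):
        if len(partial) == n:
            return partial if all(g(partial) for g in general) else None
        k = len(partial)
        # Prune: never try values below the implied lower bound.
        for v in range(lower_bound(k, partial), domains[k][1] + 1):
            result = extend(partial + [v])
            if result is not None:
                return result
        return None

    return extend([])

# x1 >= x0 + 2, x2 >= x1 + 1, plus a non-simple constraint x0 + x2 <= 7
print(feasible([(0, 5)] * 3, [(0, 1, 2), (1, 2, 1)],
               [lambda x: x[0] + x[2] <= 7]))   # e.g. [0, 2, 3]
```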

Relevance: 100.00%

Abstract:

We study the secondary structure of RNA determined by Watson-Crick pairing without pseudo-knots using Milnor invariants of links. We focus on the first non-trivial invariant, which we call the Heisenberg invariant. The Heisenberg invariant, which is an integer, can be interpreted in terms of the Heisenberg group as well as in terms of lattice paths. We show that the Heisenberg invariant gives a lower bound on the number of unpaired bases in an RNA secondary structure. We also show that the Heisenberg invariant can predict allosteric structures for RNA: namely, if the Heisenberg invariant is large, then there are widely separated local maxima (i.e., allosteric structures) for the number of Watson-Crick pairs found.

Relevance: 100.00%

Abstract:

The bearing capacity factor N_c for axially loaded piles in clays whose cohesion increases linearly with depth has been estimated numerically under the undrained (φ = 0) condition. The study follows the lower bound limit analysis in conjunction with finite elements and linear programming. A new formulation is proposed for solving an axisymmetric geotechnical stability problem. The variation of N_c with embedment ratio is obtained for several rates of increase of soil cohesion with depth; a special case is also examined in which the pile base is placed on a stiff clay stratum overlaid by a soft clay layer. It was noticed that the magnitude of N_c reaches an almost constant value for embedment ratios greater than unity. The roughness of the pile base and shaft affects the magnitude of N_c only marginally. The results obtained from the present study are found to compare quite well with the different numerical solutions reported in the literature.

Relevance: 100.00%

Abstract:

This paper investigates the problem of designing reverse channel training sequences for a TDD-MIMO spatial-multiplexing system. Assuming perfect channel state information at the receiver and spatial multiplexing at the transmitter with equal power allocation to the m dominant modes of the estimated channel, the pilot is designed to ensure an estimate of the channel which improves the forward link capacity. Using perturbation techniques, a lower bound on the forward link capacity is derived, with respect to which the training sequence is optimized. The reverse channel training sequence thus makes use of the channel knowledge at the receiver. The performance of an orthogonal training sequence with MMSE estimation at the transmitter is compared with that of the proposed training sequence. Simulation results show a significant improvement in performance.
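To illustrate why channel knowledge at the transmitter matters here (this is not the paper's training design or its capacity bound), a small simulation comparing equal-power transmission along the true m dominant right singular vectors of H against uninformed random orthonormal directions; all parameters are illustrative:

```python
import numpy as np

def fwd_rate(H, V, snr):
    # Forward-link rate with equal power over the columns of V:
    # log2 det(I + (snr/m) * (H V)(H V)^H)
    m = V.shape[1]
    G = H @ V
    M = np.eye(H.shape[0]) + (snr / m) * (G @ G.conj().T)
    return np.log2(np.linalg.det(M).real)

rng = np.random.default_rng(1)
Nt = Nr = 4
m, snr = 2, 10.0
informed, uninformed = [], []
for _ in range(500):
    H = (rng.standard_normal((Nr, Nt))
         + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
    V_top = np.linalg.svd(H)[2].conj().T[:, :m]        # m dominant right modes
    Q = np.linalg.qr(rng.standard_normal((Nt, m)))[0]  # uninformed directions
    informed.append(fwd_rate(H, V_top, snr))
    uninformed.append(fwd_rate(H, Q, snr))
print(f"mean rate, dominant modes : {np.mean(informed):.2f} bits/s/Hz")
print(f"mean rate, random pilots  : {np.mean(uninformed):.2f} bits/s/Hz")
```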

Relevance: 100.00%

Abstract:

Scan circuits generally cause excessive switching activity compared to normal circuit operation. The higher switching activity in turn causes a higher peak power-supply current, which results in supply voltage droop and eventually yield loss. This paper proposes an efficient methodology for test vector re-ordering to achieve the minimum peak power supported by the given test vector set. The proposed methodology also minimizes average power under the minimum-peak-power constraint. A methodology to further reduce the peak power below the minimum supported peak power, by inclusion of a minimum number of additional vectors, is also discussed. The paper defines the lower bound on peak power for a given test set. Results on several benchmarks show that the approach can reduce peak power by up to 27%.
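A toy illustration of the reordering idea (not the paper's methodology): model per-cycle switching as the Hamming distance between consecutive test vectors, exhaustively find the minimum achievable peak for a tiny vector set, and compute one simple lower bound in the spirit of the paper's bound, namely that every vector must sit next to some other vector in any ordering:

```python
from itertools import permutations

def transitions(a, b):
    # Switching-activity proxy: bit flips between consecutive scan vectors.
    return bin(a ^ b).count("1")

def min_peak_order(vectors):
    # Exhaustive search (fine only for tiny sets): ordering minimizing the
    # maximum adjacent transition count, i.e. the minimum supported peak.
    best, best_peak = None, float("inf")
    for perm in permutations(vectors):
        peak = max(transitions(x, y) for x, y in zip(perm, perm[1:]))
        if peak < best_peak:
            best, best_peak = perm, peak
    return best, best_peak

def peak_lower_bound(vectors):
    # In any ordering each vector is adjacent to at least one other vector,
    # so the peak is at least every vector's distance to its nearest neighbour.
    return max(min(transitions(v, u) for u in vectors if u != v)
               for v in vectors)

vecs = [0b0000, 0b0011, 0b1100, 0b1111]
print(min_peak_order(vecs))     # e.g. ((0, 3, 15, 12), 2)
print(peak_lower_bound(vecs))   # 2 -- the bound is tight here
```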

Relevance: 100.00%

Abstract:

In this paper, we present numerical evidence that supports the notion of energy minimization in the sequence space of proteins for a target conformation. We use the conformations of real proteins in the Protein Data Bank (PDB) and present computationally efficient methods to identify the sequences with minimum energy. We use an edge-weighted connectivity graph for ranking the residue sites with a reduced amino acid alphabet, and then use continuous optimization to obtain the energy-minimizing sequences. Our methods enable the computation of a lower bound as well as a tight upper bound on the energy of a given conformation. We validate our results by using three different inter-residue energy matrices for five proteins from the PDB, and by comparing our energy-minimizing sequences with 80 million diverse sequences that are generated based on different considerations in each case. When we submitted some of our chosen energy-minimizing sequences to the Basic Local Alignment Search Tool (BLAST), we obtained sequences from the non-redundant protein sequence database that are similar to ours, with E-values of the order of 10^-7. In summary, we conclude that proteins show a trend towards minimizing energy in the sequence space but do not seem to adopt the global energy-minimizing sequence. The reason could be either that the existing energy matrices are unable to accurately represent the inter-residue interactions in the context of the protein environment, or that Nature does not push the optimization in the sequence space once the protein is able to perform its function.
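The paper's actual pipeline (connectivity-graph ranking plus continuous optimization) is more involved; as a stand-in, this sketch runs single-site coordinate descent over a made-up 3-letter alphabet, energy matrix and contact graph, to illustrate what searching sequence space for an energy-minimizing sequence on a fixed conformation means:

```python
# Made-up 3-letter energy matrix (symmetric; negative = favourable contact)
E = [[ 0.5, -1.0,  0.2],
     [-1.0,  0.4,  0.1],
     [ 0.2,  0.1,  0.3]]
contacts = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]  # hypothetical conformation
seq = [0, 0, 0, 0]                                   # start from a uniform sequence

def energy(s):
    # Total contact energy of sequence s threaded onto the fixed conformation.
    return sum(E[s[i]][s[j]] for i, j in contacts)

def sweep(s):
    # One pass of single-site coordinate descent; returns True if s changed.
    changed = False
    for site in range(len(s)):
        best = min(range(len(E)),
                   key=lambda a: energy(s[:site] + [a] + s[site + 1:]))
        if best != s[site]:
            s[site], changed = best, True
    return changed

while sweep(seq):
    pass
print(seq, energy(seq))   # a local (not necessarily global) minimum
```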

Relevance: 100.00%

Abstract:

An acyclic edge coloring of a graph is a proper edge coloring such that there are no bichromatic cycles. The acyclic chromatic index of a graph G, denoted a'(G), is the minimum number k such that there is an acyclic edge coloring using k colors. It was conjectured by Alon, Sudakov and Zaks (and earlier by Fiamčik) that a'(G) ≤ Δ + 2, where Δ = Δ(G) denotes the maximum degree of the graph. Alon et al. also raised the question whether the complete graphs of even order are the only regular graphs which require Δ + 2 colors to be acyclically edge colored. In this article, using a simple counting argument, we observe not only that this is not true, but in fact that every d-regular graph with 2n vertices and d > n requires at least d + 2 colors. We also show that a'(K_{n,n}) ≥ n + 2 when n is odd, using a more non-trivial argument. (Here K_{n,n} denotes the complete bipartite graph with n vertices on each side.) This lower bound for K_{n,n} can be shown to be tight for some families of complete bipartite graphs and for small values of n. We also infer that for every d, n such that d ≥ 5, n ≥ 2d + 3 and dn is even, there exist d-regular graphs which require at least d + 2 colors to be acyclically edge colored.
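A minimal sketch of the central definition: verifying that an edge coloring is proper and has no bichromatic cycle, by checking with union-find that the union of every two color classes is a forest (the 4-cycle example is illustrative):

```python
from itertools import combinations

def find(parent, x):
    # Union-find 'find' with path halving.
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def is_acyclic_edge_coloring(edges, color):
    # Proper: edges sharing an endpoint get distinct colors.
    seen = set()
    for e in edges:
        for v in e:
            if (v, color[e]) in seen:
                return False
            seen.add((v, color[e]))
    # Acyclic: the union of any two color classes must be a forest.
    for c1, c2 in combinations(set(color.values()), 2):
        parent = {}
        for u, v in edges:
            if color[(u, v)] in (c1, c2):
                ru, rv = find(parent, u), find(parent, v)
                if ru == rv:
                    return False    # this edge closes a bichromatic cycle
                parent[ru] = rv
    return True

C4 = [(1, 2), (2, 3), (3, 4), (1, 4)]
print(is_acyclic_edge_coloring(C4, {(1, 2): 'a', (2, 3): 'b',
                                    (3, 4): 'a', (1, 4): 'b'}))  # False
print(is_acyclic_edge_coloring(C4, {(1, 2): 'a', (2, 3): 'b',
                                    (3, 4): 'a', (1, 4): 'c'}))  # True
```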