981 results for "Polynomial time hierarchy"


Relevance: 100.00%

Abstract:

It is shown that determining whether a quantum computation has a nonzero probability of accepting is at least as hard as the polynomial time hierarchy. This hardness result also applies to determining in general whether a given quantum basis state appears with nonzero amplitude in a superposition, or whether a given quantum bit has positive expectation value at the end of a quantum computation. This result is achieved by showing that the complexity class NQP of Adleman, DeMarrais, and Huang, a quantum analog of NP, is equal to the counting class co-C=P.
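
The decision problem in question is easy to state with a brute-force simulation: compute the final state vector of a circuit and test whether the accepting amplitude is nonzero. The sketch below (plain NumPy, a made-up 3-qubit circuit of Hadamards) takes time exponential in the number of qubits, which is consistent with the abstract's point that even this yes/no question is hard for the polynomial time hierarchy.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_1q(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                          # start in |000>
for q in range(n):                      # toy circuit: Hadamard on every qubit
    state = apply_1q(state, H, q, n)

p_accept = abs(state[-1]) ** 2          # "accept" = measuring |111>
print(p_accept, p_accept > 1e-12)       # 0.125, True
```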

Relevance: 100.00%

Abstract:

The purpose of this paper is to describe a new decomposition construction for perfect secret sharing schemes with graph access structures. The previous decomposition construction proposed by Stinson is a recursive method that uses small secret sharing schemes as building blocks in the construction of larger schemes. When Stinson's method is applied to graph access structures, the number of such “small” schemes is typically exponential in the number of participants, resulting in an exponential algorithm. Our method has the same flavor as the Stinson decomposition construction; however, the linear programming problem involved in the construction is formulated in such a way that the number of “small” schemes is polynomial in the number of participants, which in turn gives rise to a polynomial time construction. We also show that applying the Stinson construction to the “small” schemes arising from our new construction yields the same information rate.
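
To make the linear programming step concrete, here is a minimal sketch of a fractional-cover LP of the kind that appears in decomposition constructions: weights are assigned to candidate small schemes so that every edge of the access structure is covered, while the maximum per-participant load (which governs the information rate) is minimized. The 4-cycle instance, the star-shaped schemes, and the rate read-out are purely illustrative; they are not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: access structure = the 4-cycle; candidate "small" schemes
# are the four stars, each covering the two edges at its center vertex.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
schemes = [
    {'edges': {(0, 1), (3, 0)}, 'verts': {0, 1, 3}},  # star at vertex 0
    {'edges': {(1, 2), (2, 3)}, 'verts': {1, 2, 3}},  # star at vertex 2
    {'edges': {(0, 1), (1, 2)}, 'verts': {0, 1, 2}},  # star at vertex 1
    {'edges': {(2, 3), (3, 0)}, 'verts': {0, 2, 3}},  # star at vertex 3
]
ns = len(schemes)
# Variables: weights x_0..x_{ns-1}, then t (the max per-vertex load).
c = np.zeros(ns + 1)
c[-1] = 1.0                                # minimize t
A_ub, b_ub = [], []
for e in edges:                            # cover: sum_{i covers e} x_i >= 1
    A_ub.append([-(1.0 if e in s['edges'] else 0.0) for s in schemes] + [0.0])
    b_ub.append(-1.0)
for v in range(4):                         # load: sum_{i uses v} x_i <= t
    A_ub.append([(1.0 if v in s['verts'] else 0.0) for s in schemes] + [-1.0])
    b_ub.append(0.0)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (ns + 1))
print(res.x[:-1], 1.0 / res.fun)  # weights, and the rate this candidate set implies
```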

Relevance: 100.00%

Abstract:

The boxicity (cubicity) of a graph G, denoted by box(G) (respectively cub(G)), is the minimum integer k such that G can be represented as the intersection graph of axis-parallel boxes (cubes) in ℝ^k. The problem of computing boxicity (cubicity) is known to be inapproximable in polynomial time within an O(n^{0.5−ε}) factor for any ε > 0, unless NP = ZPP, even for graph classes like bipartite, co-bipartite and split graphs. We prove that if a graph G on n vertices has a clique on n − k vertices, then box(G) can be computed in time n^2 · 2^{O(k^2 log k)}. Using this fact, various FPT approximation algorithms for boxicity are derived. The parameter used is the vertex (or edge) edit distance of the input graph from certain graph families of bounded boxicity, such as interval graphs and planar graphs. Using the same fact, we also derive an O(n·√(log log n)/√(log n)) factor approximation algorithm for computing boxicity, which, to our knowledge, is the first o(n) factor approximation algorithm for the problem. We also present an FPT approximation algorithm for computing the cubicity of graphs, with vertex cover number as the parameter.
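
The box representation in the definition is easy to verify computationally: two axis-parallel boxes intersect iff their interval projections overlap on every axis. A small sketch (with hand-picked coordinates) realizes the 4-cycle C4, a graph of boxicity 2, as the intersection graph of rectangles in ℝ^2.

```python
from itertools import combinations

def boxes_intersect(b1, b2):
    """Boxes as per-axis (lo, hi) intervals; two boxes meet iff their
    projections overlap on every axis."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

def intersection_graph(boxes):
    return {(i, j) for i, j in combinations(range(len(boxes)), 2)
            if boxes_intersect(boxes[i], boxes[j])}

# Hand-made 2-dimensional realization of the 4-cycle C4 (box(C4) = 2):
boxes = [
    [(0, 1), (0, 3)],   # vertex 0: tall box on the left
    [(0, 3), (0, 1)],   # vertex 1: wide box at the bottom
    [(2, 3), (0, 3)],   # vertex 2: tall box on the right
    [(0, 3), (2, 3)],   # vertex 3: wide box at the top
]
print(intersection_graph(boxes))  # the four edges of C4: (0,1), (1,2), (2,3), (0,3)
```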

Relevance: 100.00%

Abstract:

Fix a prime p. Given a positive integer k, a vector of positive integers Δ = (Δ₁, Δ₂, ..., Δ_k) and a function Γ : F_p^k → F_p, we say that a function P : F_p^n → F_p is (k, Δ, Γ)-structured if there exist polynomials P₁, P₂, ..., P_k : F_p^n → F_p with each deg(P_i) ≤ Δ_i such that for all x ∈ F_p^n, P(x) = Γ(P₁(x), P₂(x), ..., P_k(x)). For instance, an n-variate polynomial over the field F_p of total degree d factors nontrivially exactly when it is (2, (d − 1, d − 1), prod)-structured, where prod(a, b) = a · b. We show that if p > d, then for any fixed k, Δ, Γ, we can decide whether a given polynomial P(x₁, x₂, ..., x_n) of degree d is (k, Δ, Γ)-structured and, if so, find a witnessing decomposition. The algorithm takes poly(n) time. Our approach is based on higher-order Fourier analysis.
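
Verifying a witnessing decomposition (as opposed to finding one, which is the paper's contribution) is just a pointwise check of the identity over F_p^n. A brute-force sketch over all p^n points, using the factoring example: (x₁ + x₂)² over F₅ is (2, (1, 1), prod)-structured.

```python
from itertools import product

def is_structured(P, Ps, Gamma, p, n):
    """Brute-force check of the witnessing identity
    P(x) = Gamma(P_1(x), ..., P_k(x)) over all x in F_p^n."""
    return all(P(x) == Gamma(*(Pi(x) for Pi in Ps))
               for x in product(range(p), repeat=n))

# Example: over F_5, x1^2 + 2*x1*x2 + x2^2 = (x1 + x2)^2, i.e., it is
# (2, (1, 1), prod)-structured with P1 = P2 = x1 + x2.
p, n = 5, 2
P  = lambda x: (x[0] ** 2 + 2 * x[0] * x[1] + x[1] ** 2) % p
P1 = lambda x: (x[0] + x[1]) % p
prod_gamma = lambda a, b: (a * b) % p
print(is_structured(P, [P1, P1], prod_gamma, p, n))  # True
```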

Relevance: 100.00%

Abstract:

This paper presents an efficient algorithm for robust network reconstruction of Linear Time-Invariant (LTI) systems in the presence of noise, estimation errors and unmodelled nonlinearities. The method builds on previous work [1] on robust reconstruction to provide a practical implementation with polynomial computational complexity. Following the same experimental protocol, the algorithm obtains a set of structurally-related candidate solutions spanning every level of sparsity. We prove the existence of a magnitude bound on the noise which, if satisfied, guarantees that one of these structures is the correct solution. A problem-specific model-selection procedure then selects a single solution from this set and provides a measure of confidence in that solution. Extensive simulations quantify the expected performance for different levels of noise and show that significantly more noise can be tolerated in comparison to the original method.
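
The paper's selection procedure is problem-specific, but its overall shape — score each candidate on a sparsity path with a penalized fit, then report how decisively the winner beats the runner-up — can be sketched generically. Everything below (the BIC-style score, the toy candidate list) is an illustrative stand-in, not the authors' method.

```python
import numpy as np

def select_model(candidates, n_samples):
    """Pick one (links, residual) candidate by a BIC-style penalized fit;
    the score gap to the runner-up serves as a crude confidence measure."""
    scores = np.array([
        n_samples * np.log(residual ** 2 / n_samples) + links * np.log(n_samples)
        for links, residual in candidates
    ])
    best = int(np.argmin(scores))
    margin = np.partition(scores, 1)[1] - scores[best]
    return best, margin

# Toy sparsity path: denser candidates fit better but pay a larger penalty.
candidates = [(2, 4.0), (4, 1.5), (6, 1.3), (9, 1.25)]
print(select_model(candidates, n_samples=200))
```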

Relevance: 100.00%

Abstract:

The performance of a randomized version of the subgraph-exclusion algorithm (called Ramsey) for CLIQUE by Boppana and Halldórsson is studied on very large graphs. We compare the performance of this algorithm with that of two common heuristic algorithms, the greedy heuristic and a version of simulated annealing. These algorithms are tested on graphs with up to 10,000 vertices on a workstation and graphs as large as 70,000 vertices on a Connection Machine. Our implementations establish the ability to run clique approximation algorithms on very large graphs. We test our implementations on a variety of different graphs. Our conclusions indicate that on randomly generated graphs, minor changes to the distribution can cause dramatic changes in the performance of the heuristic algorithms. The Ramsey algorithm, while not as good as the others for the most common distributions, seems more robust and provides a more even overall performance. In general, and especially on deterministically generated graphs, a combination of simulated annealing with either the Ramsey algorithm or the greedy heuristic seems to perform best. This combined algorithm works particularly well on large Keller and Hamming graphs and has a competitive overall performance on the DIMACS benchmark graphs.
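
The Ramsey subroutine at the core of the subgraph-exclusion algorithm is short enough to sketch: pivot on a vertex, recurse on its neighborhood and non-neighborhood, and return both the best clique and the best independent set found; the outer loop then excludes the independent set and repeats. This is a minimal rendering of the Boppana–Halldórsson scheme (adjacency as a dict of neighbor sets), not the paper's tuned large-graph implementation.

```python
import random

def ramsey(adj, vertices):
    """One Ramsey pass: returns (clique, independent set). The randomized
    variant studied in the paper picks the pivot at random."""
    if not vertices:
        return set(), set()
    v = random.choice(sorted(vertices))
    c1, i1 = ramsey(adj, vertices & adj[v])         # recurse on neighbors
    c2, i2 = ramsey(adj, vertices - adj[v] - {v})   # recurse on non-neighbors
    clique = max(c1 | {v}, c2, key=len)
    indep = max(i1, i2 | {v}, key=len)
    return clique, indep

def clique_by_subgraph_exclusion(adj):
    """Repeatedly run Ramsey and exclude the independent set it found."""
    vertices, best = set(adj), set()
    while vertices:
        clique, indep = ramsey(adj, vertices)
        best = max(best, clique, key=len)
        vertices -= indep                            # indep is always non-empty
    return best

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}    # triangle + isolated vertex
print(clique_by_subgraph_exclusion(adj))             # {0, 1, 2}
```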

Relevance: 100.00%

Abstract:

The isomorphisms holding in all models of the simply typed lambda calculus with surjective pairing and a terminal object are well studied; these models are exactly the Cartesian closed categories. Isomorphism of two simple types in such a model is decidable by reduction to a normal form and comparison under a finite number of permutations (Bruce, Di Cosmo, and Longo 1992). Unfortunately, these normal forms may be exponentially larger than the original types, so this construction decides isomorphism in exponential time. We show how space-sharing/hash-consing techniques and memoization can be used to decide isomorphism in practical polynomial time (low degree, small hidden constant). Other researchers have investigated simple type isomorphism in relation to, among other potential applications, type-based retrieval of software modules from libraries and automatic generation of bridge code for multi-language systems. Our result makes such potential applications practically feasible.
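
To fix ideas, here is a minimal normalizer for the CCC isomorphism axioms (currying, distribution of arrows over products, commutativity/associativity of products, unit laws). Types are hash-consed as nested tuples and normalization is memoized over shared subterms; sorting curried arguments and product factors stands in for the finite permutation comparison. This is a sketch of the idea only, not the paper's data structures.

```python
# Type representations: ('base', name), ('unit',), ('prod', a, b), ('arrow', a, b).
# Normal forms: 'unit', 'base', sorted n-ary products ('prodn', f1, ...) of
# non-product factors, and ('fun', sorted_args, base_codomain).

def prod(parts):
    fs = []
    for t in parts:
        if t == ('unit',):
            continue                                  # 1 x A = A
        fs.extend(t[1:] if t[0] == 'prodn' else [t])  # flatten, then sort
    if not fs:
        return ('unit',)
    return fs[0] if len(fs) == 1 else ('prodn',) + tuple(sorted(fs))

def fun(args, cod):
    if cod == ('unit',):
        return ('unit',)                              # A -> 1 = 1
    if cod[0] == 'prodn':                             # A -> (B x C) = (A->B) x (A->C)
        return prod([fun(args, f) for f in cod[1:]])
    if cod[0] == 'fun':                               # merge nested arrows
        args, cod = args + list(cod[1]), cod[2]
    flat = []
    for a in args:
        if a == ('unit',):
            continue                                  # 1 -> A = A
        flat.extend(a[1:] if a[0] == 'prodn' else [a])  # curry product domains
    return cod if not flat else ('fun', tuple(sorted(flat)), cod)

def normalize(t, memo):
    if t not in memo:                                 # memoize shared subterms
        tag = t[0]
        if tag in ('base', 'unit'):
            memo[t] = t
        elif tag == 'prod':
            memo[t] = prod([normalize(t[1], memo), normalize(t[2], memo)])
        else:                                         # 'arrow'
            memo[t] = fun([normalize(t[1], memo)], normalize(t[2], memo))
    return memo[t]

def isomorphic(s, t):
    memo = {}
    return normalize(s, memo) == normalize(t, memo)

a, b, c = ('base', 'a'), ('base', 'b'), ('base', 'c')
print(isomorphic(('arrow', ('prod', a, b), c),        # (a x b) -> c
                 ('arrow', b, ('arrow', a, c))))      # b -> (a -> c)  => True
```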

Relevance: 100.00%

Abstract:

Credal networks relax the precise probability requirement of Bayesian networks, enabling a richer representation of uncertainty in the form of closed convex sets of probability measures. The increase in expressiveness comes at the expense of higher computational costs. In this paper, we present a new variable elimination algorithm for exactly computing posterior inferences in extensively specified credal networks, which is empirically shown to outperform a state-of-the-art algorithm. The algorithm is then turned into a provably good approximation scheme, that is, a procedure that for any input is guaranteed to return a solution not worse than the optimum by a given factor. Remarkably, we show that when the networks have bounded treewidth and bounded number of states per variable the approximation algorithm runs in time polynomial in the input size and in the inverse of the error factor, thus being the first known fully polynomial-time approximation scheme for inference in credal networks.
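
Variable elimination itself is easiest to see in the precise (Bayesian) special case: repeatedly multiply the factors that mention a variable and sum that variable out, following some elimination order. The sketch below implements exactly that for binary variables; the credal algorithm additionally optimizes over the extreme points of the local credal sets at each step, which is omitted here.

```python
from itertools import product

# A factor is (vars, table): vars is a tuple of names, table maps assignments
# (tuples of 0/1 aligned with vars) to floats.

def multiply(f, g):
    fv, ft = f
    gv, gt = g
    vs = fv + tuple(v for v in gv if v not in fv)
    table = {}
    for asg in product((0, 1), repeat=len(vs)):
        env = dict(zip(vs, asg))
        table[asg] = ft[tuple(env[v] for v in fv)] * gt[tuple(env[v] for v in gv)]
    return (vs, table)

def sum_out(f, var):
    fv, ft = f
    vs = tuple(v for v in fv if v != var)
    table = {}
    for asg, val in ft.items():
        key = tuple(x for v, x in zip(fv, asg) if v != var)
        table[key] = table.get(key, 0.0) + val
    return (vs, table)

def eliminate(factors, order):
    for var in order:
        related = [f for f in factors if var in f[0]]
        rest = [f for f in factors if var not in f[0]]
        combined = related[0]
        for f in related[1:]:
            combined = multiply(combined, f)
        factors = rest + [sum_out(combined, var)]
    return factors

# Toy network A -> B: marginalize A to get P(B).
fA = (('A',), {(0,): 0.6, (1,): 0.4})
fBA = (('B', 'A'), {(0, 0): 0.9, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.8})
(result,) = eliminate([fA, fBA], order=['A'])
print(result)  # P(B=0) = 0.62, P(B=1) = 0.38
```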

Relevance: 100.00%

Abstract:

How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?

We make progress toward understanding these questions through studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer, that likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.

Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.
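
The target distribution itself is easy to state: measure the Quantum Fourier Transform of the state Σ_x f(x)|x⟩, i.e., sample y with probability proportional to |f̂(y)|². A quantum computer does this efficiently for efficiently computable f; the brute-force classical sketch below (NumPy, a toy parity-sign function over Z_{2^n}) pays exponential time just to tabulate the distribution.

```python
import numpy as np

n = 4
N = 2 ** n
# Toy efficiently computable function: f(x) = (-1)^{popcount(x)}.
f = np.array([1.0 if bin(x).count('1') % 2 == 0 else -1.0 for x in range(N)])

f_hat = np.fft.fft(f) / np.sqrt(N)   # Fourier transform over Z_{2^n}
probs = np.abs(f_hat) ** 2
probs /= probs.sum()                  # normalize (Parseval, up to rounding)

samples = np.random.choice(N, size=5, p=probs)
print(samples)                        # five draws from the QFT distribution
```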

Relevance: 90.00%

Abstract:

We consider the problem of minimizing the total completion time on a single batch processing machine. The set of jobs to be scheduled can be partitioned into a number of families, where all jobs in the same family have the same processing time. The machine can process at most B jobs simultaneously as a batch, and the processing time of a batch is equal to the processing time of the longest job in the batch. We analyze the properties of an optimal schedule and develop a dynamic programming algorithm of polynomial time complexity when the number of job families is fixed. The research is motivated by the problem of scheduling burn-in ovens in the semiconductor industry.
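
The dynamic program can be sketched with a simple state: the vector of jobs remaining per family. Scheduling a batch of duration t first delays every remaining job by t, which gives the recurrence dp(rem) = min over batch compositions c of t(c)·|rem| + dp(rem − c). The memoized sketch below follows this recurrence; it is a plain rendering of the idea, not necessarily the paper's exact formulation, and the instance at the end is made up.

```python
from functools import lru_cache
from itertools import product

def min_total_completion(counts, proc, B):
    """counts[f]: jobs in family f; proc[f]: per-job processing time;
    B: batch capacity. Returns the minimum total completion time."""
    m = len(counts)

    @lru_cache(maxsize=None)
    def dp(rem):
        if sum(rem) == 0:
            return 0
        best = float('inf')
        # Choose the composition of the first batch: c[f] jobs of family f.
        for c in product(*(range(r + 1) for r in rem)):
            size = sum(c)
            if size == 0 or size > B:
                continue
            t = max(proc[f] for f in range(m) if c[f] > 0)  # longest job in batch
            # Batch jobs finish at t; every other remaining job is delayed by t.
            cost = t * sum(rem) + dp(tuple(r - x for r, x in zip(rem, c)))
            best = min(best, cost)
        return best

    return dp(tuple(counts))

# Three families, batch capacity 2 (illustrative numbers).
print(min_total_completion(counts=[3, 2, 1], proc=[2, 5, 3], B=2))
```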