947 results for NP Complete


Relevance:

60.00%

Publisher:

Abstract:

This work is a follow-up to [2, FUN 2010], which initiated a detailed analysis of the popular game of UNO®. We consider the solitaire version of the game, which was shown to be NP-complete. In [2], the authors also give an n^{O(c^2)} algorithm, where c is the number of colors across all the cards, which implies, in particular, that the problem is polynomial-time solvable when the number of colors is a constant. In this work, we propose a kernelization algorithm, a consequence of which is that the problem is fixed-parameter tractable when the number of colors is treated as a parameter. This removes the exponential dependence on c and answers the question stated in [2] in the affirmative. We also introduce a natural and possibly more challenging version of UNO that we call "All Or None UNO". For this variant, we prove that even the single-player version is NP-complete, and we give a single-exponential FPT algorithm, along with a cubic kernel.
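
To make the single-player feasibility question concrete: given a collection of cards, each with a color and a number, decide whether all cards can be discarded in some order so that each discard matches the previously discarded card in color or number (the usual simplified solitaire rules in this line of work). The brute-force sketch below, with names of our own choosing, is only an illustration of that statement; it is exponential, unlike the kernelization discussed above.

```python
def can_play_all(cards):
    """Brute-force feasibility check for single-player UNO (illustrative only):
    cards is a list of (color, number) pairs; we ask whether every card can be
    discarded so that each discard matches the previous one in color or number."""
    n = len(cards)

    def search(used, prev):
        if len(used) == n:
            return True
        for i, (col, num) in enumerate(cards):
            if i in used:
                continue
            if prev is None or prev[0] == col or prev[1] == num:
                if search(used | {i}, (col, num)):
                    return True
        return False

    return search(frozenset(), None)


# Playable as red-1, red-3, blue-3; adding ("green", 7) would make it infeasible.
print(can_play_all([("red", 1), ("red", 3), ("blue", 3)]))  # True
```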

Relevance:

60.00%

Publisher:

Abstract:

We address the parameterized complexity of Max Colorable Induced Subgraph on perfect graphs. The problem asks for a maximum-sized q-colorable induced subgraph of an input graph G. Yannakakis and Gavril [IPL 1987] showed that this problem is NP-complete even on split graphs if q is part of the input, but gave an n^{O(q)} algorithm on chordal graphs. We first observe that the problem is W[2]-hard parameterized by q, even on split graphs. However, when parameterized by l, the number of vertices in the solution, we give two fixed-parameter tractable algorithms. The first algorithm runs in time 5.44^l (n + #α(G))^{O(1)}, where #α(G) is the number of maximal independent sets of the input graph. The second algorithm runs in time q^{l+o(l)} n^{O(1)} T_α, where T_α is the time required to find a maximum independent set in any induced subgraph of G. The first algorithm is efficient when the input graph contains only polynomially many maximal independent sets; for example, split graphs and co-chordal graphs. The running time of the second algorithm is FPT in l alone (whenever T_α is polynomial in n), since q <= l in all non-trivial situations. Finally, we show that (under standard complexity-theoretic assumptions) the problem does not admit a polynomial kernel on split and perfect graphs in the following sense: (a) on split graphs, we do not expect a polynomial kernel if q is part of the input; (b) on perfect graphs, we do not expect a polynomial kernel even for fixed values of q >= 2.
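
For intuition, the underlying decision question (is there an induced subgraph on l vertices that is properly q-colorable?) can be checked by brute force over vertex subsets. The sketch below, using an adjacency-dictionary format of our own, only pins down that problem statement; it is not either of the FPT algorithms above.

```python
from itertools import combinations

def has_q_colorable_subgraph(adj, q, l):
    """Brute-force check: does the graph (adjacency dict: vertex -> set of
    neighbours) contain an induced subgraph on l vertices that is properly
    q-colorable?  Exponential; illustration of the problem definition only."""
    def colorable(subset):
        subset = list(subset)
        color = {}

        def assign(i):
            if i == len(subset):
                return True
            v = subset[i]
            for c in range(q):
                # Only neighbours that are already coloured (inside the subset) constrain v.
                if all(color.get(u) != c for u in adj[v]):
                    color[v] = c
                    if assign(i + 1):
                        return True
                    del color[v]
            return False

        return assign(0)

    return any(colorable(s) for s in combinations(adj, l))


# A 4-cycle a-b-c-d is 2-colorable, so any 3 of its vertices certainly are.
adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(has_q_colorable_subgraph(adj, q=2, l=3))  # True
```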

Relevance:

60.00%

Publisher:

Abstract:

The problem of finding an optimal vertex cover in a graph is a classic NP-complete problem, and is a special case of the hitting set question. On the other hand, the hitting set problem, when asked in the context of induced geometric objects, often turns out to be exactly the vertex cover problem on restricted classes of graphs. In this work we explore a particular instance of such a phenomenon. We consider the problem of hitting all axis-parallel slabs induced by a point set P, and show that it is equivalent to the problem of finding a vertex cover on a graph whose edge set is the union of two Hamiltonian paths. We show the latter problem to be NP-complete, and also give an algorithm to find a vertex cover of size at most k, on graphs of maximum degree four, whose running time is 1.2637^k n^{O(1)}.
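
The parameterized question being solved is the standard one: is there a set of at most k vertices touching every edge? The sketch below is the textbook 2^k search tree, included only to make that question concrete; it is not the 1.2637^k degree-four algorithm of the abstract.

```python
def has_vertex_cover(edges, k):
    """Classic bounded search tree for Vertex Cover: pick any uncovered edge
    (u, v); some cover must contain u or v, so branch on both choices.
    Roughly O(2^k * |E|) time.  Illustration only."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    without_u = [(a, b) for (a, b) in edges if u not in (a, b)]
    without_v = [(a, b) for (a, b) in edges if v not in (a, b)]
    return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)


# A triangle needs two vertices to cover all three edges.
triangle = [(1, 2), (2, 3), (1, 3)]
print(has_vertex_cover(triangle, 1), has_vertex_cover(triangle, 2))  # False True
```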

Relevance:

60.00%

Publisher:

Abstract:

The problem of delay-constrained, energy-efficient broadcast in cooperative wireless networks is NP-complete. While the centralised setting allows some heuristic solutions, designing heuristics for a distributed implementation poses significant challenges. This is even more so in wireless sensor networks (WSNs), where nodes are deployed randomly and the topology changes dynamically due to node failures/joins and environmental conditions. This paper demonstrates that careful design of the network infrastructure can achieve guaranteed delay bounds and energy efficiency, and even meet quality-of-service requirements during broadcast. The paper makes three prime contributions. First, we present an optimal lower bound on energy consumption for broadcast, tighter than previously proposed bounds. Next, we discuss iSteiner, a lightweight, distributed, and deterministic algorithm for creating the network infrastructure. Finally, iPercolate exploits this structure to cooperatively broadcast information with guaranteed delivery and delay bounds, while allowing real-time traffic to pass undisturbed.

Relevance:

60.00%

Publisher:

Abstract:

The Exact Cover problem takes a universe U of n elements, a family F of m subsets of U, and a positive integer k, and decides whether there exists a subfamily (set cover) F' of size at most k such that each element is covered by exactly one set. The Unique Cover problem takes the same input and decides whether there is a subfamily F' of F such that at least k of the elements F' covers are covered uniquely (by exactly one set). Both problems are known to be NP-complete. In the parameterized setting, when parameterized by k, Exact Cover is W[1]-hard. While Unique Cover is FPT under the same parameter, it is known not to admit a polynomial kernel under standard complexity-theoretic assumptions. In this paper, we investigate these two problems under the assumption that every set satisfies a given geometric property Π. Specifically, we consider the universe to be a set of n points in a real space R^d, d being a positive integer. When d = 2, we consider the problem where Π requires all sets to be unit squares or lines. When d > 2, we consider the problem where Π requires all sets to be hyperplanes in R^d. These special versions of the problems are also known to be NP-complete. When parameterized by k, the Unique Cover problem has a polynomial-size kernel for all the above geometric versions. The Exact Cover problem turns out to be W[1]-hard for squares, but FPT for lines and hyperplanes. Further, we also consider the Unique Set Cover problem, which takes the same input and decides whether there is a set cover which covers at least k elements uniquely. To the best of our knowledge, this is a new problem, and we show that it is NP-complete (even for the case of lines). In fact, the problem turns out to be W[1]-hard in the abstract setting, when parameterized by k. However, when we restrict ourselves to the lines and hyperplanes versions, we obtain FPT algorithms.
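
To fix the two definitions, the sketch below checks Exact Cover by brute force and also counts uniquely covered elements (the quantity Unique Cover asks to make at least k). It is purely an illustration of the abstract problem statements, not of the kernels or FPT algorithms above.

```python
from itertools import combinations

def exact_cover_exists(universe, family, k):
    """Is there a subfamily of at most k sets covering every element exactly once?
    Brute force over subfamilies; illustration only."""
    universe = set(universe)
    for size in range(k + 1):
        for sub in combinations(family, size):
            counts = {}
            for s in sub:
                for x in s:
                    counts[x] = counts.get(x, 0) + 1
            if all(counts.get(x, 0) == 1 for x in universe):
                return True
    return False

def uniquely_covered(sub):
    """Number of elements covered by exactly one set of the subfamily `sub`
    (the objective that Unique Cover asks to make at least k)."""
    counts = {}
    for s in sub:
        for x in s:
            counts[x] = counts.get(x, 0) + 1
    return sum(1 for c in counts.values() if c == 1)


U = {1, 2, 3, 4}
F = [{1, 2}, {3, 4}, {2, 3}]
print(exact_cover_exists(U, F, 2))          # True: {1,2} and {3,4}
print(uniquely_covered([{1, 2}, {2, 3}]))   # 2: elements 1 and 3
```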

Relevance:

60.00%

Publisher:

Abstract:

A classical question in combinatorics is the following: given a partial Latin square $P$, when can we complete $P$ to a Latin square $L$? In this paper, we investigate the class of \textbf{$\epsilon$-dense partial Latin squares}: partial Latin squares in which each symbol, row, and column contains no more than $\epsilon n$-many nonblank cells. Based on a conjecture of Nash-Williams, Daykin and H\"aggkvist conjectured that all $\frac{1}{4}$-dense partial Latin squares are completable. In this paper, we will discuss the proof methods and results used in previous attempts to resolve this conjecture, introduce a novel technique derived from a paper by Jacobson and Matthews on generating random Latin squares, and use this novel technique to study $\epsilon$-dense partial Latin squares that contain no more than $\delta n^2$ filled cells in total.

In Chapter 2, we construct completions for all $\epsilon$-dense partial Latin squares containing no more than $\delta n^2$ filled cells in total, given that $\epsilon < \frac{1}{12}$ and $\delta < \frac{\left(1-12\epsilon\right)^{2}}{10409}$. In particular, we show that all $9.8 \cdot 10^{-5}$-dense partial Latin squares are completable. In Chapter 4, we augment these results by roughly a factor of two using some probabilistic techniques. These results improve prior work by Gustavsson, which required $\epsilon = \delta \leq 10^{-7}$, as well as Chetwynd and H\"aggkvist, which required $\epsilon = \delta = 10^{-5}$, $n$ even and greater than $10^7$.

If we omit the probabilistic techniques noted above, we further show that such completions can always be found in polynomial time. This contrasts with a result of Colbourn, which states that completing arbitrary partial Latin squares is an NP-complete task. In Chapter 3, we strengthen Colbourn's result to the claim that completing an arbitrary $\left(\frac{1}{2} + \epsilon\right)$-dense partial Latin square is NP-complete, for any $\epsilon > 0$.
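
As a point of contrast with the hardness result, a completion of a partial Latin square can always be searched for by plain backtracking, which is exponential in the worst case (consistent with Colbourn's NP-completeness result), whereas the chapter shows that the sparse instances above admit polynomial-time completion. The backtracking sketch below is our own toy illustration of what completing a partial Latin square means, not the method of the thesis; it also assumes the prefilled cells contain no clashes.

```python
def completable(partial, n):
    """Backtracking check of whether a partial Latin square extends to a full
    Latin square.  `partial` maps (row, col) -> symbol, with rows, columns and
    symbols in range(n).  Exponential in the worst case; illustration only."""
    grid = [[partial.get((r, c)) for c in range(n)] for r in range(n)]

    def fits(r, c, s):
        return all(grid[r][j] != s for j in range(n)) and \
               all(grid[i][c] != s for i in range(n))

    def solve(pos):
        if pos == n * n:
            return True
        r, c = divmod(pos, n)
        if grid[r][c] is not None:
            return solve(pos + 1)
        for s in range(n):
            if fits(r, c, s):
                grid[r][c] = s
                if solve(pos + 1):
                    return True
                grid[r][c] = None
        return False

    return solve(0)


# The 2x2 square with 0 at (0,0) and 1 at (1,1) cannot be completed.
print(completable({(0, 0): 0}, 3))              # True
print(completable({(0, 0): 0, (1, 1): 1}, 2))   # False
```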

Colbourn's result hinges heavily on a connection between triangulations of tripartite graphs and Latin squares. Motivated by this, we use our results on Latin squares to prove that any tripartite graph $G = (V_1, V_2, V_3)$ such that (i) $|V_1| = |V_2| = |V_3| = n$, (ii) for every vertex $v \in V_i$, $\deg_+(v) = \deg_-(v) \geq (1-\epsilon)n$, and (iii) $|E(G)| > (1 - \delta)\cdot 3n^2$, admits a triangulation, provided $\epsilon < \frac{1}{132}$ and $\delta < \frac{(1 - 132\epsilon)^2}{83272}$. In particular, this holds when $\epsilon = \delta = 1.197 \cdot 10^{-5}$.

This strengthens results of Gustavsson, which require $\epsilon = \delta = 10^{-7}$.

In an unrelated vein, Chapter 6 explores the class of \textbf{quasirandom graphs}, a notion first introduced by Chung, Graham and Wilson \cite{chung1989quasi} in 1989. Roughly speaking, a sequence of graphs is called ``quasirandom'' if it has a number of properties possessed by the random graph, all of which turn out to be equivalent. In this chapter, we study possible extensions of these results to random $k$-edge colorings, and create an analogue of Chung, Graham and Wilson's result for such colorings.

Relevance:

60.00%

Publisher:

Abstract:

Data allocation, the problem of deciding how data is distributed across multiple physical nodes, is NP-complete. We present a mathematical model for data allocation algorithms and propose DATE, a data allocation algorithm based on evaluating time-series segments. Exploiting the fact that data access volumes are unevenly distributed over short time windows, the algorithm transforms the multi-objective optimization problem into a single-objective one, and uses a collective honey bee behavior algorithm to adjust parameters and feed back the results, so as to balance the system load. Randomized experiments show that, compared with similar algorithms (Random, round-robin, and Bubba), DATE performs better on three metrics: overall balance across time periods (ET), balance within a time period (ES), and the maximum system peak (EM).

Relevance:

60.00%

Publisher:

Abstract:

This paper presents several approximation algorithms for the 0-1 knapsack problem, all of which are improvements of the Greedy algorithm. We compute and analyze 100 examples, and the results are satisfactory.
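
For reference, the greedy baseline such algorithms start from sorts items by value-to-weight ratio and packs while room remains; the usual refinement also compares the result against the single most valuable item that fits, which yields the classical 1/2-approximation. The sketch below shows only that baseline, not the paper's specific improvements.

```python
def greedy_knapsack(items, capacity):
    """Greedy baseline for 0-1 knapsack: items is a list of (value, weight)
    pairs with positive weights.  Pack in decreasing value/weight order, then
    take the better of the packed total and the single best item that fits
    (classical 1/2-approximation).  Illustration only."""
    total_value = total_weight = 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if total_weight + weight <= capacity:
            total_value += value
            total_weight += weight
    best_single = max((v for v, w in items if w <= capacity), default=0)
    return max(total_value, best_single)


# Ratio-greedy alone would pick the (6, 1) item and miss the (59, 10) item.
print(greedy_knapsack([(6, 1), (59, 10)], capacity=10))  # 59
```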

Relevance:

60.00%

Publisher:

Abstract:

The primary goal of this report is to demonstrate how considerations from computational complexity theory can inform grammatical theorizing. To this end, generalized phrase structure grammar (GPSG) linguistic theory is revised so that its power more closely matches the limited ability of an ideal speaker-hearer: GPSG Recognition is EXP-POLY time hard, while Revised GPSG Recognition is NP-complete. A second goal is to provide a theoretical framework within which to better understand the wide range of existing GPSG models, embodied in formal definitions as well as in implemented computer programs. A grammar for English and an informal explanation of the GPSG/RGPSG syntactic features are included in appendices.

Relevance:

60.00%

Publisher:

Abstract:

Interdomain routing on the Internet is performed using route preference policies specified independently and arbitrarily by each Autonomous System (AS) in the network. These policies are used in the Border Gateway Protocol (BGP) by each AS when selecting next-hop choices for routes to each destination. Conflicts between policies used by different ASs can lead to routing instabilities that, potentially, cannot be resolved no matter how long BGP is run. The Stable Paths Problem (SPP) is an abstract graph-theoretic model of the problem of selecting next-hop routes for a destination. A stable solution to the problem is a set of next-hop choices, one for each AS, that is compatible with the policies of each AS. In a stable solution each AS has selected its best next-hop given that the next-hop choices of all neighbors are fixed. BGP can be viewed as a distributed algorithm for solving SPP. In this report we consider the stable paths problem, as well as a family of restricted variants which we call F stable paths problems. We show that two very simple variants of the stable paths problem are also NP-complete. In addition we show that for networks with a DAG topology, there is an efficient centralized algorithm to solve the stable paths problem, and that BGP always efficiently converges to a stable solution on such networks.
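
To make the notion of a stable solution concrete, the toy checker below verifies stability of a given path assignment in a path-ranking encoding of our own devising (permitted paths per node, best first); it is only an illustration of the definition, not of BGP or of any algorithm from the report.

```python
def is_stable(permitted, assignment, dest):
    """Toy stability check for a Stable Paths-style instance.  `permitted[v]`
    ranks v's permitted paths to `dest` (tuples starting with v and ending with
    dest), best first; the empty path () is always allowed as a last resort.
    `assignment[v]` is v's current path (or ()).  Stable means: every v holds
    the best permitted path of the form (v,) + assignment[u], given its
    neighbours' current choices.  Hypothetical encoding; illustration only."""
    assignment = {**assignment, dest: (dest,)}
    for v, ranked in permitted.items():
        available = [p for p in ranked if p[1:] == assignment.get(p[1], ())]
        best = available[0] if available else ()
        if assignment.get(v, ()) != best:
            return False
    return True


# The classic two-node example: a prefers a-b-d over a-d, b prefers b-a-d over b-d.
permitted = {"a": [("a", "b", "d"), ("a", "d")],
             "b": [("b", "a", "d"), ("b", "d")]}
print(is_stable(permitted, {"a": ("a", "d"), "b": ("b", "a", "d")}, "d"))  # True
print(is_stable(permitted, {"a": ("a", "d"), "b": ("b", "d")}, "d"))       # False: a would switch
```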

Relevance:

60.00%

Publisher:

Abstract:

Controlling the mobility pattern of mobile nodes (e.g., robots) to monitor a given field is a well-studied problem in sensor networks. In this setup, absolute control over the nodes' mobility is assumed. Apart from the physical ones, no other constraints are imposed on planning the mobility of these nodes. In this paper, we address a more general version of the problem. Specifically, we consider a setting in which the mobility of each node is externally constrained by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which could be leveraged to achieve a specific coverage distribution of a field. Such a distribution defines the relative importance of different field locations. We define the Constrained Mobility Coordination problem for Preferential Coverage (CMC-PC) as follows: given a field with a desired monitoring distribution, and a number of nodes n, each with its own schedule, we need to coordinate the mobility of the nodes in order to achieve the following two goals: 1) satisfy the schedules of all nodes, and 2) attain the required coverage of the given field. We show that the CMC-PC problem is NP-complete (by reduction from the Hamiltonian Cycle problem). We then propose TFM, a distributed heuristic that achieves a field coverage as close as possible to the required coverage distribution. We verify the premise of TFM using extensive simulations, as well as taxi logs from a major metropolitan area. We compare TFM to the random mobility strategy, which provides a lower bound on performance. Our results show that TFM is very successful in matching the required field coverage distribution, and that it provides at least a two-fold query success ratio for queries that follow the target coverage distribution of the field.

Relevance:

60.00%

Publisher:

Abstract:

Most real-time scheduling problems are known to be NP-complete. To enable accurate comparison between the schedules of heuristic algorithms and the optimal schedule, we introduce an omniscient oracle. This oracle provides schedules for periodic task sets with harmonic periods and variable resource requirements. Three different job value functions are described and implemented. Each corresponds to a different system goal. The oracle is used to examine the performance of different on-line schedulers under varying loads, including overload. We have compared the oracle against Rate Monotonic Scheduling, Statistical Rate Monotonic Scheduling, and Slack Stealing Job Admission Control Scheduling. Consistently, the oracle provides an upper bound on performance for the metric under consideration.
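
As background for the baseline schedulers mentioned, the classic sufficient schedulability test for Rate Monotonic Scheduling (Liu and Layland) bounds total utilization by n(2^(1/n) - 1). The quick sketch below shows only that textbook test; it is unrelated to the paper's oracle.

```python
def rm_utilization_test(tasks):
    """Liu & Layland sufficient test for Rate Monotonic Scheduling: a set of
    periodic tasks given as (execution_time, period) pairs is schedulable under
    RMS if total utilization does not exceed n * (2**(1/n) - 1).  Sufficient
    but not necessary; background only, not part of the paper's oracle."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)


# Two tasks at 82% total utilization pass the n=2 bound of about 0.828.
print(rm_utilization_test([(1, 4), (2.85, 5)]))  # True
```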

Relevance:

60.00%

Publisher:

Abstract:

Existing type systems for object calculi are based on invariant subtyping. Subtyping invariance is required for soundness of static typing in the presence of method overrides, but it is often in the way of the expressive power of the type system. Flexibility of static typing can be recovered in different ways: in first-order systems, by the adoption of object types with variance annotations; in second-order systems, by resorting to Self types. Type inference is known to be P-complete for first-order systems of finite and recursive object types, and NP-complete for a restricted version of Self types. The complexity of type inference for systems with variance annotations is still unknown. This paper presents a new object type system based on the notion of Split types, a form of object types where every method is assigned two types, namely, an update type and a select type. The subtyping relation that arises for Split types is variant and, as a result, subtyping can be performed both in width and in depth. The new type system generalizes all the existing first-order type systems for objects, including systems based on variance annotations. Interestingly, the additional expressive power does not affect the complexity of the type inference problem, as we show by presenting an O(n^3) inference algorithm.

Relevance:

60.00%

Publisher:

Abstract:

Graph partitioning divides a graph into several pieces by cutting edges. Very effective heuristic partitioning algorithms have been developed which run in real time, but it is unknown how good the partitions are, since the problem is, in general, NP-complete. This paper reports on an evolutionary search algorithm for finding benchmark partitions. Distinctive features are the transmission and modification of whole subdomains (the partitioned units) that act as genes, and the use of a multilevel heuristic algorithm to effect the crossover and mutations. Its effectiveness is demonstrated by improvements on previously established benchmarks.
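
The quantity being minimized here is simply the edge cut of a partition; the tiny helper below (ours, purely illustrative) pins that objective down and is not the paper's evolutionary algorithm.

```python
def edge_cut(edges, part):
    """Edge cut of a partition: the number of edges whose endpoints lie in
    different pieces (part maps vertex -> piece id).  This is the objective
    heuristic and evolutionary partitioners try to minimise; helper only."""
    return sum(1 for u, v in edges if part[u] != part[v])


# Splitting the path 1-2-3-4 between vertices 2 and 3 cuts exactly one edge.
print(edge_cut([(1, 2), (2, 3), (3, 4)], {1: 0, 2: 0, 3: 1, 4: 1}))  # 1
```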