993 results for Approximation algorithms


Relevance:

60.00%

Publisher:

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs, that is, geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
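
For reference, a max-min linear program of the kind mentioned above can be written in the following generic form (a textbook-style formulation, not tied to this thesis's specific network model), where each objective row k can be read as the lifetime of one battery-powered node and the variables x_v as data flows:

\begin{align*}
\text{maximise} \quad & \min_{k} \sum_{v} c_{kv} x_v \\
\text{subject to} \quad & \sum_{v} a_{iv} x_v \le 1 && \text{for each constraint } i, \\
& x_v \ge 0 && \text{for each variable } v,
\end{align*}

with all coefficients $a_{iv}, c_{kv} \ge 0$.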

Relevance:

60.00%

Publisher:

Abstract:

The problem of estimating the time-dependent statistical characteristics of a random dynamical system is studied under two different settings. In the first, the system dynamics is governed by a differential equation parameterized by a random parameter, while in the second, this is governed by a differential equation with an underlying parameter sequence characterized by a continuous time Markov chain. We propose, for the first time in the literature, stochastic approximation algorithms for estimating various time-dependent process characteristics of the system. In particular, we provide efficient estimators for quantities such as the mean, variance and distribution of the process at any given time as well as the joint distribution and the autocorrelation coefficient at different times. A novel aspect of our approach is that we assume that information on the parameter model (i.e., its distribution in the first case and transition probabilities of the Markov chain in the second) is not available in either case. This is unlike most other work in the literature that assumes availability of such information. Also, most of the prior work in the literature is geared towards analyzing the steady-state system behavior of the random dynamical system while our focus is on analyzing the time-dependent statistical characteristics which are in general difficult to obtain. We prove the almost sure convergence of our stochastic approximation scheme in each case to the true value of the quantity being estimated. We provide a general class of strongly consistent estimators for the aforementioned statistical quantities with regular sample average estimators being a specific instance of these. We also present an application of the proposed scheme on a widely used model in population biology. Numerical experiments in this framework show that the time-dependent process characteristics as obtained using our algorithm in each case exhibit excellent agreement with exact results. (C) 2010 Elsevier Inc. All rights reserved.
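
As a minimal illustration of the stochastic-approximation idea (not the paper's actual estimators, which also handle variances, distributions, joint distributions and autocorrelations), the hypothetical Python sketch below tracks the mean of the process value X_t at a fixed time t from independent simulation runs via a Robbins-Monro-type recursion:

import math
import random

def sa_mean_estimate(simulate_once, num_runs=10_000):
    # Estimate E[X_t] at a fixed time t via the stochastic approximation
    # recursion m_{n+1} = m_n + a_n * (X_{n+1} - m_n); with step sizes
    # a_n = 1/(n+1) this reduces to the running sample average.
    m = 0.0
    for n in range(num_runs):
        x = simulate_once()        # one independent sample of X_t
        m += (x - m) / (n + 1)     # Robbins-Monro update with a_n = 1/(n+1)
    return m

# Hypothetical usage: a toy random dynamical system x(t) = exp(-theta * t),
# x(0) = 1, with a random parameter theta ~ Uniform(0.5, 1.5), observed at t = 2.
t = 2.0
print(sa_mean_estimate(lambda: math.exp(-random.uniform(0.5, 1.5) * t)))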

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we give a generalization of a result by Borkar and Meyn (2000) [1], on the stability and convergence of synchronous-update stochastic approximation algorithms, to the case of asynchronous stochastic approximations with delays. We then describe an interesting application of the result to asynchronous distributed temporal difference (TD) learning with function approximation and delays. (C) 2011 Elsevier B.V. All rights reserved.
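
As a concrete reminder of the setting, the sketch below shows the plain synchronous TD(0) recursion with linear function approximation (the standard textbook update; the paper analyses an asynchronous, delayed variant of such schemes). The feature map phi and the episode format are assumptions for illustration.

import numpy as np

def td0_linear(episodes, phi, num_features, alpha=0.05, gamma=0.95):
    # TD(0) with linear function approximation: V(s) ~ phi(s) . theta, updated by
    # theta <- theta + alpha * (r + gamma * phi(s') . theta - phi(s) . theta) * phi(s).
    theta = np.zeros(num_features)
    for episode in episodes:                  # each episode: list of (s, r, s_next)
        for s, r, s_next in episode:
            delta = r + gamma * phi(s_next) @ theta - phi(s) @ theta
            theta += alpha * delta * phi(s)   # stochastic approximation step
    return theta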

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we consider the problem of selecting, for any given positive integer k, the top-k nodes in a social network, based on a certain measure appropriate for the social network. This problem is relevant in many settings such as analysis of co-authorship networks, diffusion of information, viral marketing, etc. However, in most situations, this problem turns out to be NP-hard. The existing approaches for solving this problem are based on approximation algorithms and assume that the objective function is sub-modular. In this paper, we propose a novel and intuitive algorithm based on the Shapley value, for efficiently computing an approximate solution to this problem. Our proposed algorithm does not use the sub-modularity of the underlying objective function and hence it is a general approach. We demonstrate the efficacy of the algorithm using a co-authorship data set from e-print arXiv (www.arxiv.org), having 8361 authors.
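
The sketch below illustrates the generic Monte Carlo estimator for Shapley values on which such rankings are based (random permutations, averaged marginal contributions); it is not the paper's exact algorithm, and marginal_gain is a hypothetical placeholder that should return the increase of the chosen influence/coverage objective when node v joins a coalition.

import random

def shapley_top_k(nodes, marginal_gain, k, num_permutations=200):
    # Approximate each node's Shapley value by averaging its marginal contribution
    # over random orderings, then return the k highest-valued nodes.
    shapley = {v: 0.0 for v in nodes}
    for _ in range(num_permutations):
        order = list(nodes)
        random.shuffle(order)
        coalition = set()
        for v in order:
            shapley[v] += marginal_gain(coalition, v)  # gain of v given its predecessors
            coalition.add(v)
    for v in shapley:
        shapley[v] /= num_permutations
    return sorted(nodes, key=lambda v: shapley[v], reverse=True)[:k]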

Relevance:

60.00%

Publisher:

Abstract:

For a fixed positive integer k, a k-tuple total dominating set of a graph G = (V, E) is a subset TD_k of V such that every vertex in V is adjacent to at least k vertices of TD_k. In the minimum k-tuple total dominating set problem (MIN k-TUPLE TOTAL DOM SET), it is required to find a k-tuple total dominating set of minimum cardinality, and DECIDE MIN k-TUPLE TOTAL DOM SET is the decision version of the MIN k-TUPLE TOTAL DOM SET problem. In this paper, we show that DECIDE MIN k-TUPLE TOTAL DOM SET is NP-complete for split graphs, doubly chordal graphs and bipartite graphs. For chordal bipartite graphs, we show that MIN k-TUPLE TOTAL DOM SET can be solved in polynomial time. We also present some hardness results and approximation algorithms for the MIN k-TUPLE TOTAL DOM SET problem. (c) 2012 Elsevier B.V. All rights reserved.
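
For concreteness, MIN k-TUPLE TOTAL DOM SET can be written directly from the definition as an integer program over indicator variables $x_v$, with $N(v)$ the open neighbourhood of $v$:

\begin{align*}
\min \quad & \sum_{v \in V} x_v \\
\text{s.t.} \quad & \sum_{u \in N(v)} x_u \ge k && \text{for all } v \in V, \\
& x_v \in \{0, 1\} && \text{for all } v \in V.
\end{align*}

Relaxing $x_v \in \{0,1\}$ to $0 \le x_v \le 1$ gives the covering LP that is typically a starting point for approximation algorithms of this kind.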

Relevance:

60.00%

Publisher:

Abstract:

The Lovász θ function of a graph is a fundamental tool in combinatorial optimization and approximation algorithms. Computing θ involves solving an SDP and is extremely expensive even for moderately sized graphs. In this paper we establish that the Lovász θ function is equivalent to a kernel learning problem related to one-class SVM. This interesting connection opens up many opportunities for bridging graph-theoretic algorithms and machine learning. We show that there exist graphs, which we call SVM-θ graphs, on which the Lovász θ function can be approximated well by a one-class SVM. This leads to a novel use of SVM techniques to solve algorithmic problems in large graphs, e.g., identifying a planted clique of size Θ(√n) in a random graph G(n, 1/2). A classic approach for this problem involves computing the θ function; however, it is not scalable due to the SDP computation. We show that a random graph with a planted clique is an example of an SVM-θ graph, and as a consequence an SVM-based approach easily identifies the clique in large graphs and is competitive with the state of the art. Further, we introduce the notion of a "common orthogonal labelling", which extends the notion of an orthogonal labelling of a single graph (used in defining the θ function) to multiple graphs. The problem of finding the optimal common orthogonal labelling is cast as a multiple kernel learning problem and is used to identify a large common dense region in multiple graphs. The proposed algorithm achieves an order-of-magnitude improvement in scalability over the state of the art.
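
For reference, one standard SDP formulation of the Lovász θ function (the SDP whose cost motivates the SVM-based approximation described above) is

\[
\vartheta(G) \;=\; \max\Bigl\{\, \langle J, X \rangle \;:\; X \succeq 0,\ \operatorname{tr}(X) = 1,\ X_{ij} = 0 \ \text{for all } \{i,j\} \in E(G) \,\Bigr\},
\]

where $J$ is the all-ones matrix; it satisfies the sandwich bounds $\alpha(G) \le \vartheta(G) \le \chi(\overline{G})$.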

Relevance:

60.00%

Publisher:

Abstract:

Rainbow connection number, rc(G), of a connected graph G is the minimum number of colors needed to color its edges so that every pair of vertices is connected by at least one path in which no two edges are colored the same (note that the coloring need not be proper). In this paper we study the rainbow connection number with respect to three important graph product operations (namely the Cartesian product, the lexicographic product and the strong product) and the operation of taking the power of a graph. In this direction, we show that if G is a graph obtained by applying any of the operations mentioned above to non-trivial graphs, then rc(G) ≤ 2r(G) + c, where r(G) denotes the radius of G and c is a small additive constant. In general, the rainbow connection number of a bridgeless graph can be as high as the square of its radius [1]. This is an attempt to identify some graph classes which have rainbow connection number very close to the obvious lower bound of the diameter (and thus the radius). The bounds reported are tight up to additive constants. The proofs are constructive and hence yield polynomial-time approximation algorithms.

Relevance:

60.00%

Publisher:

Abstract:

This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for symmetric positive-semidefinite (SPSD) matrices.

Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.
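
A minimal sketch of the basic nonuniform sparsification idea, with keep-probabilities proportional to entry magnitudes and unbiased rescaling (a simple representative of this family, not the thesis's exact schemes):

import numpy as np

def sparsify(A, s, rng=None):
    # Keep (in expectation) about s entries of A, sampling entry (i, j) with
    # probability proportional to |A[i, j]| and rescaling kept entries by 1/p_ij,
    # so the sparse matrix is an unbiased estimator of A.  Assumes A is nonzero.
    rng = np.random.default_rng() if rng is None else rng
    p = np.abs(A) / np.abs(A).sum()       # magnitude-proportional probabilities
    p_keep = np.minimum(1.0, s * p)
    mask = rng.random(A.shape) < p_keep
    S = np.zeros(A.shape)
    S[mask] = A[mask] / p_keep[mask]      # unbiased rescaling of kept entries
    return S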

Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.
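
The sketch below shows the general shape of such an approximation, using a subsampled randomized Fourier-transform-style mixing of the columns (one common member of this family; the complex FFT and the real-part simplification are illustrative choices, not the thesis's exact transform):

import numpy as np

def randomized_lowrank(A, ell, rng=None):
    # Rank-<= ell approximation of A (m x n): scale columns by random signs,
    # mix them with an FFT, subsample ell mixed columns to form a sketch Y,
    # orthonormalise Y, and project: A ~= Q @ (Q.T @ A).  Requires ell <= n.
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    signs = rng.choice([-1.0, 1.0], size=n)          # random +-1 diagonal
    cols = rng.choice(n, size=ell, replace=False)    # uniform column subsample
    Y = np.fft.fft(A * signs, axis=1)[:, cols].real  # real part kept for simplicity
    Q, _ = np.linalg.qr(Y)                           # orthonormal basis for range(Y)
    return Q, Q.T @ A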

The last class of algorithms considered consists of SPSD "sketching" algorithms. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.
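
One widely used member of this family is the Nyström-type column sketch, shown here for orientation (a textbook construction; the thesis's framework covers this and other sketching distributions):

import numpy as np

def nystrom_sketch(A, ell, rng=None):
    # SPSD sketch of an n x n matrix A: sample ell columns C = A[:, S] and
    # return the Nystrom-style approximation A ~= C @ pinv(A[S, S]) @ C.T.
    rng = np.random.default_rng() if rng is None else rng
    S = rng.choice(A.shape[0], size=ell, replace=False)
    C = A[:, S]
    W = A[np.ix_(S, S)]
    return C @ np.linalg.pinv(W) @ C.T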

In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.
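
For orientation, one common statement of the classical matrix Chernoff bound for the largest eigenvalue (the kind of inequality that the all-eigenvalue extension generalises) is the following: for independent random positive-semidefinite $d \times d$ matrices $X_k$ with $\lambda_{\max}(X_k) \le R$ almost surely and $\mu_{\max} = \lambda_{\max}\bigl(\sum_k \mathbb{E} X_k\bigr)$,

\[
\Pr\Bigl\{\lambda_{\max}\Bigl(\sum_k X_k\Bigr) \ge (1+\delta)\,\mu_{\max}\Bigr\}
\;\le\; d \left[\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right]^{\mu_{\max}/R},
\qquad \delta \ge 0.
\]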

Relevance:

60.00%

Publisher:

Abstract:

Wydział Matematyki i Informatyki (Faculty of Mathematics and Computer Science): Zakład Matematyki Dyskretnej (Department of Discrete Mathematics)

Relevance:

60.00%

Publisher:

Abstract:

In this paper we discuss a new type of query in spatial databases, called the Trip Planning Query (TPQ). Given a set of points P in space, where each point belongs to a category, and given two points s and e, TPQ asks for the best trip that starts at s, passes through exactly one point from each category, and ends at e. An example of a TPQ is when a user wants to visit a set of different places and at the same time minimize the total travelling cost, e.g., what is the shortest travel plan that visits an automobile shop, a CVS pharmacy outlet, and a Best Buy shop along my trip from A to B? The trip planning query is an extension of the well-known travelling salesman problem (TSP) and is therefore NP-hard. The difficulty of this query lies in the existence of multiple choices for each category. In this paper, we first study fast approximation algorithms for the trip planning query in a metric space, assuming that the data set fits in main memory, and give a theoretical analysis of their approximation bounds. Then, the trip planning query is examined for data sets that do not fit in main memory and must be stored on disk. For the disk-resident data, we consider two cases. In one case, we assume that the points are located in Euclidean space and indexed with an R-tree. In the other case, we consider the problem of points that lie on the edges of a spatial network (e.g., a road network), where the distance between two points is defined using the shortest distance over the network. Finally, we give an experimental evaluation of the proposed algorithms using synthetic data sets generated on real road networks.
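
As a simple illustration of the flavour of heuristics involved, the sketch below greedily hops to the nearest point of a not-yet-visited category (a plain baseline in Euclidean space, not necessarily one of the paper's algorithms); categories are assumed to be non-empty.

from math import dist  # Euclidean distance (Python 3.8+)

def greedy_trip(start, end, points_by_category):
    # From the current location, repeatedly move to the nearest point of a
    # not-yet-visited category, then finish at the end point.
    # points_by_category maps a category name to a non-empty list of (x, y) points.
    trip, current = [start], start
    remaining = dict(points_by_category)
    while remaining:
        cat, point = min(
            ((c, p) for c, pts in remaining.items() for p in pts),
            key=lambda cp: dist(current, cp[1]),
        )
        trip.append(point)
        current = point
        del remaining[cat]
    trip.append(end)
    return trip

# Hypothetical usage:
# greedy_trip((0, 0), (10, 10),
#             {"pharmacy": [(2, 3), (8, 1)], "shop": [(5, 5)], "fuel": [(9, 9)]})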

Relevance:

60.00%

Publisher:

Abstract:

Large probabilistic graphs arise in various domains spanning from social networks to biological and communication networks. An important query in these graphs is the k nearest-neighbor query, which involves finding and reporting the k closest nodes to a specific node. This query assumes the existence of a measure of the "proximity" or the "distance" between any two nodes in the graph. To that end, we propose various novel distance functions that extend well known notions of classical graph theory, such as shortest paths and random walks. We argue that many meaningful distance functions are computationally intractable to compute exactly. Thus, in order to process nearest-neighbor queries, we resort to Monte Carlo sampling and exploit novel graph-transformation ideas and pruning opportunities. In our extensive experimental analysis, we explore the trade-offs of our approximation algorithms and demonstrate that they scale well on real-world probabilistic graphs with tens of millions of edges.
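
A minimal Monte Carlo sketch of this idea, for illustration only: each sample materialises a "possible world" by keeping every edge independently with its probability, hop distances are computed by BFS, and nodes are ranked by their median sampled distance (the paper's distance functions, graph transformations and pruning are considerably more refined).

import random
from collections import deque
from statistics import median

def knn_probabilistic(edges, source, k, num_samples=200):
    # edges: list of (u, v, prob) for an undirected probabilistic graph.
    inf = float("inf")
    nodes = {source} | {u for u, _, _ in edges} | {v for _, v, _ in edges}
    sampled = {v: [] for v in nodes}
    for _ in range(num_samples):
        adj = {v: [] for v in nodes}
        for u, v, p in edges:                    # sample one possible world
            if random.random() < p:
                adj[u].append(v)
                adj[v].append(u)
        d = {v: inf for v in nodes}
        d[source] = 0
        queue = deque([source])
        while queue:                             # BFS hop distances from source
            u = queue.popleft()
            for w in adj[u]:
                if d[w] == inf:
                    d[w] = d[u] + 1
                    queue.append(w)
        for v in nodes:
            sampled[v].append(d[v])
    score = {v: median(sampled[v]) for v in nodes if v != source}
    return sorted(score, key=score.get)[:k]      # k nodes with smallest median distance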

Relevance:

60.00%

Publisher:

Abstract:

This paper considers a variant of the classical problem of minimizing makespan in a two-machine flow shop. In this variant, each job has three operations, where the first operation must be performed on the first machine, the second operation can be performed on either machine but cannot be preempted, and the third operation must be performed on the second machine. The NP-hard nature of the problem motivates the design and analysis of approximation algorithms. It is shown that a schedule in which the operations are sequenced arbitrarily, but without inserted machine idle time, has a worst-case performance ratio of 2. Also, an algorithm that constructs four schedules and selects the best is shown to have a worst-case performance ratio of 3/2. A polynomial time approximation scheme (PTAS) is also presented.
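
For background, in the classical two-machine flow shop (the base problem that this variant extends with a flexible middle operation), the makespan of a given job permutation follows the standard recurrence, with $p_{1,j}$ and $p_{2,j}$ the processing times of job $j$ on machines 1 and 2 and $C_{1,0} = C_{2,0} = 0$:

\[
C_{1,j} = C_{1,j-1} + p_{1,j},
\qquad
C_{2,j} = \max\bigl(C_{2,j-1},\, C_{1,j}\bigr) + p_{2,j},
\qquad
C_{\max} = C_{2,n}.
\]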

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we consider the problem of providing flexibility to solutions of two-machine shop scheduling problems. We use the concept of group-scheduling to characterize a whole set of schedules so as to provide more choice to the decision-maker at any decision point. A group-schedule is a sequence of groups of permutable operations defined on each machine, where each group is such that any permutation of the operations inside the group leads to a feasible schedule. Flexibility of a solution and its makespan are often conflicting; thus, we search for a compromise between a low number of groups and a small value of makespan. We resolve the complexity status of the relevant problems for the two-machine flow shop, job shop and open shop. A number of approximation algorithms are developed and their worst-case performance is analyzed. For the flow shop, an effective heuristic algorithm is proposed and the results of computational experiments are reported.

Relevance:

60.00%

Publisher:

Abstract:

In distributed networks, it is often useful for the nodes to be aware of dense subgraphs; such subgraphs can reveal dense substructures in otherwise sparse graphs (e.g., the World Wide Web or social networks), which might correspond to community clusters or dense regions suitable for maintaining good communication infrastructure. In this work, we address the problem of self-awareness of nodes in a dynamic network with regard to graph density, i.e., we give distributed algorithms for maintaining dense subgraphs that the member nodes are aware of. The only knowledge that the nodes need is the dynamic diameter D, i.e., the maximum number of rounds it takes for a message to traverse the dynamic network. For our work, we consider a model where the number of nodes is fixed, but a powerful adversary can add or remove a limited number of edges from the network at each time step. The communication is by broadcast only and follows the CONGEST model. Our algorithms are continuously executed on the network, and at any time (after some initialization) each node is aware of whether or not it is part of a particular dense subgraph. We give algorithms that (2 + ε)-approximate the densest subgraph and (3 + ε)-approximate the at-least-k-densest subgraph (for a given parameter k). Our algorithms work for a wide range of parameter values and run in O(D log n) time. Further, a special case of our results also gives the first fully decentralized approximation algorithms for the densest and at-least-k-densest subgraph problems for static distributed graphs. © 2012 Springer-Verlag.
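
For reference, the classical sequential peeling procedure that gives a 2(1 + ε)-approximation for the densest subgraph, and on which distributed variants are commonly modelled (not necessarily the paper's exact protocol), is sketched below:

def densest_subgraph_peel(adj, eps=0.1):
    # 2(1 + eps)-approximate densest subgraph by peeling: repeatedly remove every
    # node whose degree is at most 2 * (1 + eps) * (|E| / |V|) in the current graph,
    # and remember the best density seen.  adj maps a node to a set of neighbours.
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    best_density, best_nodes = 0.0, set(adj)
    while adj:
        m = sum(len(nbrs) for nbrs in adj.values()) / 2
        density = m / len(adj)
        if density > best_density:
            best_density, best_nodes = density, set(adj)
        threshold = 2 * (1 + eps) * density
        for v in [v for v, nbrs in adj.items() if len(nbrs) <= threshold]:
            for w in adj[v]:
                if w in adj:                     # neighbour not yet peeled
                    adj[w].discard(v)
            del adj[v]                           # peel v
    return best_nodes, best_density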

Relevance:

60.00%

Publisher:

Abstract:

Visibility problems have many applications to real-world situations. Among the best known, and most exhaustively studied, are those involving the concepts of guarding and hiding in geometric structures (guarding and hiding problems). In this work, visibility problems are studied in geometric structures known as polygons, since these can appropriately represent many real objects and are easy to manipulate computationally. The goal of guarding problems is to determine the minimum number of positions at which to place devices in a given polygon so that these devices can "see" the entire polygon. Conversely, the goal of hiding problems is to determine the maximum number of positions in a given polygon such that no two positions can "see" each other. Unfortunately, most visibility problems in polygons are NP-hard, which gives rise to two lines of research: the development of algorithms that produce approximate solutions, and the determination of exact solutions for special classes of polygons. Following these two lines of research, the work is divided into two parts. In the first part, approximation algorithms, based essentially on metaheuristics and hybrid metaheuristics, are proposed to solve several visibility problems, in both arbitrary and orthogonal polygons. The problems studied are the following: the Maximum Hidden Vertex Set problem, the Minimum Vertex Guard Set problem, the Minimum Vertex Floodlight Set problem and the Minimum Vertex k-Modem Set problem. Methods are also developed to determine the approximation ratio of the proposed algorithms. For each problem, the presented algorithms are implemented and a statistical study is carried out to establish which algorithm obtains the best solutions within a reasonable time. This study shows that hybrid metaheuristics are, in general, the best strategies for solving the visibility problems studied. In the second part of this dissertation, the Minimum Vertex Guard Set, Maximum Hidden Set and Maximum Hidden Vertex Set problems are addressed, and several classes of polygons are identified and studied for which exact solutions and/or combinatorial bounds are determined.
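
For context, the Minimum Vertex Guard Set problem mentioned above can be stated as the following set-cover-style integer program (a standard generic formulation, not specific to this thesis), where $V(P)$ is the vertex set of the polygon $P$ and $\mathrm{vis}(v)$ is the part of $P$ visible from vertex $v$:

\begin{align*}
\min \quad & \sum_{v \in V(P)} x_v \\
\text{s.t.} \quad & \sum_{v \,:\, p \in \mathrm{vis}(v)} x_v \ge 1 && \text{for every point } p \in P, \\
& x_v \in \{0, 1\} && \text{for every } v \in V(P).
\end{align*}

The hiding problems go in the opposite direction: maximise the number of chosen positions subject to no two of them seeing each other.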