950 results for Cayley graphs
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are not only interested in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs – these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
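For orientation, a max-min linear program of the kind referred to above is commonly written in the following general form (a standard formulation given for reference, not quoted from the thesis): each x_j can be read as a flow variable, the constraints Ax <= 1 encode per-node energy budgets, and each objective c_k^T x is the utility of one agent.

```latex
\begin{aligned}
\text{maximise}\quad & \min_{k}\; \mathbf{c}_k^{\mathsf{T}} \mathbf{x} \\
\text{subject to}\quad & A\mathbf{x} \le \mathbf{1}, \\
& \mathbf{x} \ge \mathbf{0},
\end{aligned}
```

with all entries of A and of the vectors c_k assumed to be nonnegative.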
Abstract:
We present a local algorithm (constant-time distributed algorithm) for finding a 3-approximate vertex cover in bounded-degree graphs. The algorithm is deterministic, and no auxiliary information besides port numbering is required.
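For intuition about the approximation target, recall the classical sequential 2-approximation for vertex cover, which takes both endpoints of a greedily built maximal matching. The sketch below is only that baseline, given for comparison; it is not the deterministic constant-time distributed algorithm of the paper.

```python
def matching_vertex_cover(edges):
    """Classical 2-approximation: take both endpoints of a greedily
    built maximal matching. Sequential baseline only; the paper's
    algorithm is a deterministic constant-time distributed algorithm
    achieving factor 3 using nothing but port numbering."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # (u, v) joins the matching; both endpoints enter the cover
            cover.add(u)
            cover.add(v)
    return cover

# Example: a path 1-2-3-4-5
print(matching_vertex_cover([(1, 2), (2, 3), (3, 4), (4, 5)]))  # {1, 2, 3, 4}
```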
Abstract:
A new analytical model has been suggested for the hysteretic behaviour of beams. The model can be directly used in a response analysis without bothering to locate the precise point where the unloading commences. The model can efficiently simulate several types of realistic softening hysteretic loops. This is demonstrated by computing the response of cantilever beams under sinusoidal and random loadings. Results are presented in the form of graphs for maximum deflection, bending moment and shear.
Abstract:
Modelling of city traffic involves capturing all the dynamics that exist in real-time traffic. Probabilistic models and queuing theory have been used for mathematical representation of the traffic system. This paper proposes the concept of modelling the traffic system using bond graphs, wherein traffic flow is based on energy conservation. The proposed modelling approach uses switched junctions to model complex traffic networks. This paper presents the modelling, simulation and experimental validation aspects.
Abstract:
This work studies decision problems from the perspective of nondeterministic distributed algorithms. For a yes-instance there must exist a proof that can be verified with a distributed algorithm: all nodes must accept a valid proof, and at least one node must reject an invalid proof. We focus on locally checkable proofs that can be verified with a constant-time distributed algorithm. For example, it is easy to prove that a graph is bipartite: the locally checkable proof gives a 2-colouring of the graph, which only takes 1 bit per node. However, it is more difficult to prove that a graph is not bipartite: it turns out that any locally checkable proof requires Ω(log n) bits per node. In this work we classify graph problems according to their local proof complexity, i.e., how many bits per node are needed in a locally checkable proof. We establish tight or near-tight results for classical graph properties such as the chromatic number. We show that the proof complexities form a natural hierarchy of complexity classes: for many classical graph problems, the proof complexity is either 0, Θ(1), Θ(log n), or poly(n) bits per node. Among the most difficult graph properties are symmetric graphs, which require Ω(n^2) bits per node, and non-3-colourable graphs, which require Ω(n^2/log n) bits per node; any pure graph property admits a trivial proof of size O(n^2).
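The bipartiteness example can be made concrete: the prover labels each node with a single bit, and each node locally checks that its bit differs from every neighbour's bit. Below is a minimal sketch of such a local verifier (an illustration only; the graph and label representation are assumptions, not taken from the paper).

```python
def locally_verify_two_colouring(adj, label):
    """adj: dict mapping node -> list of neighbours,
    label: dict mapping node -> 0 or 1 (the 1-bit-per-node proof).
    Each node accepts iff its bit differs from all its neighbours' bits;
    the graph is bipartite iff some labelling makes every node accept."""
    return {v: all(label[v] != label[u] for u in adj[v]) for v in adj}

# Example: a 4-cycle (bipartite) together with a valid proof
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
label = {0: 0, 1: 1, 2: 0, 3: 1}
print(locally_verify_two_colouring(adj, label))  # every node accepts
```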
Abstract:
We study the following problem: given a geometric graph G and an integer k, determine if G has a planar spanning subgraph (with the original embedding and straight-line edges) such that all nodes have degree at least k. If G is a unit disk graph, the problem is trivial to solve for k = 1. We show that even the slightest deviation from the trivial case (e.g., quasi unit disk graphs or k = 2) leads to NP-hard problems.
Abstract:
We introduce a variation density function that profiles the relationship between multiple scalar fields over isosurfaces of a given scalar field. This profile serves as a valuable tool for multifield data exploration because it provides the user with cues to identify interesting isovalues of scalar fields. Existing isosurface-based techniques for scalar data exploration like Reeb graphs, contour spectra, isosurface statistics, etc., study a scalar field in isolation. We argue that the identification of interesting isovalues in a multifield data set should necessarily be based on the interaction between the different fields. We demonstrate the effectiveness of our approach by applying it to explore data from a wide variety of applications.
Abstract:
The boxicity of a graph G, denoted as boxi(G), is defined as the minimum integer t such that G is an intersection graph of axis-parallel t-dimensional boxes. A graph G is a k-leaf power if there exists a tree T such that the leaves of the tree correspond to the vertices of G and two vertices in G are adjacent if and only if their corresponding leaves in T are at a distance of at most k. Leaf powers are used in the construction of phylogenetic trees in evolutionary biology and have been studied in many recent papers. We show that for a k-leaf power G, boxi(G) <= k - 1. We also show the tightness of this bound by constructing a k-leaf power with boxicity equal to k - 1. This result implies that there exist strongly chordal graphs with arbitrarily high boxicity, which is somewhat counterintuitive.
Abstract:
A generalization of Nash-Williams' lemma is proved for the structure of m-uniform null (m − k)-designs. It is then applied to various graph reconstruction problems. A short combinatorial proof of the edge reconstructibility of digraphs having regular underlying undirected graphs (e.g., tournaments) is given. A type of Nash-Williams' lemma is conjectured for the vertex reconstruction problem.
Abstract:
Theoretical analysis of internal frequency doubling in actively mode locked broadband solid state lasers is presented. The analysis is used to study the dependence of mode locked pulsewidth on the second harmonic conversion efficiency, the modulation depth, and the tuning element bandwidth in an AM mode locked Ti:sapphire laser. The results are presented in the form of graphs.
Abstract:
Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network, to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a minimum-size set of key nodes that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the concept of the Shapley value, a well-known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPIN) algorithms for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient.

Note to Practitioners: In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, spreading a technology in the market using viral marketing techniques, etc. It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system, because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the most influential nodes, that is, the nodes which can strongly influence other nodes in the network. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximizes the number of nodes being influenced in the network, and 2) the lambda-coverage problem, which involves finding a set of influential nodes having minimum size that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm which is based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in terms of generality or computational complexity or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
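As a rough illustration of the Shapley-value idea (this is not the SPIN algorithms themselves, whose diffusion model and characteristic function come from the paper), one can estimate each node's Shapley value by sampling random permutations and accumulating its marginal contribution to a simple, hypothetical coverage function v(S) = |S ∪ N(S)|:

```python
import random

def shapley_coverage(adj, samples=2000, seed=0):
    """Monte Carlo estimate of Shapley values for the coverage game
    v(S) = |S union N(S)|, used here as a hypothetical stand-in for an
    influence function. adj: dict mapping node -> set of neighbours."""
    rng = random.Random(seed)
    nodes = list(adj)
    value = {v: 0.0 for v in nodes}
    for _ in range(samples):
        rng.shuffle(nodes)
        covered = set()
        for v in nodes:
            # marginal contribution of v: nodes newly covered when v joins
            gain = len(({v} | adj[v]) - covered)
            value[v] += gain
            covered |= {v} | adj[v]
    return {v: value[v] / samples for v in value}

# Rank the nodes of a small example graph by estimated Shapley value
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3}}
vals = shapley_coverage(adj)
print(sorted(vals, key=vals.get, reverse=True))  # the hub node 0 ranks first
```

Nodes are then picked greedily in decreasing order of estimated value to form the target set; the paper's experiments use influence models and data sets far richer than this toy coverage function.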
Abstract:
Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). A k-dimensional box is a Cartesian product of closed intervals [a_1, b_1] x [a_2, b_2] x ... x [a_k, b_k]. The boxicity of G, box(G), is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes, i.e. each vertex is mapped to a k-dimensional box and two vertices are adjacent in G if and only if their corresponding boxes intersect. Let P = (S, P) be a poset, where S is the ground set and P is a reflexive, anti-symmetric and transitive binary relation on S. The dimension of P, dim(P), is the minimum integer t such that P can be expressed as the intersection of t total orders. Let G(P) be the underlying comparability graph of P. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset P, box(G(P))/(chi(G(P)) - 1) <= dim(P) <= 2 box(G(P)), where chi(G(P)) is the chromatic number of G(P) and chi(G(P)) is not equal to 1. The second result of the paper relates the boxicity of a graph G to a natural partial order associated with its extended double cover, denoted G_c, a bipartite graph whose two parts A and B are copies of the vertex set of G. Let P_c be the natural height-2 poset associated with G_c, obtained by making A the set of minimal elements and B the set of maximal elements. We show that box(G)/2 <= dim(P_c) <= 2 box(G) + 4. These results have some immediate and significant consequences. The upper bound dim(P) <= 2 box(G(P)) allows us to derive hitherto unknown upper bounds for poset dimension. In the other direction, using the already known bounds for partial order dimension we get the following: (1) The boxicity of any graph with maximum degree Delta is O(Delta log^2 Delta), which is an improvement over the best known upper bound of Delta^2 + 2. (2) There exist graphs with boxicity Omega(Delta log Delta); this disproves a conjecture that the boxicity of a graph is O(Delta). (3) There exists no polynomial-time algorithm to approximate the boxicity of a bipartite graph on n vertices within a factor of O(n^(0.5 - epsilon)) for any epsilon > 0, unless NP = ZPP.
Abstract:
A boundary layer solution for the conjugate forced convection flow of an electrically conducting fluid over a semi-infinite flat plate in the presence of a transverse magnetic field is presented. The governing nonsimilar partial differential equations are solved numerically using the Keller box method. Values of the temperature profiles of the plate are obtained for various values of the parameters entering the problem and are given in a table and shown on graphs.
Abstract:
Tutte (1979) proved that the disconnected spanning subgraphs of a graph can be reconstructed from its vertex deck. This result is used to prove that if we can reconstruct a set of connected graphs from the shuffled edge deck (SED) then the vertex reconstruction conjecture is true. It is proved that a set of connected graphs can be reconstructed from the SED when all the graphs in the set are claw-free or all are P_4-free. Such a problem is also solved for a large subclass of the class of chordal graphs. This subclass contains maximal outerplanar graphs. Finally, two new conjectures, which imply the edge reconstruction conjecture, are presented. Conjecture 1 demands a construction of a stronger k-edge hypomorphism (to be defined later) from the edge hypomorphism. It is well known that Nash-Williams' theorem applies to a variety of structures. To prove Conjecture 2, we need to incorporate more graph theoretic information in Nash-Williams' theorem.
Abstract:
Discrete vortex simulations of the mixing layer carried out in the past have usually involved large induced velocity fluctuations, and thus demanded rather long time-averaging to obtain satisfactory values of Reynolds stresses and third-order moments. This difficulty has been traced here, in part, to the use of discrete vortices to model what in actuality are continuous vortex sheets. We propose here a novel two-dimensional vortex sheet technique for computing mixing layer flow in the limit of infinite Reynolds number. The method divides the vortex sheet into constant-strength linear elements, whose motions are computed using the Biot-Savart law. The downstream far-field is modelled by a steady vorticity distribution derived by application of conical similarity from the solution obtained in a finite computational domain. The boundary condition on the splitter plate is satisfied rigorously using a doublet sheet. The computed large-scale roll-up of the vortex sheet is qualitatively similar to experimentally obtained shadowgraphs of the plane turbulent mixing layer. The mean streamwise velocity profile and the growth rate agree well with experimental data. The presently computed Reynolds stresses and third-order moments are comparable with experimental and previous vortex-dynamical results, without using any external parameter (such as the vortex core-size) of the kind often used in the latter. The computed autocorrelations are qualitatively similar to experimental results along the top and bottom edges of the mixing layer, and show a well-defined periodicity along the centreline. The accuracy of the present computation is independently established by demonstrating negligibly small changes in the five invariants (including the Hamiltonian) of vortex dynamics.
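For reference, the two-dimensional Biot-Savart law used to advect such vortex-sheet elements can be written in the following standard textbook form (given for orientation only; the sheet-strength symbol gamma and the sign/orientation convention are assumptions, not taken from the paper):

```latex
\mathbf{u}(\mathbf{x}) \;=\; \frac{1}{2\pi} \int \gamma(s)\,
\frac{\bigl(\mathbf{x}-\mathbf{x}(s)\bigr)^{\perp}}{\bigl|\mathbf{x}-\mathbf{x}(s)\bigr|^{2}}\,\mathrm{d}s,
\qquad (a, b)^{\perp} = (-b,\; a),
```

where gamma(s) is the local sheet strength (the jump in tangential velocity across the sheet) and x(s) parametrises the sheet by arclength. In a discretisation into constant-strength straight elements, as in standard vortex panel methods, the contribution of each element to this integral can be evaluated in closed form.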