177 results for Graph generators
Abstract:
In this paper, we present numerical evidence that supports the notion of minimization in the sequence space of proteins for a target conformation. We use the conformations of real proteins in the Protein Data Bank (PDB) and present computationally efficient methods to identify the sequences with minimum energy. We use an edge-weighted connectivity graph for ranking the residue sites with a reduced amino acid alphabet, and then use continuous optimization to obtain the energy-minimizing sequences. Our methods enable the computation of a lower bound as well as a tight upper bound for the energy of a given conformation. We validate our results by using three different inter-residue energy matrices for five proteins from the PDB, and by comparing our energy-minimizing sequences with 80 million diverse sequences that are generated based on different considerations in each case. When we submitted some of our chosen energy-minimizing sequences to the Basic Local Alignment Search Tool (BLAST), we obtained sequences from the non-redundant protein sequence database that are similar to ours, with E-values of the order of 10^-7. In summary, we conclude that proteins show a trend towards minimizing energy in the sequence space but do not seem to adopt the global energy-minimizing sequence. The reason could be either that the existing energy matrices are not able to accurately represent the inter-residue interactions in the context of the protein environment, or that Nature does not push the optimization in the sequence space once the protein is able to perform its function.
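The quantity being minimized over the sequence space in this kind of study is a sum of pairwise inter-residue energies over the contacts of the fixed target conformation. A minimal sketch of such an energy function is given below; the contact list, the residue alphabet, and the energy values are hypothetical placeholders, not the energy matrices used in the paper.

```python
# Minimal sketch of a contact-based sequence energy: sum the inter-residue energy
# over the contacts of a fixed conformation.  All data below are hypothetical.

def sequence_energy(sequence, contacts, energy_matrix):
    """Sum pairwise inter-residue energies over the contacts (i, j) of a conformation."""
    return sum(energy_matrix[(min(sequence[i], sequence[j]),
                              max(sequence[i], sequence[j]))]
               for i, j in contacts)

# Hypothetical example: 5 residues, 3 contacts taken from some target conformation.
contacts = [(0, 2), (1, 3), (2, 4)]
energy_matrix = {('A', 'L'): -1.2, ('A', 'V'): -0.8, ('L', 'V'): -1.5,
                 ('A', 'A'): -0.5, ('L', 'L'): -1.9, ('V', 'V'): -1.1}
print(sequence_energy("ALVLA", contacts, energy_matrix))  # -3.5
```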
Abstract:
An axis-parallel k-dimensional box is a Cartesian product R_1 × R_2 × ... × R_k where R_i (for 1 ≤ i ≤ k) is a closed interval of the form [a_i, b_i] on the real line. For a graph G, its boxicity box(G) is the minimum dimension k such that G is representable as the intersection graph of (axis-parallel) boxes in k-dimensional space. The concept of boxicity finds applications in various areas such as ecology, operations research, etc. A number of NP-hard problems are either polynomial time solvable or have much better approximation ratios on low boxicity graphs. For example, the max-clique problem is polynomial time solvable on bounded boxicity graphs, and the maximum independent set problem for boxicity d graphs, given a box representation, has a ⌊1 + (1/c) log n⌋^(d-1) approximation ratio for any constant c ≥ 1 when d ≥ 2. In most cases, the first step usually is computing a low dimensional box representation of the given graph. Deciding whether the boxicity of a graph is at most 2 is itself NP-hard. We give an efficient randomized algorithm to construct a box representation of any graph G on n vertices in ⌊(Δ + 2) ln n⌋ dimensions, where Δ is the maximum degree of G. This algorithm implies that box(G) ≤ ⌊(Δ + 2) ln n⌋ for any graph G. Our bound is tight up to a factor of ln n. We also show that our randomized algorithm can be derandomized to get a polynomial time deterministic algorithm. Though our general upper bound is in terms of the maximum degree Δ, we show that for almost all graphs on n vertices, the boxicity is O(d_av ln n), where d_av is the average degree.
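Since computing a low dimensional box representation is typically the first step in these algorithms, a minimal sketch of what such a representation is may help: a family of axis-parallel boxes (one closed interval per dimension, per vertex) is checked against a graph's edge set, so that two vertices are adjacent exactly when their boxes intersect. The example graph and boxes are hypothetical; this only illustrates the definition, not the paper's randomized construction.

```python
from itertools import combinations

def boxes_intersect(box1, box2):
    """Axis-parallel boxes intersect iff their closed intervals overlap in every dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(box1, box2))

def is_box_representation(edges, boxes):
    """Check that u and v are adjacent if and only if their boxes intersect."""
    edge_set = {frozenset(e) for e in edges}
    return all((frozenset((u, v)) in edge_set) == boxes_intersect(boxes[u], boxes[v])
               for u, v in combinations(boxes, 2))

# Hypothetical example: the path a-b-c has boxicity 1 (it is an interval graph).
boxes = {'a': [(0, 1)], 'b': [(1, 2)], 'c': [(2, 3)]}
print(is_box_representation([('a', 'b'), ('b', 'c')], boxes))  # True
```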
Abstract:
The main objective of on-line dynamic security assessment is to take preventive action if required, or to decide on remedial action if a contingency actually occurs. Stability limits are obtained for different contingencies. The mode of instability is one of the outputs of dynamic security analysis. When a power system becomes unstable, it splits initially into two groups of generators, and there is a unique cutset in the transmission network, known as the critical cutset, across which the angles become unbounded. Knowledge of the critical cutset is additional information obtained from dynamic security assessment, which can be used for initiating preventive control actions, deciding emergency control actions, and adaptive out-of-step relaying. In this article, an analytical technique for the fast prediction of the critical cutset by system simulation for a short duration is presented. Case studies on the New England ten-generator system are presented. The article also suggests applications of the identification of critical cutsets.
Abstract:
An acyclic edge coloring of a graph is a proper edge coloring such that there are no bichromatic cycles. The acyclic chromatic index of a graph G is the minimum number k such that there is an acyclic edge coloring using k colors, and it is denoted by a′(G). From a result of Burstein it follows that all subcubic graphs are acyclically edge colorable using five colors. This result is tight, since there are 3-regular graphs which require five colors. In this paper we prove that any non-regular connected graph of maximum degree 3 is acyclically edge colorable using at most four colors. This result is tight, since all edge-maximal non-regular connected graphs of maximum degree 3 require four colors.
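The two conditions in the definition are easy to check mechanically: the coloring must be proper, and the union of any two color classes must be a forest. Below is a minimal checker using networkx; the example proper 3-edge-coloring of K4 (a standard 3-regular graph that needs five colors) is illustrative only and is not a construction from the paper.

```python
from itertools import combinations
import networkx as nx

def is_acyclic_edge_coloring(G, color):
    """color maps frozenset({u, v}) to a color; check properness and that every
    union of two color classes is a forest (i.e. no bichromatic cycle)."""
    # Properness: edges sharing an endpoint must receive distinct colors.
    for v in G:
        incident = [color[frozenset(e)] for e in G.edges(v)]
        if len(incident) != len(set(incident)):
            return False
    # Acyclicity of every 2-colored subgraph.
    for c1, c2 in combinations(set(color.values()), 2):
        H = G.edge_subgraph(e for e in G.edges()
                            if color[frozenset(e)] in (c1, c2))
        if H.number_of_edges() and not nx.is_forest(H):
            return False
    return True

# A proper 3-edge-coloring of K4 pairs opposite edges; each pair of color classes
# then forms a bichromatic 4-cycle, so this coloring is not acyclic.
K4 = nx.complete_graph(4)
matching_coloring = {frozenset({0, 1}): 1, frozenset({2, 3}): 1,
                     frozenset({0, 2}): 2, frozenset({1, 3}): 2,
                     frozenset({0, 3}): 3, frozenset({1, 2}): 3}
print(is_acyclic_edge_coloring(K4, matching_coloring))  # False
```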
Abstract:
The intention of this note is to motivate researchers to study Hadwiger's conjecture for circular arc graphs. Let η(G) denote the largest clique minor of a graph G, and let χ(G) denote its chromatic number. Hadwiger's conjecture states that η(G) ≥ χ(G) and is one of the most important and difficult open problems in graph theory. From the point of view of researchers who are sceptical of the validity of the conjecture, it is interesting to study the conjecture for graph classes where η(G) is guaranteed not to grow too fast with respect to χ(G), since such classes of graphs are indeed a reasonable place to look for possible counterexamples. We show that in any circular arc graph G, η(G) ≤ 2χ(G) − 1, and there is a family attaining equality. So it makes sense to study Hadwiger's conjecture for this family.
Abstract:
A d-dimensional box is a Cartesian product of d closed intervals on the real line. The boxicity of a graph is the minimum dimension d such that it is representable as the intersection graph of d-dimensional boxes. We give a short constructive proof that every graph with maximum degree D has boxicity at most 2D^2. We also conjecture that the best upper bound is linear in D.
Abstract:
A unit cube in k dimensions (k-cube) is defined as the Cartesian product R_1 × R_2 × ... × R_k where R_i (for 1 ≤ i ≤ k) is a closed interval of the form [a_i, a_i + 1] on the real line. A graph G on n nodes is said to be representable as the intersection of k-cubes (cube representation in k dimensions) if each vertex of G can be mapped to a k-cube such that two vertices are adjacent in G if and only if their corresponding k-cubes have a non-empty intersection. The cubicity of G, denoted cub(G), is the minimum k for which G can be represented as the intersection of k-cubes. An interesting aspect of cubicity is that many problems known to be NP-complete for general graphs have polynomial time deterministic algorithms or good approximation ratios on graphs of low cubicity. In most of these algorithms, computing a low dimensional cube representation of the given graph is usually the first step. We give an O(bw · n) algorithm to compute the cube representation of a general graph G in bw + 1 dimensions given a bandwidth ordering of the vertices of G, where bw is the bandwidth of G. As a consequence, we get O(Δ) upper bounds on the cubicity of many well-known graph classes such as AT-free graphs, circular-arc graphs and cocomparability graphs, which have O(Δ) bandwidth. Thus we have: 1. cub(G) ≤ 3Δ − 1, if G is an AT-free graph. 2. cub(G) ≤ 2Δ + 1, if G is a circular-arc graph. 3. cub(G) ≤ 2Δ, if G is a cocomparability graph. Also, for these graph classes there are constant factor approximation algorithms for bandwidth computation that generate orderings of vertices with O(Δ) width. We can thus generate the cube representation of such graphs in O(Δ) dimensions in polynomial time.
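The input to the algorithm is a bandwidth ordering, and the quantity bw that controls the number of dimensions is simply the largest stretch of any edge in that ordering. A minimal sketch of that measure follows; the example graph and orderings are hypothetical, and this is not the paper's cube construction itself.

```python
def ordering_width(edges, ordering):
    """Width of a vertex ordering: the maximum distance, within the ordering, between
    the endpoints of any edge.  The bandwidth bw of the graph is the minimum such
    width over all orderings; the construction above uses bw + 1 dimensions."""
    pos = {v: i for i, v in enumerate(ordering)}
    return max(abs(pos[u] - pos[v]) for u, v in edges)

# Hypothetical example: the 4-cycle a-b-c-d-a.
edges = [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')]
print(ordering_width(edges, ['a', 'b', 'c', 'd']))  # 3: the edge d-a stretches across
print(ordering_width(edges, ['a', 'b', 'd', 'c']))  # 2, which is the bandwidth of C4
```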
Abstract:
An acyclic edge coloring of a graph is a proper edge coloring such that there are no bichromatic cycles. The acyclic chromatic index of a graph is the minimum number k such that there is an acyclic edge coloring using k colors, and is denoted by a′(G). It was conjectured by Alon, Sudakov and Zaks (and earlier by Fiamcik) that a′(G) ≤ Δ + 2, where Δ = Δ(G) denotes the maximum degree of the graph. Alon et al. also raised the question whether the complete graphs of even order are the only regular graphs which require Δ + 2 colors to be acyclically edge colored. In this article, using a simple counting argument, we observe not only that this is not true, but in fact that all d-regular graphs with 2n vertices and d > n require at least d + 2 colors. We also show that a′(K_{n,n}) ≥ n + 2 when n is odd, using a more non-trivial argument. (Here K_{n,n} denotes the complete bipartite graph with n vertices on each side.) This lower bound for K_{n,n} can be shown to be tight for some families of complete bipartite graphs and for small values of n. We also infer that for every d, n such that d ≥ 5, n ≥ 2d + 3 and dn even, there exist d-regular graphs which require at least d + 2 colors to be acyclically edge colored. (C) 2009 Wiley Periodicals, Inc. J Graph Theory 63: 226-230, 2010.
Abstract:
The Hadwiger number η(G) of a graph G is the largest integer n for which the complete graph K_n on n vertices is a minor of G. Hadwiger conjectured that for every graph G, η(G) ≥ χ(G), where χ(G) is the chromatic number of G. In this paper, we study the Hadwiger number of the Cartesian product G □ H of graphs. As the main result of this paper, we prove that η(G_1 □ G_2) ≥ h√l (1 − o(1)) for any two graphs G_1 and G_2 with η(G_1) = h and η(G_2) = l. We show that the above lower bound is asymptotically best possible when h ≥ l. This asymptotically settles a question of Z. Miller (1978). As consequences of our main result, we show the following: 1. Let G be a connected graph, and let G = G_1 □ G_2 □ ... □ G_k be the (unique) prime factorization of G. Then G satisfies Hadwiger's conjecture if k ≥ 2 log log χ(G) + c′, where c′ is a constant. This improves the 2 log χ(G) + 3 bound in [2]. 2. Let G_1 and G_2 be two graphs such that χ(G_1) ≥ χ(G_2) ≥ c log^{1.5}(χ(G_1)), where c is a constant. Then G_1 □ G_2 satisfies Hadwiger's conjecture. 3. Hadwiger's conjecture is true for G^d (the Cartesian product of G taken d times) for every graph G and every d ≥ 2. This settles a question by Chandran and Sivadasan [2]. (They had shown that Hadwiger's conjecture is true for G^d if d ≥ 3.)
Abstract:
Let G = (V, E) be a simple, finite, undirected graph. For S ⊆ V, let δ(S, G) = {(u, v) ∈ E : u ∈ S and v ∈ V − S} and φ(S, G) = {v ∈ V − S : ∃ u ∈ S such that (u, v) ∈ E} be the edge and vertex boundary of S, respectively. Given an integer i, 1 ≤ i ≤ |V|, the edge and vertex isoperimetric values at i are defined as b_e(i, G) = min_{S ⊆ V, |S| = i} |δ(S, G)| and b_v(i, G) = min_{S ⊆ V, |S| = i} |φ(S, G)|, respectively. The edge (vertex) isoperimetric problem is to determine the value of b_e(i, G) (b_v(i, G)) for each i, 1 ≤ i ≤ |V|. If we add the further restriction that the set S should induce a connected subgraph of G, then the corresponding variation of the isoperimetric problem is known as the connected isoperimetric problem, and the connected edge (vertex) isoperimetric values are defined in a corresponding way. It turns out that the connected edge isoperimetric and the connected vertex isoperimetric values are equal at each i, 1 ≤ i ≤ |V|, if G is a tree. Therefore we use the notation b_c(i, T) to denote the connected edge (vertex) isoperimetric value of T at i. Hofstadter introduced the interesting concept of meta-Fibonacci sequences in his famous book "Gödel, Escher, Bach: An Eternal Golden Braid". The sequence he introduced is known as the Hofstadter sequence, and most of the problems he raised regarding it are still open. Since then, mathematicians have studied many other closely related meta-Fibonacci sequences, such as the Tanny, Conway and Conolly sequences. Let T_2 be the infinite complete binary tree. In this paper we relate the connected isoperimetric problem on T_2 to the Tanny sequence, which is defined by the recurrence relation a(i) = a(i − 1 − a(i − 1)) + a(i − 2 − a(i − 2)), a(0) = a(1) = a(2) = 1. In particular, we show that b_c(i, T_2) = i + 2 − 2a(i) for each i ≥ 1. We also propose efficient polynomial time algorithms to find the vertex isoperimetric values at i of bounded pathwidth and bounded treewidth graphs.
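The recurrence and the stated identity are easy to tabulate directly from the definitions in the abstract. A minimal, memoized sketch (the printed range is arbitrary):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def tanny(i):
    """Tanny sequence: a(i) = a(i-1-a(i-1)) + a(i-2-a(i-2)), a(0)=a(1)=a(2)=1."""
    if i <= 2:
        return 1
    return tanny(i - 1 - tanny(i - 1)) + tanny(i - 2 - tanny(i - 2))

def connected_isoperimetric_T2(i):
    """b_c(i, T_2) via the identity b_c(i, T_2) = i + 2 - 2a(i) from the abstract."""
    return i + 2 - 2 * tanny(i)

print([tanny(i) for i in range(1, 13)])
print([connected_isoperimetric_T2(i) for i in range(1, 13)])
```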
Abstract:
In this paper we consider the problems of computing a minimum co-cycle basis and a minimum weakly fundamental co-cycle basis of a directed graph G. A co-cycle in G corresponds to a vertex partition (S, V ∖ S), and a {−1, 0, 1} edge incidence vector is associated with each co-cycle. The vector space over ℚ generated by these vectors is the co-cycle space of G; alternately, the co-cycle space is the orthogonal complement of the cycle space of G. The minimum co-cycle basis problem asks for a set of co-cycles that span the co-cycle space of G and whose sum of weights is minimum. Weakly fundamental co-cycle bases are a special class of co-cycle bases; they form a natural superclass of strictly fundamental co-cycle bases, and it is known that computing a minimum weight strictly fundamental co-cycle basis is NP-hard. We show that the co-cycle basis corresponding to the cuts of a Gomory-Hu tree of the underlying undirected graph of G is a minimum co-cycle basis of G, and that it is also weakly fundamental.
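A minimal sketch of reading cuts off a Gomory-Hu tree, the object this result is built on: each tree edge splits the vertex set into a partition (S, V ∖ S), and that cut contributes one element of the basis. The example graph and weights are hypothetical, networkx's gomory_hu_tree is used for the tree itself, and forming the signed {−1, 0, 1} incidence vectors for a directed G is omitted.

```python
import networkx as nx

def gomory_hu_cuts(G, weight='weight'):
    """One vertex set S per Gomory-Hu tree edge; each (S, V \\ S) defines a co-cycle."""
    T = nx.gomory_hu_tree(G, capacity=weight)
    cuts = []
    for u, v in list(T.edges()):
        T_minus = T.copy()
        T_minus.remove_edge(u, v)                 # splitting the tree at this edge ...
        cuts.append(nx.node_connected_component(T_minus, u))  # ... yields one side of the cut
    return cuts

# Hypothetical weighted undirected example graph.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 3), (1, 2, 2), (2, 3, 4), (3, 0, 1), (0, 2, 2)])
for S in gomory_hu_cuts(G):
    print(sorted(S))
```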
Abstract:
We consider the problem of computing an approximate minimum cycle basis of an undirected edge-weighted graph G with m edges and n vertices; the extension to directed graphs is also discussed. In this problem, a {0, 1} incidence vector is associated with each cycle, and the vector space over F_2 generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, e.g. the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. We present two new algorithms to compute an approximate minimum cycle basis. For any integer k ≥ 1, we give (2k − 1)-approximation algorithms with expected running time O(k m n^{1+2/k} + m n^{(1+1/k)(ω−1)}) and deterministic running time O(n^{3+2/k}), respectively. Here ω is the best exponent of matrix multiplication; it is presently known that ω < 2.376. Both algorithms are o(m^ω) for dense graphs. This is the first time that any algorithm which computes sparse cycle bases with a guarantee drops below the Θ(m^ω) bound. We also present a 2-approximation algorithm with O(m^ω √n log n) expected running time, a linear time 2-approximation algorithm for planar graphs, and an O(n^3) time 2.42-approximation algorithm for the complete Euclidean graph in the plane.
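For small instances, an exact minimum cycle basis can be computed directly and used as a reference point when experimenting with approximations. A minimal sketch using networkx's minimum_cycle_basis follows; the example graph and weights are hypothetical, and this is not an implementation of the (2k − 1)-approximation algorithms described above.

```python
import networkx as nx

# Hypothetical weighted graph: m = 5 edges, n = 4 vertices, so the cycle space has
# dimension m - n + 1 = 2 and the basis contains two cycles.
G = nx.Graph()
G.add_weighted_edges_from([('a', 'b', 1), ('b', 'c', 1), ('c', 'a', 5),
                           ('c', 'd', 1), ('d', 'a', 1)])
basis = nx.minimum_cycle_basis(G, weight='weight')
print(basis)  # each cycle is reported as a list of its vertices
```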
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussions are couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞. In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence our results can be viewed as providing bounds for the performance with practical distributed schedulers.
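A toy experiment along these lines (illustrative only, not the paper's model or its scheduling analysis): drop n sensors uniformly in the unit square, connect them at an assumed connectivity-regime radius r = √(2 log n / n), root a BFS tree at one node standing in for the operator station, and use the tree depth as a crude proxy for the number of convergecast rounds needed to deliver the max.

```python
import math
import networkx as nx

def max_convergecast_depth(n, seed=0):
    """Depth of a BFS tree over a random geometric graph, used as a rough stand-in
    for the number of rounds a tree-based max computation needs (toy model)."""
    r = math.sqrt(2 * math.log(n) / n)           # assumed connectivity-regime radius
    G = nx.random_geometric_graph(n, r, seed=seed)
    if not nx.is_connected(G):
        return None                               # rare in this regime; skip such draws
    root = 0                                      # stands in for the operator station
    depths = nx.single_source_shortest_path_length(G, root)
    return max(depths.values())

for n in (100, 400, 1600):
    print(n, max_convergecast_depth(n))
```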
Abstract:
In this paper, an approach for automatic road extraction in an urban region using structural, spectral and geometric characteristics of roads is presented. Roads are extracted in two stages: pre-processing and road extraction. Initially, the image is pre-processed to improve the tolerance by reducing the clutter (which mostly represents buildings, parking lots, vegetation regions and other open spaces). The road segments are then extracted using Texture Progressive Analysis (TPA) and the Normalized-cut algorithm. The TPA technique uses binary segmentation based on three levels of texture statistical evaluation to extract road segments, whereas the Normalized-cut method for road extraction is a graph-based method that generates an optimal partition of road segments. The performance evaluation (quality measures) for road extraction using the TPA and Normalized-cut methods is compared. The experimental results show that the Normalized-cut method is efficient in extracting road segments in an urban region from high resolution satellite imagery.
Abstract:
This study views each protein structure as a network of noncovalent connections between amino acid side chains. Each amino acid in a protein structure is a node, and the strength of the noncovalent interactions between two amino acids is evaluated for edge determination. The protein structure graphs (PSGs) for 232 proteins have been constructed as a function of the cutoff of the amino acid interaction strength at a few carefully chosen values. Analysis of such PSGs constructed on the basis of edge weights has shown the following: 1), The PSGs exhibit a complex topological network behavior, which is dependent on the interaction cutoff chosen for PSG construction. 2), A transition is observed at a critical interaction cutoff, in all the proteins, as monitored by the size of the largest cluster (giant component) in the graph. Amazingly, this transition occurs within a narrow range of interaction cutoff for all the proteins, irrespective of the size or the fold topology. And 3), the amino acid preferences to be highly connected (hub frequency) have been evaluated as a function of the interaction cutoff. We observe that the aromatic residues along with arginine, histidine, and methionine act as strong hubs at high interaction cutoffs, whereas the hydrophobic leucine and isoleucine residues get added to these hubs at low interaction cutoffs, forming weak hubs. The hubs identified are found to play a role in bringing together different secondary structural elements in the tertiary structure of the proteins. They are also found to contribute to the additional stability of the thermophilic proteins when compared to their mesophilic counterparts and hence could be crucial for the folding and stability of the unique three-dimensional structure of proteins. Based on these results, we also predict a few residues in the thermophilic and mesophilic proteins that can be mutated to alter their thermal stability.
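A minimal sketch of the cutoff scan described above: keep an edge between two residues whenever their interaction strength is at least the cutoff, and track the size of the largest cluster (giant component) as the cutoff varies. The interaction strengths below are random placeholders, not the side-chain interaction strengths evaluated in the study.

```python
import random
import networkx as nx

# Placeholder pairwise "interaction strengths" between residues; in the study these
# come from evaluating noncovalent side-chain interactions in real structures.
random.seed(1)
n_residues = 60
strengths = {(i, j): random.uniform(0, 10)
             for i in range(n_residues) for j in range(i + 1, n_residues)}

for cutoff in (1, 3, 5, 7, 9):
    G = nx.Graph()
    G.add_nodes_from(range(n_residues))
    G.add_edges_from(pair for pair, s in strengths.items() if s >= cutoff)
    giant = max(len(c) for c in nx.connected_components(G))
    print(cutoff, giant)   # size of the largest cluster as a function of the cutoff
```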