976 results for Distance convex simple graphs
Abstract:
Root and root finding are concepts familiar to most branches of mathematics. In graph theory, H is a square root of G, and G is the square of H, if two distinct vertices x, y are adjacent in G if and only if they are at distance at most two in H. Graph squaring is a basic operation, and the literature contains a number of results about its properties. We study the characterization and recognition problems for graph powers. There are algorithmic and computational approaches to the decision problem of whether a given graph is a certain power of any graph. Polynomial-time algorithms are known for recognizing squares of graphs with girth at least six, while NP-completeness has been proven for squares of graphs with girth at most four. The girth-parameterized root-finding problem has remained open for squares of graphs with girth five. We settle the conjecture that recognizing squares of graphs with girth five is NP-complete. This result completes the dichotomy theorem for the square root finding problem.
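The defining condition of the graph square translates directly into a small computation. A minimal sketch, assuming the networkx library (the library choice and function names are mine, not the abstract's):

```python
import networkx as nx

def graph_square(H: nx.Graph) -> nx.Graph:
    """Return G = H^2: distinct vertices are adjacent in G exactly when
    they are at distance at most two in H."""
    G = nx.Graph()
    G.add_nodes_from(H.nodes)
    for u in H.nodes:
        # all vertices within distance 2 of u (including u itself)
        near = nx.single_source_shortest_path_length(H, u, cutoff=2)
        G.add_edges_from((u, v) for v in near if v != u)
    return G

# Sanity check: the square of a 5-cycle (girth 5) is the complete graph K5.
assert nx.is_isomorphic(graph_square(nx.cycle_graph(5)), nx.complete_graph(5))
```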
Abstract:
According to the List Colouring Conjecture, if G is a multigraph then χ'(G) = χ'_l(G). In this thesis, we discuss a relaxed version of this conjecture, namely that every simple graph G is edge-(∆+1)-choosable, since by Vizing's Theorem ∆(G) ≤ χ'(G) ≤ ∆(G) + 1. We prove that if G is a planar graph without 7-cycles and with ∆(G) ≠ 5, 6, or without adjacent 4-cycles and with ∆(G) ≠ 5, or with no 3-cycles adjacent to 5-cycles, then G is edge-(∆+1)-choosable.
Abstract:
A graph G is clique irreducible if every clique in G of size at least two has an edge which does not lie in any other clique of G, and clique reducible if it is not clique irreducible. A graph G is clique vertex irreducible if every clique in G has a vertex which does not lie in any other clique of G, and clique vertex reducible if it is not clique vertex irreducible. The clique vertex irreducibility and clique irreducibility of graphs which are non-complete extended p-sums (NEPS) of two graphs are studied. We prove that if G(c) has at least two non-trivial components then G is clique vertex reducible, and if it has at least three non-trivial components then G is clique reducible. The cographs and the distance hereditary graphs which are clique vertex irreducible and clique irreducible are also recursively characterized.
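As an illustration of these definitions (assuming, as is usual in this literature, that "clique" means a maximal complete subgraph), a brute-force check could look like the sketch below; it is exponential in the worst case and only meant to make the definitions concrete:

```python
import networkx as nx
from itertools import combinations

def is_clique_irreducible(G: nx.Graph) -> bool:
    """Every maximal clique of size >= 2 must contain an edge lying in no
    other maximal clique of G."""
    cliques = [frozenset(c) for c in nx.find_cliques(G)]
    for C in cliques:
        if len(C) < 2:
            continue
        has_private_edge = any(
            sum(1 for D in cliques if {u, v} <= D) == 1
            for u, v in combinations(C, 2)
        )
        if not has_private_edge:
            return False
    return True

def is_clique_vertex_irreducible(G: nx.Graph) -> bool:
    """Every maximal clique must contain a vertex lying in no other maximal clique."""
    cliques = [frozenset(c) for c in nx.find_cliques(G)]
    return all(
        any(sum(1 for D in cliques if v in D) == 1 for v in C)
        for C in cliques
    )
```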
Abstract:
An antimedian of a profile (x1, x2, ..., xk) of vertices of a graph G is a vertex maximizing the sum of the distances to the elements of the profile. The antimedian function is defined on the set of all profiles on G and has as output the set of antimedians of a profile. It is a typical location function for finding a location for an obnoxious facility. The 'converse' of the antimedian function is the median function, where the distance sum is minimized. The median function is well studied. For instance, it has been characterized axiomatically by three simple axioms on median graphs. The median function behaves nicely on many classes of graphs. In contrast, the antimedian function does not have a nice behavior on most classes, so a nice axiomatic characterization may not be expected. In this paper such a characterization is obtained for the two classes of graphs on which the antimedian is well-behaved: paths and hypercubes.
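A minimal sketch of the antimedian function on a connected graph, assuming networkx (illustrative code, not taken from the paper); the median set is obtained by replacing max with min:

```python
import networkx as nx

def antimedian_set(G: nx.Graph, profile) -> set:
    """Vertices of G maximizing the sum of distances to the profile elements
    (the profile is a sequence of vertices, repetitions allowed)."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    score = {v: sum(dist[v][x] for x in profile) for v in G.nodes}
    worst = max(score.values())
    return {v for v, s in score.items() if s == worst}

# Example on the path 0-1-2-3-4 with profile (1, 2): the antimedian set is {4}.
P5 = nx.path_graph(5)
assert antimedian_set(P5, (1, 2)) == {4}
```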
Abstract:
The Majority Strategy for finding medians of a set of clients on a graph can be relaxed in the following way: if we are at v, then we move to a neighbor w if there are at least as many clients closer to w than to v (thus ignoring the clients at equal distance from v and w). The graphs on which this Plurality Strategy always finds the set of all medians are precisely those for which the set of medians always induces a connected subgraph.
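One step of the Plurality Strategy, as described above, can be sketched as follows (an illustrative reading assuming an unweighted connected graph in networkx; not code from the paper):

```python
import networkx as nx

def plurality_step(G: nx.Graph, v, clients):
    """Return a neighbour w of v to move to, if at least as many clients are
    strictly closer to w than to v (clients equidistant from v and w are
    ignored); return None if no such neighbour exists."""
    d_v = nx.single_source_shortest_path_length(G, v)
    for w in G.neighbors(v):
        d_w = nx.single_source_shortest_path_length(G, w)
        closer_to_w = sum(1 for c in clients if d_w[c] < d_v[c])
        closer_to_v = sum(1 for c in clients if d_v[c] < d_w[c])
        if closer_to_w >= closer_to_v:
            return w
    return None
```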
Abstract:
The median problem is a classical problem in Location Theory: one searches for a location that minimizes the average distance to the sites of the clients. This is appropriate for desired facilities, such as a distribution center for a set of warehouses. More recently, for obnoxious facilities, the antimedian was studied; here one maximizes the average distance to the clients. In this paper the mixed case is studied. Clients are represented by a profile, which is a sequence of vertices with repetitions allowed. In a signed profile each element is provided with a sign from {+, −}. Thus one can take into account whether the client prefers the facility (with a + sign) or rejects it (with a − sign). The graphs for which all median sets, or all antimedian sets, are connected are characterized. Various consensus strategies for signed profiles are studied, amongst which Majority, Plurality and Scarcity. Hypercubes are the only graphs on which Majority produces the median set for all signed profiles. Finally, the antimedian sets are found by the Scarcity Strategy on, e.g., Hamming graphs, Johnson graphs and halfcubes.
Abstract:
Aitchison and Bacon-Shone (1999) considered convex linear combinations of compositions. In other words, they investigated compositions of compositions, where the mixing composition follows a logistic Normal distribution (or a perturbation process) and the compositions being mixed follow a logistic Normal distribution. In this paper, I investigate the extension to situations where the mixing composition varies with a number of dimensions. Examples would be where the mixing proportions vary with time or distance, or a combination of the two. Practical situations include a river where the mixing proportions vary along the river, or across a lake, possibly with a time trend. This is illustrated with a dataset similar to that used in the Aitchison and Bacon-Shone paper, which looked at how pollution in a loch depended on the pollution in the three rivers that feed the loch. Here, I explicitly model the variation in the linear combination across the loch, assuming that the mean of the logistic Normal distribution depends on the river flows and the relative distance from the source origins.
Abstract:
We present algorithms for computing approximate distance functions and shortest paths from a generalized source (point, segment, polygonal chain or polygonal region) on a weighted non-convex polyhedral surface in which obstacles (represented by polygonal chains or polygons) are allowed. We also describe an algorithm for discretizing distance functions by using graphics hardware capabilities. Finally, we present algorithms for computing discrete k-order Voronoi diagrams.
Abstract:
We propose a novel method for scoring the accuracy of protein binding site predictions: the Binding-site Distance Test (BDT) score. Recently, the Matthews Correlation Coefficient (MCC) has been used to evaluate binding site predictions, both by developers of new methods and by the assessors for the community-wide prediction experiment, CASP8. Whilst being a rigorous scoring method, the MCC does not take into account the actual 3D distance of the predicted residues from the observed binding site. Thus, an incorrectly predicted site that is nevertheless close to the observed binding site will obtain an identical score to the same number of non-binding residues predicted at random. The MCC is also somewhat affected by the subjectivity of determining observed binding residues and the ambiguity of choosing distance cutoffs. By contrast, the BDT method produces continuous scores ranging between 0 and 1, relating to the distance between the predicted and observed residues. Residues predicted close to the binding site score higher than those more distant, providing a better reflection of the true accuracy of predictions. The CASP8 function predictions were evaluated using both the MCC and BDT methods and the scores were compared. The BDT scores were found to correlate strongly with the MCC scores whilst also being less susceptible to the subjectivity of defining binding residues. We therefore suggest that this new simple score is a potentially more robust method for future evaluations of protein-ligand binding site predictions.
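The abstract does not reproduce the BDT formula, so the sketch below is only an illustration of the underlying idea (predicted residues contribute more the closer they lie to the observed binding site); the decay function and the d0 parameter are my assumptions, not the published definition:

```python
import numpy as np

def distance_weighted_score(pred_coords, obs_coords, d0=4.0):
    """Illustrative distance-weighted accuracy score in (0, 1] -- NOT the
    published BDT formula. pred_coords / obs_coords are (n, 3) arrays of
    residue coordinates for the predicted and observed binding sites."""
    pred, obs = np.asarray(pred_coords, float), np.asarray(obs_coords, float)
    if len(pred) == 0 or len(obs) == 0:
        return 0.0
    # distance from each predicted residue to its nearest observed binding residue
    d = np.linalg.norm(pred[:, None, :] - obs[None, :, :], axis=-1).min(axis=1)
    return float(np.mean(1.0 / (1.0 + (d / d0) ** 2)))
```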
Abstract:
We describe a simple comparative method for determining whether rates of diversification are correlated with continuous traits in species-level phylogenies. This involves comparing species' trait values with their net speciation rate (the number of nodes linking an extant species to the root, divided by the root-to-tip evolutionary distance), using a phylogenetically corrected correlation. We use simulations to examine the power of this test. We find that the approach has acceptable power to uncover relationships between speciation and a continuous trait and is robust to background random extinction; however, the power of the approach is reduced when the rate of trait evolution is decreased. The test has low power to relate diversification to traits when the extinction rate is correlated with the trait. Clearly, there are inherent limitations in using only data on extant species to infer correlates of extinction; however, this approach is potentially a powerful tool in analyzing correlates of speciation.
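The net speciation rate used in this test can be written explicitly; with notation of my own choosing, the abstract's definition for an extant species i reads:

```latex
\hat{\lambda}_i \;=\; \frac{N_i}{T_i}
```

where N_i is the number of nodes on the path linking species i to the root and T_i is the root-to-tip evolutionary distance (sum of branch lengths) for that species.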
Abstract:
Persistent contrails are an important climate impact of aviation which could potentially be reduced by re-routing aircraft to avoid contrailing; however, this generally increases both the flight length and its corresponding CO2 emissions. Here, we provide a simple framework to assess the trade-off between the climate impact of CO2 emissions and contrails for a single flight, in terms of the absolute global warming potential and absolute global temperature potential metrics for time horizons of 20, 50 and 100 years. We use the framework to illustrate the maximum extra distance (with no altitude changes) that can be added to a flight while still reducing its overall climate impact. Small aircraft can fly up to four times further to avoid contrailing than large aircraft. The results depend strongly on the applied metric and time horizon. Applying a conservative estimate of the uncertainty in the contrail radiative forcing and climate efficacy leads to a factor of 20 difference in the maximum extra distance that could be flown to avoid a contrail. The impact of re-routing on other climatically important aviation emissions could also be considered in this framework.
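The trade-off can be summarized by a break-even condition; schematically, with symbols of my own choosing (not notation from the paper), the maximum extra distance at time horizon H satisfies:

```latex
\Delta D_{\max}\; e_{\mathrm{CO_2}}\; M_{\mathrm{CO_2}}(H)
\;=\;
L_{\mathrm{contrail}}\; M_{\mathrm{contrail}}(H)
```

where e_CO2 is the CO2 emitted per unit of extra flight distance, L_contrail is the contrail length avoided, and M(H) denotes the chosen climate metric (AGWP or AGTP) per unit of CO2 emission or per unit of contrail length.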
Abstract:
In order to accelerate computing the convex hull of a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in O(n) time; second, no explicit sorting of the data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n)-time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points with a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found in experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n in the dataset, the greater the speedup factor achieved.
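The abstract does not spell the preconditioning step out; the sketch below shows one plausible bucket-based variant of the idea (keep, per integer x-column, only the extreme y-values), which preserves all hull vertices and outputs a simple polygonal chain. It is my assumption about the flavour of such a method, not the paper's exact algorithm:

```python
def precondition(points):
    """Keep only the lowest and highest point in each x-column.
    Any point strictly between them lies on a vertical segment and cannot be a
    hull vertex, so the reduced set contains the same convex hull; walking the
    lower extremes left-to-right and the upper extremes right-to-left gives a
    simple polygonal chain. No comparison sort is needed because the integer
    x-range is bounded."""
    if not points:
        return []
    lo, hi = {}, {}
    for x, y in points:
        if x not in lo or y < lo[x]:
            lo[x] = y
        if x not in hi or y > hi[x]:
            hi[x] = y
    xs = [x for x in range(min(lo), max(lo) + 1) if x in lo]
    lower = [(x, lo[x]) for x in xs]
    upper = [(x, hi[x]) for x in reversed(xs) if hi[x] != lo[x]]
    return lower + upper
```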
Abstract:
2D electrophoresis is a well-known method for protein separation which is extremely useful in the field of proteomics. Each spot in the image represents a protein accumulation, and the goal is to perform a differential analysis between pairs of images to study changes in protein content. It is thus necessary to register two images by finding spot correspondences. Although it may seem a simple task, the manual processing of this kind of image is generally very cumbersome, especially when strong variations between corresponding sets of spots are expected (e.g. strong non-linear deformations and outliers). In order to solve this problem, this paper proposes a new quadratic assignment formulation together with a correspondence estimation algorithm based on graph matching which takes into account the structural information between the detected spots. Each image is represented by a graph and the task is to find a maximum common subgraph. Successful experimental results using real data are presented, including an extensive comparative performance evaluation with ground-truth data.
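As a rough illustration of the graph-matching idea (not the algorithm proposed in the paper), a quadratic-assignment solver can be used to align the intra-image spot distance structures; the sketch below assumes SciPy and, for simplicity, the same number of detected spots in both gels:

```python
import numpy as np
from scipy.optimize import quadratic_assignment
from scipy.spatial.distance import cdist

def match_spots(spots_a, spots_b):
    """Illustrative sketch of structural spot matching via a QAP solver.
    Each gel is summarized by its intra-image spot distance matrix, and we
    look for the permutation that best aligns the two structures."""
    A = cdist(spots_a, spots_a)   # pairwise distances within image A
    B = cdist(spots_b, spots_b)   # pairwise distances within image B
    # maximizing trace(A P B P^T) corresponds to minimizing the Frobenius
    # mismatch between the permuted (symmetric) distance matrices
    res = quadratic_assignment(A, B, method="faq", options={"maximize": True})
    return res.col_ind            # spot i in A is matched to spot col_ind[i] in B
```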
Abstract:
We consider the problems of finding the maximum number of vertex-disjoint triangles (VTP) and edge-disjoint triangles (ETP) in a simple graph. Both problems are NP-hard. The algorithm with the best approximation ratio known so far for these problems has ratio 3/2 + ε, a result that follows from a more general algorithm for set packing obtained by Hurkens and Schrijver [On the size of systems of sets every t of which have an SDR, with an application to the worst-case ratio of heuristics for packing problems, SIAM J. Discrete Math. 2(1) (1989) 68-72]. We present improvements on the approximation ratio for restricted cases of VTP and ETP that are known to be APX-hard: we give an approximation algorithm for VTP on graphs with maximum degree 4 with ratio slightly less than 1.2, and for ETP on graphs with maximum degree 5 with ratio 4/3. We also present an exact linear-time algorithm for VTP on the class of indifference graphs.
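For concreteness, a trivial greedy baseline for VTP (not the approximation algorithms of the paper, which achieve the stated ratios) looks like this; it returns a maximal, though not necessarily maximum, family of vertex-disjoint triangles:

```python
import networkx as nx
from itertools import combinations

def greedy_vertex_disjoint_triangles(G: nx.Graph):
    """Repeatedly pick any triangle whose three vertices are still unused."""
    used = set()
    packing = []
    for u in G.nodes:
        if u in used:
            continue
        free_neighbours = [x for x in G.neighbors(u) if x not in used]
        for v, w in combinations(free_neighbours, 2):
            if G.has_edge(v, w):
                packing.append((u, v, w))
                used.update((u, v, w))
                break
    return packing
```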
Abstract:
Using the Luthar-Passi method, we investigate the classical Zassenhaus conjecture for the normalized unit group of the integral group ring of the Suzuki sporadic simple group Suz. As a consequence, for this group we confirm Kimmerle's conjecture on prime graphs.