1000 results for Clique Number


Relevance:

60.00%

Publisher:

Abstract:

An axis-parallel b-dimensional box is a Cartesian product R_1 × R_2 × ... × R_b, where each R_i (for 1 ≤ i ≤ b) is a closed interval of the form [a_i, b_i] on the real line. The boxicity of a graph G, box(G), is the minimum positive integer b such that G can be represented as the intersection graph of axis-parallel b-dimensional boxes. A b-dimensional cube is a Cartesian product R_1 × R_2 × ... × R_b, where each R_i (for 1 ≤ i ≤ b) is a closed interval of the form [a_i, a_i + 1] on the real line. When the boxes are restricted to be axis-parallel cubes in b dimensions, the minimum dimension b required to represent the graph is called the cubicity of the graph, denoted cub(G). In this paper we prove that cub(G) ≤ ⌈log₂ n⌉ box(G), where n is the number of vertices in the graph. We also show that this upper bound is tight. Some immediate consequences of the above result are: 1. Planar graphs have cubicity at most 3⌈log₂ n⌉. 2. Outerplanar graphs have cubicity at most 2⌈log₂ n⌉. 3. Any graph of treewidth tw has cubicity at most (tw + 2)⌈log₂ n⌉. Thus, chordal graphs have cubicity at most (ω + 1)⌈log₂ n⌉ and circular arc graphs have cubicity at most (2ω + 1)⌈log₂ n⌉, where ω is the clique number.
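
The abstract above rests on the notion of an intersection graph of axis-parallel boxes. The following minimal Python sketch makes that definition concrete; the helper names and the example boxes are illustrative assumptions, not material from the paper.

```python
from itertools import combinations

def boxes_intersect(p, q):
    # Axis-parallel boxes intersect iff their intervals overlap on every axis.
    return all(a1 <= b2 and a2 <= b1 for (a1, b1), (a2, b2) in zip(p, q))

def intersection_graph(boxes):
    # Edge set of the intersection graph of the given boxes.
    return {(u, v) for u, v in combinations(range(len(boxes)), 2)
            if boxes_intersect(boxes[u], boxes[v])}

# Example: box 0 meets boxes 1 and 2, which are disjoint from each other,
# so these three 2-dimensional boxes represent the path P3.
boxes = [[(0, 2), (0, 2)], [(1, 3), (1, 3)], [(-1, 0.5), (-1, 0.5)]]
print(intersection_graph(boxes))  # {(0, 1), (0, 2)}
```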

Relevance:

60.00%

Publisher:

Abstract:

Asen Bozhilov, Nedyalko Nenov - Let G be a graph on n vertices whose degree sequence is d1, d2, . . . , dn, and let V(G) be the vertex set of G. The degree of a vertex v is denoted by d(v). The smallest natural number r for which V(G) has an r-partition V(G) = V1 ∪ V2 ∪ · · · ∪ Vr, Vi ∩ Vj = ∅ for i ≠ j, such that d(v) ≤ n − |Vi| for all v ∈ Vi, i = 1, 2, . . . , r, is denoted by ϕ(G). In this work we prove the inequality ...
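
As a hedged illustration of the invariant ϕ(G) defined in this abstract, the brute-force Python sketch below checks every r-partition directly. The function names and example are assumptions, and the search is exponential in n, so it only serves to make the definition concrete.

```python
from itertools import product

def phi(n, edges):
    # ϕ(G): least r with a partition V1,...,Vr such that d(v) <= n - |Vi| for v in Vi.
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    for r in range(1, n + 1):
        # try every assignment of the n vertices to r (possibly empty) parts
        for assign in product(range(r), repeat=n):
            sizes = [assign.count(i) for i in range(r)]
            if all(deg[v] <= n - sizes[assign[v]] for v in range(n)):
                return r
    return n

# Example: in K3 every vertex has degree 2 = n - 1, so each part must be a
# singleton and ϕ(K3) = 3.
print(phi(3, [(0, 1), (0, 2), (1, 2)]))  # 3
```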

Relevance:

30.00%

Publisher:

Abstract:

The intention of this note is to motivate researchers to study Hadwiger's conjecture for circular arc graphs. Let η(G) denote the order of the largest clique minor of a graph G, and let χ(G) denote its chromatic number. Hadwiger's conjecture states that η(G) ≥ χ(G) and is one of the most important and difficult open problems in graph theory. From the point of view of researchers who are sceptical of the validity of the conjecture, it is interesting to study it for graph classes where η(G) is guaranteed not to grow too fast with respect to χ(G), since such classes are a reasonable place to look for possible counterexamples. We show that in any circular arc graph G, η(G) ≤ 2χ(G) − 1, and that there is a family attaining equality. So, it makes sense to study Hadwiger's conjecture for this family.

Relevance:

30.00%

Publisher:

Abstract:

The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. Over the last few decades, several learning algorithms have been proposed for probability distributions based on decomposable models, owing to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model given a maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms which approximates this problem with a computational complexity of O(k · n^2 log n) in the worst case, where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one with higher likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree, which can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and they have shown competitive behavior on the maximum likelihood problem. Due to their low computational complexity, they are especially well suited to high-dimensional domains.
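
Since the abstract describes the fractal tree algorithms as a natural extension of Chow and Liu's algorithm from k = 2 to larger k, a minimal sketch of that classical k = 2 base case may help: a maximum-weight spanning tree over pairwise mutual information. This is the textbook Chow-Liu procedure, not the paper's fractal tree algorithms; all names and the toy data are assumptions.

```python
import numpy as np

def mutual_information(x, y):
    # Empirical mutual information of two discrete (0/1) columns.
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(data):
    # Maximum-MI spanning tree over the columns of `data` (Prim's algorithm).
    n_vars = data.shape[1]
    in_tree, edges = {0}, []
    while len(in_tree) < n_vars:
        u, v, _ = max(((u, v, mutual_information(data[:, u], data[:, v]))
                       for u in in_tree for v in range(n_vars) if v not in in_tree),
                      key=lambda t: t[2])
        in_tree.add(v)
        edges.append((u, v))
    return edges

rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 1000)
x1 = np.where(rng.random(1000) < 0.9, x0, 1 - x0)  # strongly coupled to x0
x2 = rng.integers(0, 2, 1000)                      # independent noise
print(chow_liu_tree(np.column_stack([x0, x1, x2])))  # expect edge (0, 1) in the tree
```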

Relevance:

30.00%

Publisher:

Abstract:

When searching for characteristic subpatterns in potentially noisy graph data, it appears self-evident that having multiple observations would be better than having just one. However, it turns out that the inconsistencies introduced when different graph instances have different edge sets pose a serious challenge. In this work we address this challenge for the problem of finding maximum weighted cliques. We introduce the concept of the most persistent soft-clique. This is a subset of vertices that 1) is almost fully, or at least densely, connected, 2) occurs in all or almost all graph instances, and 3) has maximum weight. We present a measure of clique-ness that essentially counts the number of edges missing to make a subset of vertices into a clique. With this measure, we show that the problem of finding the most persistent soft-clique can be cast either as a) a max-min two-person game optimization problem, or b) a min-min soft-margin optimization problem. Both formulations lead to the same solution when using a partial Lagrangian method to solve the optimization problems. Through experiments on synthetic data and on real social network data, we show that the proposed method reliably finds soft cliques in graph data, even when it is distorted by random noise or unreliable observations.
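
The clique-ness measure mentioned above counts the edges a vertex subset lacks to be a clique. A plain Python sketch of that count follows (illustrative only; the paper's game-theoretic and soft-margin formulations are not reproduced here):

```python
from itertools import combinations

def missing_edges(subset, edges):
    # Number of pairs in `subset` that are not edges of the graph
    # (0 means the subset already induces a clique).
    edge_set = {frozenset(e) for e in edges}
    return sum(1 for pair in combinations(subset, 2)
               if frozenset(pair) not in edge_set)

# Example: in a 4-cycle, the whole vertex set is 2 edges short of being K4.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(missing_edges([0, 1, 2, 3], cycle))  # 2 (missing chords (0,2) and (1,3))
```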

Relevance:

30.00%

Publisher:

Abstract:

A family of quadratic programming problems whose optimal values are upper bounds on the independence number of a graph is introduced. Among this family, the quadratic programming problem which gives the best upper bound is identified. We also prove that the upper bound introduced by Hoffman and Lovász for regular graphs is a particular case of this family. In addition, some new results characterizing the class of graphs for which the independence number attains the optimal value of the above best upper bound are given. Finally, a polynomial-time algorithm for approximating the size of the maximum independent set of an arbitrary graph is described, and the computational experiments carried out on 36 DIMACS clique benchmark instances are reported.
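
For context on the regular-graph bound attributed to Hoffman and Lovász, the classical Hoffman ratio bound for a d-regular graph G on n vertices states α(G) ≤ n(−λ_min)/(d − λ_min), where λ_min is the smallest adjacency eigenvalue. The sketch below evaluates this textbook bound; it does not reproduce the paper's quadratic programming family, and all names are illustrative.

```python
import numpy as np

def hoffman_bound(adj):
    # Hoffman's ratio bound for a regular graph given as a 0/1 adjacency matrix.
    n = adj.shape[0]
    d = int(adj[0].sum())                  # regular: every row sums to d
    lam_min = np.linalg.eigvalsh(adj)[0]   # eigenvalues in ascending order
    return n * (-lam_min) / (d - lam_min)

# Example: the 5-cycle C5 is 2-regular with lambda_min = 2*cos(4*pi/5) ~ -1.618,
# giving a bound of ~2.236, while alpha(C5) = 2.
C5 = np.zeros((5, 5))
for i in range(5):
    C5[i, (i + 1) % 5] = C5[(i + 1) % 5, i] = 1
print(hoffman_bound(C5))  # ~2.236
```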

Relevance:

30.00%

Publisher:

Abstract:

For a fixed family F of graphs, an F-packing in a graph G is a set of pairwise vertex-disjoint subgraphs of G, each isomorphic to an element of F. Finding an F-packing that maximizes the number of covered edges is a natural generalization of the maximum matching problem, which is just the case F = {K_2}. In this paper we provide new approximation algorithms and hardness results for the K_r-packing problem, where K_r = {K_2, K_3, ..., K_r}. We show that already for r = 3 the K_r-packing problem is APX-complete, and, in fact, we show that it remains so even for graphs with maximum degree 4. On the positive side, we give an approximation algorithm with approximation ratio at most 2 for every fixed r. For r = 3, 4, 5 we obtain better approximations. For r = 3 we obtain a simple 3/2-approximation, achieving a known ratio that follows from a more involved algorithm of Halldorsson. For r = 4, we obtain a (3/2 + ε)-approximation, and for r = 5 we obtain a (25/14 + ε)-approximation.
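
To make the packing objective concrete, here is a naive greedy Python baseline that packs vertex-disjoint triangles first and then edges, counting covered edges. It is an assumption-laden illustration with none of the approximation guarantees stated in the abstract, and it is not the paper's algorithm.

```python
from itertools import combinations

def greedy_k3_packing(n, edges):
    # Greedy {K_2, K_3}-packing: vertex-disjoint triangles, then leftover edges.
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    used, covered = set(), 0
    # greedily take triangles (3 covered edges each)
    for u, v, w in combinations(range(n), 3):
        if {u, v, w} & used:
            continue
        if v in adj[u] and w in adj[u] and w in adj[v]:
            used |= {u, v, w}
            covered += 3
    # then greedily take remaining vertex-disjoint edges (1 covered edge each)
    for u, v in edges:
        if u not in used and v not in used:
            used |= {u, v}
            covered += 1
    return covered

# Example: in K4 the greedy picks one triangle (3 edges); the last vertex stays uncovered.
print(greedy_k3_packing(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))  # 3
```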

Relevance:

30.00%

Publisher:

Abstract:

This dissertation deals with the phenomenon of authorship on the internet, focusing on the reader-author: more specifically, on an author who was once a reader of an original story and who takes on the task of continuing that story (from the point where its first creator left it) or of modifying it so as to please himself and/or other potential readers. As Roland Barthes (2004) once stated, the writer will always be an imitator of writings prior to his time, never original. Through analyses of excerpts from the stories, their authorial notes, and the interaction with other readers in the comments, these creations of fan-authors (as the authors who dedicate themselves to writing this kind of story are called in this research) are used to study how the concept of authorship is changing today, since the author often loses his position upon coming into contact with some form of information technology, and now deals with so-called hybrid genres: texts that are not pure, that do not have a single defining characteristic by which ordinary manuals could classify them. It is therefore difficult to define what the author is, after all, in the era of digital genres. At a time when people all over the world spend more and more hours online every day, digital genres represent a new relationship between human beings and semiotic instruments, with many aspects still to be observed.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a new technique, referred to as watched subgraphs, which improves the performance of BBMC, a leading state-of-the-art exact solver for the maximum clique problem (MCP). It is based on the watched literals employed by modern SAT solvers for Boolean constraint propagation. In efficient SAT algorithms, a list of clauses is kept for each literal (the clauses are said to watch the literal) so that only those in the list are checked for constraint propagation when a (watched) literal is assigned during search. BBMC encodes vertex sets as bit strings, with each bit block, the size of the CPU register word, representing a subset of vertices (and the corresponding induced subgraph). The paper proposes watching two subgraphs of critical sets during MCP search in order to compute a number of basic operations efficiently. Reported results validate the approach as the size and density of problem instances rise, while achieving comparable performance in the general case.
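
A toy Python sketch of the bit-string encoding described above: vertex sets become integers, so the neighbourhood intersection at the heart of a branch-and-bound clique search is a single bitwise AND. Python's arbitrary-precision ints stand in for register-sized bit blocks; this is illustrative only, not BBMC itself.

```python
def bitset(vertices):
    # Encode a set of vertex indices as an integer bit string.
    b = 0
    for v in vertices:
        b |= 1 << v
    return b

# adjacency bitsets for the 4-cycle 0-1-2-3-0
adj = [bitset({1, 3}), bitset({0, 2}), bitset({1, 3}), bitset({0, 2})]

candidates = bitset({0, 1, 2, 3})
v = 0
# candidates that survive branching on vertex v: its neighbours only,
# computed with one bitwise AND instead of a set intersection
remaining = candidates & adj[v]
print(bin(remaining))  # 0b1010 -> vertices 1 and 3
```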

Relevance:

30.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 05C55.

Relevance:

20.00%

Publisher:

Abstract:

A new method for estimating the time to colonization by Methicillin-resistant Staphylococcus aureus (MRSA) in patients is developed in this paper. The time to MRSA colonization is modelled using a Bayesian smoothing approach for the hazard function. Two prior models are discussed: the first difference prior and the second difference prior. The second difference prior model gives smoother estimates of the hazard function and, when applied to data from an intensive care unit (ICU), clearly shows an increasing hazard up to day 13, followed by a decreasing hazard. The results clearly demonstrate that the hazard is not constant and provide a useful quantification of the effect of length of stay on the risk of MRSA colonization.
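
As a hedged sketch of the second-difference smoothing idea, the following Python code computes a MAP estimate of daily hazards under a Gaussian penalty on the second differences of the logit hazard. The toy data, the penalty weight, and all names are assumptions; the authors' full Bayesian model is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

events = np.array([1, 2, 4, 6, 5, 3, 2, 1, 1, 0, 0, 1])    # colonizations per day (toy data)
at_risk = np.array([60, 58, 55, 50, 44, 39, 36, 34, 32, 31, 30, 29])

def neg_log_posterior(theta, lam=10.0):
    # Binomial likelihood for daily hazards h_t = logit^-1(theta_t),
    # plus a Gaussian second-difference penalty acting as the smoothing prior.
    h = 1.0 / (1.0 + np.exp(-theta))
    loglik = np.sum(events * np.log(h) + (at_risk - events) * np.log(1 - h))
    penalty = lam * np.sum(np.diff(theta, n=2) ** 2)
    return -loglik + penalty

res = minimize(neg_log_posterior, np.zeros(len(events)), method="L-BFGS-B")
hazard = 1.0 / (1.0 + np.exp(-res.x))
print(np.round(hazard, 3))  # smoothed daily hazard estimates
```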

Relevance:

20.00%

Publisher:

Abstract:

Knowledge of the particle emission characteristics associated with forest fires and, in general, biomass burning is becoming increasingly important due to the impact of these emissions on human health. Of particular importance is developing a better understanding of the size distribution of particles generated from forest combustion under different environmental conditions, as well as the provision of emission factors for different particle size ranges. This study aimed to quantify particle emission factors from four types of wood found in South East Queensland forests: Spotted Gum (Corymbia citriodora), Red Gum (Eucalyptus tereticornis), Blood Gum (Eucalyptus intermedia), and Iron bark (Eucalyptus decorticans), under controlled laboratory conditions. The experimental set-up included a modified commercial stove connected to a dilution system designed for the conditions of the study. Measurements of particle number size distribution and concentration resulting from the burning of woods with a relatively homogeneous moisture content (in the range of 15 to 26%) and at different burning rates were performed using a TSI Scanning Mobility Particle Sizer (SMPS) in the size range from 10 to 600 nm, and a TSI Dust Trak for PM2.5. The results, in terms of the relationship between particle number size distribution and burning conditions for the different species, show that particle number emission factors and PM2.5 mass emission factors depend on the type of wood and on the burning rate (fast or slow burning). The average particle number emission factors for fast burning conditions are in the range of 3.3 × 10^15 to 5.7 × 10^15 particles/kg, and for PM2.5 in the range of 139 to 217 mg/kg.

Relevance:

20.00%

Publisher:

Abstract:

The measurement of submicrometre (< 1.0 µm) and ultrafine particle (diameter < 0.1 µm) number concentrations has attracted attention since the last decade because the potential health impacts associated with exposure to these particles can be more significant than those due to exposure to larger particles. At present, ultrafine particles are not regularly monitored and they are yet to be incorporated into air quality monitoring programs. As a result, very few studies have analysed long-term and spatial variations in ultrafine particle concentration, and none have been conducted in Australia. To address this gap in scientific knowledge, the aim of this research was to investigate the long-term trends and seasonal variations in particle number concentrations in Brisbane, Australia. Data collected over a five-year period were analysed using weighted regression models. Monthly mean concentrations in the morning (6:00-10:00) and the afternoon (16:00-19:00) were plotted against time in months, using the monthly variance as the weights. During the five-year period, submicrometre and ultrafine particle concentrations increased in the morning by 105.7% and 81.5%, respectively, whereas in the afternoon there was no significant trend. The morning concentrations were associated with fresh traffic emissions and the afternoon concentrations with the background. The statistical tests applied to the seasonal models, on the other hand, indicated that there was no seasonal component.

The spatial variation in size distribution in a large urban area was investigated using particle number size distribution data collected at nine different locations during different campaigns. The size distributions were represented by their modal structures and cumulative size distributions. Particle number peaked at around 30 nm, except at an isolated site dominated by diesel trucks, where it peaked at around 60 nm. It was found that ultrafine particles contributed 82%-90% of the total particle number. At the sites dominated by petrol vehicles, nanoparticles (< 50 nm) contributed 60%-70% of the total particle number, and at the site dominated by diesel trucks they contributed 50%. Although the sampling campaigns took place during different seasons and were of varying duration, these variations did not have an effect on the particle size distributions. The results suggested that the distributions were instead affected by differences in traffic composition and distance to the road.

To investigate the occurrence of nucleation events, that is, secondary particle formation from gaseous precursors, particle size distribution data collected over a 13-month period during five different campaigns were analysed. The study area was a complex urban environment influenced by anthropogenic and natural sources. The study introduced a new application of time series differencing for the identification of nucleation events. To evaluate the conditions favourable to nucleation, the meteorological conditions and gaseous concentrations prior to and during nucleation events were recorded. Gaseous concentrations did not exhibit a clear pattern of change. It was also found that nucleation was associated with sea breezes and long-range transport. The implication of this finding is that, whilst vehicles are the most important source of ultrafine particles, sea breezes and aged gaseous emissions play a more important role in secondary particle formation in the study area.