990 results for NP-dur


Relevance: 10.00%

Abstract:

A unit cube in $k$ dimensions (a $k$-cube) is defined as the Cartesian product $R_1 \times R_2 \times \dots \times R_k$, where each $R_i$ (for $1 \le i \le k$) is a closed interval of the form $[a_i, a_i + 1]$ on the real line. A graph $G$ on $n$ nodes is said to be representable as the intersection of $k$-cubes (a cube representation in $k$ dimensions) if each vertex of $G$ can be mapped to a $k$-cube such that two vertices are adjacent in $G$ if and only if their corresponding $k$-cubes have a non-empty intersection. The cubicity of $G$, denoted $\mathrm{cub}(G)$, is the minimum $k$ for which $G$ can be represented as the intersection of $k$-cubes. An interesting aspect of cubicity is that many problems known to be NP-complete for general graphs admit polynomial-time deterministic algorithms or good approximation ratios on graphs of low cubicity. In most of these algorithms, computing a low-dimensional cube representation of the given graph is usually the first step. We give an $O(bw \cdot n)$ algorithm that computes a cube representation of a general graph $G$ in $bw + 1$ dimensions, given a bandwidth ordering of the vertices of $G$, where $bw$ is the bandwidth of $G$. As a consequence, we get $O(\Delta)$ upper bounds on the cubicity of many well-known graph classes, such as AT-free graphs, circular-arc graphs and cocomparability graphs, which have $O(\Delta)$ bandwidth. Thus we have: (1) $\mathrm{cub}(G) \le 3\Delta - 1$ if $G$ is an AT-free graph; (2) $\mathrm{cub}(G) \le 2\Delta + 1$ if $G$ is a circular-arc graph; (3) $\mathrm{cub}(G) \le 2\Delta$ if $G$ is a cocomparability graph. Moreover, for these graph classes there are constant-factor approximation algorithms for bandwidth computation that generate orderings of vertices with $O(\Delta)$ width, so we can generate cube representations of such graphs in $O(\Delta)$ dimensions in polynomial time.
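
As an illustration of the definition above (not of the paper's algorithm), the following Python sketch checks whether a given mapping of vertices to lower corners of $k$-cubes is a valid cube representation; it uses the fact that two closed unit cubes intersect exactly when their lower corners differ by at most 1 in every coordinate. The function name, the example graph and the corner coordinates are all hypothetical.

```python
from itertools import combinations

def realizes(vertices, edges, corners):
    """Check that mapping each vertex v to the k-cube whose side in
    dimension j is the closed interval [corners[v][j], corners[v][j] + 1]
    is a valid cube representation: two unit cubes intersect exactly
    when their lower corners differ by at most 1 in every coordinate."""
    edge_set = {frozenset(e) for e in edges}
    for u, v in combinations(vertices, 2):
        meet = all(abs(a - b) <= 1 for a, b in zip(corners[u], corners[v]))
        if meet != (frozenset((u, v)) in edge_set):
            return False
    return True

# The 4-cycle a-b-c-d has cubicity 2; one valid set of lower corners:
corners = {"a": (0.5, 0.0), "b": (0.0, 0.5), "c": (0.6, 1.1), "d": (1.1, 0.6)}
print(realizes("abcd", [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")], corners))
# -> True
```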

Relevance: 10.00%

Abstract:

In this paper, we exploit the idea of decomposition to match buyers and sellers in an electronic exchange for trading large volumes of homogeneous goods, where the buyers and sellers specify marginal-decreasing piecewise-constant price curves to capture volume discounts. Such exchanges are relevant for automated trading in many e-business applications. The problem of determining winners and Vickrey prices in such exchanges is known to have a worst-case complexity equal to that of as many as $(1 + m + n)$ NP-hard problems, where $m$ is the number of buyers and $n$ is the number of sellers. Our method decomposes the overall exchange problem into two separate and simpler problems: (1) a forward auction and (2) a reverse auction, which turn out to be generalized knapsack problems. In the proposed approach, we first determine the quantity of units to be traded between the sellers and the buyers using fast heuristics we have developed. Next, we solve the forward auction and the reverse auction using fully polynomial-time approximation schemes available in the literature. The proposed approach has worst-case polynomial time complexity, and our experimentation shows that it produces good-quality solutions to the problem. Note to Practitioners: In recent times, electronic marketplaces have provided an efficient way for businesses and consumers to trade goods and services. The use of innovative mechanisms and algorithms has made it possible to improve the efficiency of electronic marketplaces by enabling optimization of revenues for the marketplace and of utilities for the buyers and sellers. In this paper, we look at single-item, multi-unit electronic exchanges. These are electronic marketplaces where buyers submit bids and sellers submit asks for multiple units of a single item. We allow buyers and sellers to specify volume discounts using suitable functions. Such exchanges are relevant for high-volume business-to-business trading of standard products such as silicon wafers, very large-scale integrated chips, desktops, telecommunications equipment, commoditized goods, etc. The problem of determining winners and prices in such exchanges is known to involve solving many NP-hard problems. Our paper exploits the familiar idea of decomposition, uses certain algorithms from the literature, and develops two fast heuristics to solve the problem in a near-optimal way in worst-case polynomial time.
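
The authors' heuristics are not reproduced in the abstract; the Python sketch below only illustrates the kind of quantity-determination step described, via a simple greedy pairing of pooled buyer and seller price steps. It deliberately ignores the non-convexities that make the real winner-determination problem NP-hard, and all names and example data are hypothetical.

```python
def greedy_trade(buyer_curves, seller_curves):
    """Greedy quantity-matching sketch. Each curve is a list of
    (quantity, unit_price) steps. Buyer steps are pooled and served in
    decreasing price order, seller steps in increasing price order;
    units trade while the marginal buyer price is at least the
    marginal seller price."""
    bids = sorted([[p, q] for c in buyer_curves for q, p in c], reverse=True)
    asks = sorted([[p, q] for c in seller_curves for q, p in c])
    traded, surplus, i, j = 0, 0.0, 0, 0
    while i < len(bids) and j < len(asks) and bids[i][0] >= asks[j][0]:
        q = min(bids[i][1], asks[j][1])          # tradable units at this step
        traded += q
        surplus += q * (bids[i][0] - asks[j][0])
        bids[i][1] -= q
        asks[j][1] -= q
        if bids[i][1] == 0:
            i += 1
        if asks[j][1] == 0:
            j += 1
    return traded, surplus

# Two buyers and one seller; each step is (quantity, unit price):
buyers = [[(10, 5.0), (10, 4.0)], [(5, 4.5)]]
sellers = [[(12, 3.0), (12, 4.2)]]
print(greedy_trade(buyers, sellers))  # -> (15, 23.9)
```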

Relevance: 10.00%

Abstract:

In this paper we consider the problems of computing a minimum co-cycle basis and a minimum weakly fundamental co-cycle basis of a directed graph $G$. A co-cycle in $G$ corresponds to a vertex partition $(S, V \setminus S)$, and a $\{-1, 0, 1\}$ edge incidence vector is associated with each co-cycle. The vector space over $\mathbb{Q}$ generated by these vectors is the co-cycle space of $G$; equivalently, the co-cycle space is the orthogonal complement of the cycle space of $G$. The minimum co-cycle basis problem asks for a set of co-cycles that spans the co-cycle space of $G$ and whose sum of weights is minimum. Weakly fundamental co-cycle bases are a special class of co-cycle bases; they form a natural superclass of strictly fundamental co-cycle bases, and it is known that computing a minimum-weight strictly fundamental co-cycle basis is NP-hard. We show that the co-cycle basis corresponding to the cuts of a Gomory-Hu tree of the underlying undirected graph of $G$ is a minimum co-cycle basis of $G$, and that it is also weakly fundamental.
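
A minimal sketch of the construction stated in the abstract, using networkx's Gomory-Hu tree routine on a small weighted undirected graph: each of the $n-1$ tree edges yields a cut $(S, V \setminus S)$, i.e., one co-cycle of the basis. It assumes edge weights are stored in the `capacity` attribute; the example graph is made up.

```python
import networkx as nx

def min_cocycle_basis(G):
    """Sketch: a minimum co-cycle basis of (the underlying undirected
    graph of) G from the cuts of a Gomory-Hu tree. Each tree edge
    (u, v) of weight w yields the co-cycle (S, V \\ S), where S is u's
    side of the tree once (u, v) is removed."""
    T = nx.gomory_hu_tree(G, capacity="capacity")
    basis = []
    for u, v, w in list(T.edges(data="weight")):
        T.remove_edge(u, v)
        S = nx.node_connected_component(T, u)  # u's side of the cut
        T.add_edge(u, v, weight=w)
        basis.append((frozenset(S), w))
    return basis

G = nx.Graph()
for u, v, c in [(0, 1, 3), (1, 2, 2), (2, 3, 4), (3, 0, 1), (0, 2, 2)]:
    G.add_edge(u, v, capacity=c)
for S, w in min_cocycle_basis(G):
    print(sorted(S), "cut weight", w)
```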

Relevance: 10.00%

Abstract:

Bottleneck operations in a complex coal rail system cost mining companies millions of dollars. To address this issue, this paper investigates a real-world coal rail system and aims to optimise coal railing operations under constraints of limited resources (e.g., a limited number of locomotives and wagons). In the literature, most studies consider the train scheduling problem on a single-track railway network to be strongly NP-hard and thus develop metaheuristics as the main solution methods. In this paper, a new mathematical programming model is formulated and implemented in an optimisation programming language based on a constraint programming (CP) approach. A new depth-first search technique is developed and embedded in the CP model to obtain an optimised coal railing timetable efficiently. Computational experiments demonstrate that high-quality solutions are obtainable in industry-scale applications. To provide insightful decisions, a sensitivity analysis is conducted in terms of different scenarios and specific criteria.

Keywords: Train scheduling · Rail transportation · Coal mining · Constraint programming
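
The paper's CP model is not given in the abstract; the sketch below only illustrates a constraint-programming formulation of the core single-track conflict, here with Google OR-Tools CP-SAT rather than the optimisation-programming-language toolchain the paper uses. Train data, time units and names are assumptions for the example.

```python
from ortools.sat.python import cp_model

def schedule_single_track(releases, travel_time, horizon):
    """Illustrative CP model: each train must traverse a shared
    single-track segment, taking `travel_time` minutes, and no two
    trains may occupy the segment at once. `releases` maps a train id
    to its earliest entry time. Minimizes the makespan."""
    m = cp_model.CpModel()
    starts, ends, intervals = {}, {}, []
    for t, r in releases.items():
        starts[t] = m.NewIntVar(r, horizon, f"start_{t}")
        ends[t] = m.NewIntVar(r, horizon, f"end_{t}")
        intervals.append(
            m.NewIntervalVar(starts[t], travel_time, ends[t], f"occ_{t}"))
    m.AddNoOverlap(intervals)                 # single-track exclusivity
    makespan = m.NewIntVar(0, horizon, "makespan")
    m.AddMaxEquality(makespan, list(ends.values()))
    m.Minimize(makespan)
    solver = cp_model.CpSolver()
    if solver.Solve(m) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return {t: solver.Value(s) for t, s in starts.items()}

print(schedule_single_track({"A": 0, "B": 0, "C": 15}, travel_time=10, horizon=100))
```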

Relevance: 10.00%

Abstract:

The hybrid approach introduced by the authors for at-site modeling of annual and periodic streamflows in earlier works is extended to simulate multi-site multi-season streamflows, which is significant for integrated river basin planning studies. The hybrid model involves: (i) partial pre-whitening of standardized multi-season streamflows at each site using a parsimonious linear periodic model; (ii) contemporaneous resampling of the resulting residuals with an appropriate block size, using the moving block bootstrap (non-parametric, NP) technique; and (iii) post-blackening the bootstrapped innovation series at each site, by adding back the corresponding parametric model component for the site, to obtain generated streamflows at each of the sites. The model gains significantly by effectively combining the merits of both parametric and NP models: it reproduces various statistics, including the dependence relationships at both spatial and temporal levels, without using any normalizing transformations and/or adjustment procedures. The potential of the hybrid model in reproducing a wide variety of statistics, including run characteristics, is demonstrated through an application to multi-site streamflow generation in the Upper Cauvery river basin, Southern India.
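
A minimal numpy sketch of step (ii), the moving block bootstrap: rows (time steps) are resampled in overlapping blocks, and because whole rows are drawn, the cross-site (spatial) dependence within each block is preserved. The function name, shapes and block size are illustrative assumptions.

```python
import numpy as np

def moving_block_bootstrap(residuals, block_size, rng=None):
    """Resample a multi-site residual series by drawing overlapping
    blocks of consecutive rows (time steps) with replacement.
    `residuals` has shape (n_time, n_sites)."""
    rng = np.random.default_rng(rng)
    n = residuals.shape[0]
    n_blocks = int(np.ceil(n / block_size))
    starts = rng.integers(0, n - block_size + 1, size=n_blocks)
    blocks = [residuals[s:s + block_size] for s in starts]
    return np.concatenate(blocks)[:n]     # trim to the original length

# 30 years of 12-season residuals at 3 sites, resampled in 4-season blocks:
res = np.random.default_rng(1).normal(size=(360, 3))
boot = moving_block_bootstrap(res, block_size=4, rng=42)
```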

Relevance: 10.00%

Abstract:

The results of the present investigation reveal that the presence of anions in the reacting medium greatly modifies the reactions between soil and solution P. Associating anions considerably reduce the retention of phosphate in soils. Citrate, tartrate, and silicate are found to be superior to arsenate, oxalate, and fluoride in reducing phosphate retention in soil. The performance of associating anions depends on the pH and P concentration of the reacting medium. The nature and properties of the soil also play a highly significant role in the effectiveness of associating anions.

Relevance: 10.00%

Abstract:

A suitable method for the selective isolation of catechol-cleaving yeasts from coir rets has been worked out. The yeast strains, all belonging to Debaryomyces hansenii, were found to require biotin as an essential vitamin. The organism has the ability to grow on catechol, phenol and some related compounds as the sole source of carbon. It tolerates 0.4% catechol and 0.26% phenol. Evidence was obtained that the catechol-cleaving enzyme of the isolates is a pyrocatechase. Some properties of the cell-free catechol oxygenase are described.

Relevance: 10.00%

Abstract:

1. A simple method has been devised for the estimation of phloroglucinol, based on the formation of an intensely colored compound with a modified Ehrlich reagent in the presence of trichloroacetic acid. Some factors affecting the formation of the color have been studied. 2. Careful regulation of the trichloroacetic acid content of the system permits the estimation of phloroglucinol in amounts of 1–15 μg in the micro range and 10–50 μg in the macro range. Phloroglucinol lends itself to ready separation by paper chromatography and estimation after elution from the paper.

Relevance: 10.00%

Abstract:

In this paper we introduce a nonlinear detector based on the phenomenon of suprathreshold stochastic resonance (SSR). We first present a model (an array of 1-bit quantizers) that exhibits the SSR phenomenon, and then use it as a pre-processor to the conventional matched filter. We employ the Neyman-Pearson (NP) detection strategy and compare the performances of the matched filter, the SSR-based detector and the optimal detector. Although the proposed detector is non-optimal, for non-Gaussian noises with heavy tails (leptokurtic) it shows better performance than the matched filter. In situations where the noise is known to be leptokurtic but its exact distribution is unavailable, the proposed detector is therefore a better choice than the matched filter.
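
A numpy sketch of the pre-processing idea described above: an array of 1-bit quantizers, each seeing the input plus independent internal noise, whose binary outputs are averaged before a matched-filter correlation is formed. The device count, noise levels, template and signal amplitude are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ssr_frontend(x, n_devices=63, noise_std=1.0, threshold=0.0, rng=None):
    """Array-of-1-bit-quantizers front end: each device outputs 1 when
    its noisy copy of the input exceeds the threshold; the outputs are
    averaged across devices."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, noise_std, size=(n_devices,) + x.shape)
    return (x + noise > threshold).sum(axis=0) / n_devices

def matched_filter_stat(y, template):
    """NP-style test statistic: correlate the (pre-processed)
    observation with the known signal template."""
    return float(np.dot(y, template))

template = np.sin(2 * np.pi * np.arange(64) / 16)    # known signal shape
rng = np.random.default_rng(0)
x = 0.3 * template + rng.standard_t(df=3, size=64)   # heavy-tailed noise
print(matched_filter_stat(ssr_frontend(x, rng=1), template))
```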

Relevance: 10.00%

Abstract:

The Tanner graph representation of linear block codes is widely used by iterative decoding algorithms for recovering data transmitted across a noisy communication channel from errors and erasures introduced by the channel. The stopping distance of a Tanner graph $T$ for a binary linear block code $C$ determines the number of erasures correctable using iterative decoding on $T$ when data is transmitted across a binary erasure channel using the code $C$. We show that the problem of finding the stopping distance of a Tanner graph is hard to approximate within any positive constant approximation ratio in polynomial time unless P = NP. As a consequence, it is also shown that there can be no approximation algorithm for the problem achieving an approximation ratio of $2^{(\log n)^{1-\epsilon}}$ for any $\epsilon > 0$ unless $\mathrm{NP} \subseteq \mathrm{DTIME}(n^{\mathrm{poly}(\log n)})$.
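
To make the quantity concrete, here is a brute-force Python computation of the stopping distance, using the standard definition the abstract presupposes: a nonempty set $S$ of variable nodes is a stopping set iff no check node has exactly one neighbour in $S$, and the stopping distance is the size of the smallest stopping set. The search is exponential, so this is only for tiny codes; the example parity-check matrix is one common choice for the [7,4] Hamming code.

```python
from itertools import combinations

def stopping_distance(H):
    """Smallest nonempty stopping set of the Tanner graph given by the
    parity-check matrix H (rows over {0,1}). S is a stopping set iff
    no check (row) has exactly one neighbour (one-entry) inside S."""
    n = len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if all(sum(row[j] for j in S) != 1 for row in H):
                return size
    return None

# A parity-check matrix of the [7,4] Hamming code:
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
print(stopping_distance(H))  # -> 3 for this matrix
```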

Relevance: 10.00%

Abstract:

The problem of assigning customers to satellite channels is considered. Finding an optimal allocation of customers to satellite channels is a difficult combinatorial optimization problem, shown to be NP-complete in an earlier study. We propose a genetic algorithm (GA) approach to search for the best/optimal assignment of customers to satellite channels. Various issues related to genetic algorithms, such as solution representation, selection methods, genetic operators and repair of invalid solutions, are presented. A comparison of this approach with a standard optimization method is presented to show its advantages in terms of computation time.
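
The abstract does not spell out the encoding or operators, so the Python sketch below is a generic GA for this kind of assignment problem, touching each issue mentioned: a direct customer-to-channel representation, truncation selection with elitism, one-point crossover, mutation, and repair of capacity-violating solutions. The cost matrix, capacities and parameters are hypothetical.

```python
import random

def ga_assign(cost, capacity, pop=60, gens=200, pmut=0.1, seed=0):
    """GA sketch: assign each customer i to a channel sol[i],
    minimizing sum of cost[i][sol[i]]; overloaded channels are
    repaired by moving customers to the cheapest channel with spare
    capacity (assumes total capacity >= number of customers)."""
    rng = random.Random(seed)
    n, m = len(cost), len(cost[0])

    def repair(sol):
        load = [sol.count(j) for j in range(m)]
        for i in range(n):
            j = sol[i]
            if load[j] > capacity[j]:
                k = min((jj for jj in range(m) if load[jj] < capacity[jj]),
                        key=lambda jj: cost[i][jj], default=j)
                load[j] -= 1; load[k] += 1; sol[i] = k
        return sol

    def fitness(sol):
        return sum(cost[i][sol[i]] for i in range(n))

    popn = [repair([rng.randrange(m) for _ in range(n)]) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        nxt = popn[:10]                      # elitism
        while len(nxt) < pop:
            a, b = rng.sample(popn[:30], 2)  # truncation selection
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]        # one-point crossover
            if rng.random() < pmut:          # mutation
                child[rng.randrange(n)] = rng.randrange(m)
            nxt.append(repair(child))
        popn = nxt
    best = min(popn, key=fitness)
    return best, fitness(best)

cost = [[1, 4, 2], [3, 1, 5], [2, 2, 1], [4, 1, 3]]
print(ga_assign(cost, capacity=[2, 2, 2]))
```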

Relevance: 10.00%

Abstract:

We consider the problem of matching people to jobs, where each person ranks a subset of jobs in an order of preference, possibly involving ties. There are several notions of optimality about how best to match each person to a job; in particular, popularity is a natural and appealing notion of optimality. However, popular matchings do not always provide an answer to the problem of determining an optimal matching, since there are simple instances that do not admit popular matchings. This motivates the following extension of the popular matchings problem: given a graph $G = (A \cup J, E)$, where $A$ is the set of people and $J$ is the set of jobs, and a list $\langle c_1, \dots, c_{|J|} \rangle$ of upper bounds on the capacities of the jobs, does there exist $(x_1, \dots, x_{|J|})$ with $1 \le x_i \le c_i$ for each $i$, such that setting the capacity of the $i$-th job to $x_i$ enables the resulting graph to admit a popular matching? In this paper we show that the above problem is NP-hard; indeed, it is NP-hard even when each $c_i$ is 1 or 2.
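
Since the problem is NP-hard, only brute force is straightforward; the Python sketch below states the decision question exactly as defined above, by enumerating every capacity vector and every capacity-respecting matching (exponential, tiny instances only). A matching $M$ is popular when no other matching is preferred by strictly more applicants than prefer $M$. All names and the example instance are hypothetical.

```python
from itertools import product

def votes(M1, M2, ranks):
    """Applicants who strictly prefer their job in M1 to that in M2.
    ranks[a][j] = a's rank of job j (smaller is better); being
    unmatched counts as rank infinity."""
    INF = float("inf")
    return sum(ranks[a].get(M1[a], INF) < ranks[a].get(M2[a], INF) for a in M1)

def all_matchings(applicants, ranks, caps):
    """Every assignment of applicants to acceptable jobs (or None)
    respecting the job capacities `caps`."""
    choices = [list(ranks[a]) + [None] for a in applicants]
    for combo in product(*choices):
        load = {}
        for j in combo:
            if j is not None:
                load[j] = load.get(j, 0) + 1
        if all(load[j] <= caps[j] for j in load):
            yield dict(zip(applicants, combo))

def exists_popular(applicants, ranks, caps):
    ms = list(all_matchings(applicants, ranks, caps))
    return any(all(votes(M2, M, ranks) <= votes(M, M2, ranks) for M2 in ms)
               for M in ms)

def capacity_choice_admits_popular(applicants, ranks, caps_upper):
    """The NP-hard question, brute-forced: is there x with
    1 <= x_i <= c_i so that a popular matching exists?"""
    jobs = list(caps_upper)
    for x in product(*[range(1, caps_upper[j] + 1) for j in jobs]):
        if exists_popular(applicants, ranks, dict(zip(jobs, x))):
            return dict(zip(jobs, x))
    return None

ranks = {"a1": {"j1": 1, "j2": 2}, "a2": {"j1": 1, "j2": 2},
         "a3": {"j1": 1, "j2": 2}}
print(capacity_choice_admits_popular(["a1", "a2", "a3"], ranks,
                                     {"j1": 2, "j2": 2}))
# prints a feasible capacity vector, or None if no choice works
```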

Relevance: 10.00%

Abstract:

The $k$-colouring problem is to colour a given $k$-colourable graph with $k$ colours. This problem is known to be NP-hard even for fixed $k \ge 3$. The best known polynomial-time approximation algorithms require $n^{\delta}$ colours (for a positive constant $\delta$ depending on $k$) to colour an arbitrary $k$-colourable $n$-vertex graph. The situation is entirely different if we look at the average-case performance of an algorithm rather than its worst-case performance. It is well known that a $k$-colourable graph drawn from certain classes of distributions can be $k$-coloured almost surely in polynomial time. In this paper, we present further results in this direction. We consider $k$-colourable graphs drawn from the random model in which each allowed edge is chosen independently with probability $p(n)$ after the vertex set is initially partitioned into $k$ colour classes. We present polynomial-time algorithms of two different types. The first type always runs in polynomial time and succeeds almost surely. Algorithms of this type have been proposed before, but our algorithms have provably exponentially small failure probabilities. The second type always succeeds and has polynomial running time on average. Such algorithms are more useful, and more difficult to obtain, than those of the first type. Our algorithms work as long as $p(n) \ge n^{-1+\epsilon}$, where $\epsilon$ is a constant greater than $1/4$.
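
For concreteness, the Python sketch below samples from the random model just described (the colouring algorithms themselves are not in the abstract). It realizes the partition by assigning colours uniformly at random, which is one simple way to instantiate the model; the parameters are illustrative.

```python
import random

def planted_k_colourable(n, k, p, seed=0):
    """The planted model: partition n vertices into k colour classes,
    then include each allowed edge (endpoints in different classes)
    independently with probability p. Returns (edges, planted colouring)."""
    rng = random.Random(seed)
    colour = {v: rng.randrange(k) for v in range(n)}
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if colour[u] != colour[v] and rng.random() < p]
    return edges, colour

# p(n) = n^(-1+eps) with eps = 0.3 > 1/4, the regime the algorithms handle:
n, k = 2000, 3
edges, planted = planted_k_colourable(n, k, p=n ** (-0.7))
print(len(edges), "edges")
```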

Relevance: 10.00%

Abstract:

A person's official existence does not end with physiological death. It ends with institutional language use: the writing of a death certificate or a declaration of death. For the great majority, the official conclusion of life is a death certificate written by a physician. It is a medico-legal document that brings about a new institutional state of affairs: a life that has ended. In this thesis I examine the language use of 40 death certificates. I map out their central linguistic features. I also show how the physician presents a narrative in the death certificate in which narration not grammatically anchored in past time functions as an act that renders the text anti-narrative and opens the way to meaning-making at a meta level. The central framework of the thesis is the theory of performativity. I argue that the medical course of death presented in a death certificate does not exist as such outside the physician's language use and the institutional activity that dictates its framework. The patient becomes a medical case in the very language use by which the case is presented. For this reason, the linguistic elements of the death certificate are themselves performative in many respects: language use that brings into being the state of affairs it presents. Death certificates are typically characterized by narration based on independent participles and verbless construction. Of the finite verb forms, the preterite and the dramatic present are used. The active NUT participle reports on the patient's condition, the passive TU participle on the actions of the healthcare institution. Since compound tenses are not among the central linguistic features of the data, there is no justification for analyzing participial narration on the assumption of auxiliary-verb ellipsis. The verbless structures resemble syntactic clause types or are free NPs. If a verbless structure uses an action noun, or a medical term with the meaning of an action noun, the structure is functionally close to the dramatic present: it reports an action or event without being grammatically anchored in past time. The analyses show that the dramatic present and verbless construction operate as parts of the same linguistic strategy. Both are also used in the typical context of the dramatic present: at the turning points and climax of the narrative. The thesis offers a detailed picture especially of the functions of the dramatic present in death certificates. The overall picture of its contexts of occurrence shows that it places the physician in the position of an agent or an observer. In connection with impersonal passives and explicit performatives, it reports institutional activity based on cooperation. Present-tense derivatives of change, in turn, shift the physician from the role of agent to that of observer and present the patient as a non-agentive undergoer of the situation. If a marker of a counter-expectational relation is used with a derivative of change, the physician expresses surprise at the way death is progressing. In this way the physician foregrounds the power of death and one of the realities of the profession: the healthcare institution is committed to helping the patient, but in inevitable situations even the physician can be no more than an eyewitness.

Relevance: 10.00%

Abstract:

A $k$-dimensional box is the Cartesian product $R_1 \times R_2 \times \dots \times R_k$, where each $R_i$ is a closed interval on the real line. The boxicity of a graph $G$, denoted $\mathrm{box}(G)$, is the minimum integer $k$ such that $G$ can be represented as the intersection graph of a collection of $k$-dimensional boxes. A unit cube in $k$-dimensional space, or a $k$-cube, is defined as the Cartesian product $R_1 \times R_2 \times \dots \times R_k$ where each $R_i$ is a closed interval on the real line of the form $[a_i, a_i + 1]$. The cubicity of $G$, denoted $\mathrm{cub}(G)$, is the minimum integer $k$ such that $G$ can be represented as the intersection graph of a collection of $k$-cubes. The threshold dimension of a graph $G(V, E)$ is the smallest integer $k$ such that $E$ can be covered by $k$ threshold spanning subgraphs of $G$. In this paper we show that there exists no polynomial-time algorithm for approximating the threshold dimension of a graph on $n$ vertices within a factor of $O(n^{0.5-\epsilon})$ for any $\epsilon > 0$ unless NP = ZPP. From this result we show that there exists no polynomial-time algorithm for approximating the boxicity or the cubicity of a graph on $n$ vertices within a factor of $O(n^{0.5-\epsilon})$ for any $\epsilon > 0$ unless NP = ZPP. In fact, all these hardness results hold even for a highly structured class of graphs, namely split graphs. We also show that it is NP-complete to determine whether a given split graph has boxicity at most 3.
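
As a companion to the definitions (the hardness proofs are of course not sketchable here), the following Python snippet builds the intersection graph of a collection of $k$-dimensional boxes; boxicity is then the least $k$ for which some such collection realizes a given graph. The example boxes, which realize the 4-cycle in 2 dimensions, are a hypothetical illustration.

```python
from itertools import combinations

def box_intersection_graph(boxes):
    """Intersection graph of k-dimensional boxes. boxes[v] is a list of
    k closed intervals (lo, hi); two boxes intersect iff their
    intervals overlap in every dimension."""
    edges = []
    for u, v in combinations(boxes, 2):
        if all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(boxes[u], boxes[v])):
            edges.append((u, v))
    return edges

# The 4-cycle a-b-c-d realized by 2-dimensional boxes (box(C4) = 2):
boxes = {"a": [(0, 1), (0, 3)], "b": [(1, 2), (0, 1)],
         "c": [(2, 3), (0, 3)], "d": [(0, 3), (2, 3)]}
print(box_intersection_graph(boxes))
# -> [('a', 'b'), ('a', 'd'), ('b', 'c'), ('c', 'd')]
```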