930 results for indifference graph
Abstract:
The problem of determining whether a Tanner graph for a linear block code has a stopping set of a given size is shown to be NP-complete.
Abstract:
Tanner graph representation of linear block codes is widely used by iterative decoding algorithms for recovering data transmitted across a noisy communication channel from errors and erasures introduced by the channel. The stopping distance of a Tanner graph T for a binary linear block code C determines the number of erasures correctable using iterative decoding on the Tanner graph T when data is transmitted across a binary erasure channel using the code C. We show that the problem of finding the stopping distance of a Tanner graph is hard to approximate within any positive constant approximation ratio in polynomial time unless P = NP. It is also shown, as a consequence, that there can be no approximation algorithm for the problem achieving an approximation ratio of 2^((log n)^(1-ε)) for any ε > 0 unless NP ⊆ DTIME(n^(poly(log n))).
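The combinatorial object at stake here can be made concrete: a nonempty set S of variable nodes is a stopping set if every check node adjacent to S has at least two neighbours in S, and the stopping distance is the size of the smallest such S. The sketch below is our own illustration (the function name and the small example parity-check matrix, one form of the [7,4] Hamming code, are not from the paper); it only checks the defining property.

```python
# A minimal sketch: checks whether a set of variable nodes S forms a
# stopping set of the Tanner graph defined by a binary parity-check
# matrix H (rows = check nodes, columns = variable nodes).

def is_stopping_set(H, S):
    """H: list of 0/1 rows; S: set of column indices."""
    if not S:
        return False
    for row in H:
        touched = sum(1 for j in S if row[j] == 1)
        if touched == 1:          # a check node sees S exactly once,
            return False          # so this erasure pattern is resolvable
    return True

# Example parity-check matrix (one form of the [7,4] Hamming code).
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]
print(is_stopping_set(H, {0, 1, 2, 3}))  # True: every check sees S at least twice
print(is_stopping_set(H, {0}))           # False: some check sees S exactly once
```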
Abstract:
Conformance testing focuses on checking whether an implementation under test (IUT) behaves according to its specification. Typically, testers are interested in performing targeted tests that exercise certain features of the IUT. This intention is formalized as a test purpose. The tester needs a "strategy" to reach the goal specified by the test purpose. Also, for a particular test case, the strategy should tell the tester whether the IUT has passed, failed, or deviated from the test purpose. In [8], Jeron and Morel show how to compute, for a given finite state machine specification and a test purpose automaton, a complete test graph (CTG) which represents all test strategies. In this paper, we consider the case when the specification is a hierarchical state machine and show how to compute a hierarchical CTG which preserves the hierarchical structure of the specification. We also propose an algorithm for an online test oracle which avoids the space overhead associated with the CTG.
Abstract:
This work describes the parallelization of the High Resolution Flow Solver on Unstructured Meshes, HIFUN-3D, an unstructured-data-based finite volume solver for the 3-D Euler equations. For mesh partitioning, we use METIS, a software package based on multilevel graph partitioning. The unstructured graph used for partitioning carries weights on both its vertices and edges. The data residing on every processor is split into four layers. This novel procedure for handling data helps in maintaining the effectiveness of the serial code. The communication of data across the processors is achieved by explicit message passing using the standard blocking-mode feature of the Message Passing Interface (MPI). The parallel code is tested on PACE++128, available at the CFD Center.
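As a rough illustration of what a weighted-graph partitioner is asked to optimize here, the sketch below (our own; the data layout and function names are illustrative and unrelated to HIFUN-3D or the METIS API) evaluates a given partition by its weighted edge cut and vertex-weight imbalance.

```python
# A minimal sketch: evaluates a partition of a graph with weights on
# vertices (work per cell) and edges (communication volume), the two
# quantities a multilevel partitioner balances and minimizes.

def partition_quality(vwgt, adj, part, nparts):
    """vwgt[v]: vertex weight; adj[v]: list of (neighbour, edge_weight);
    part[v]: partition id of v. Returns (edge_cut, load_imbalance)."""
    edge_cut = 0
    load = [0] * nparts
    for v, w in enumerate(vwgt):
        load[part[v]] += w
        for u, ew in adj[v]:
            if part[u] != part[v] and u > v:   # count each cut edge once
                edge_cut += ew
    avg = sum(load) / nparts
    imbalance = max(load) / avg                # 1.0 means perfect balance
    return edge_cut, imbalance

# Tiny example: 4 vertices on a path, split into two parts.
vwgt = [2, 1, 1, 2]
adj = [[(1, 3)], [(0, 3), (2, 1)], [(1, 1), (3, 3)], [(2, 3)]]
print(partition_quality(vwgt, adj, [0, 0, 1, 1], 2))  # (1, 1.0)
```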
Abstract:
The physical design of a VLSI circuit involves circuit partitioning as a subtask. Typically, it is necessary to partition a large electrical circuit into several smaller circuits such that the total cross-wiring is minimized. This problem is a variant of the more general graph partitioning problem, and it is known that there does not exist a polynomial time algorithm to obtain an optimal partition. The heuristic procedure proposed by Kernighan and Lin [1,2] requires O(n² log₂ n) time to obtain a near-optimal two-way partition of a circuit with n modules. In the VLSI context, due to the large problem size involved, this computational requirement is unacceptably high. This paper is concerned with the hardware acceleration of the Kernighan-Lin procedure on an SIMD architecture. The proposed parallel partitioning algorithm requires O(n) processors and has a time complexity of O(n log₂ n). In the proposed scheme, the reduced array architecture is employed with due consideration of cost effectiveness and VLSI realizability of the architecture. The authors are not aware of any earlier attempts to parallelize a circuit partitioning algorithm in general or the Kernighan-Lin algorithm in particular. The use of the reduced array architecture is novel and opens up the possibility of using this computing structure for several other applications in electronic design automation.
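For readers unfamiliar with the Kernighan-Lin heuristic that the paper accelerates, the sketch below (a sequential illustration of one ingredient, not the proposed SIMD scheme; all names are ours) shows the gain computation that drives each pass: swapping a in part A with b in part B reduces the cut cost by D(a) + D(b) - 2·c(a, b), where D(v) is the external minus internal connection cost of v.

```python
# A minimal sketch of the Kernighan-Lin swap-gain computation.

def kl_best_swap(cost, A, B):
    """cost[u][v]: symmetric edge-weight matrix (0 if no edge);
    A, B: disjoint sets of vertex indices. Returns (gain, a, b)."""
    def D(v, own, other):
        return sum(cost[v][u] for u in other) - sum(cost[v][u] for u in own if u != v)

    best = (float("-inf"), None, None)
    for a in A:
        for b in B:
            gain = D(a, A, B) + D(b, B, A) - 2 * cost[a][b]
            if gain > best[0]:
                best = (gain, a, b)
    return best

# 4-vertex example where the initial two-way partition cuts 4 unit edges.
cost = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
]
print(kl_best_swap(cost, {0, 1}, {2, 3}))  # gain of 2: the cut drops from 4 to 2
```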
Abstract:
The max-coloring problem is to compute a legal coloring of the vertices of a graph G = (V, E) with a non-negative weight function w on V such that Σ_{i=1}^{k} max_{v∈C_i} w(v) is minimized, where C_1, ..., C_k are the color classes. Max-coloring general graphs is as hard as the classical vertex coloring problem, a special case in which vertices have unit weight. In fact, in some cases it can even be harder: for example, no polynomial time algorithm is known for max-coloring trees. In this paper we consider the problem of max-coloring paths and its generalization, max-coloring a broad class of trees, and show it can be solved in time O(|V| + time for sorting the vertex weights). When vertex weights belong to R, we show a matching lower bound of Ω(|V| log |V|) in the algebraic computation tree model.
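The objective being minimized is easy to state in code. The following sketch (ours; the names and the small path instance are illustrative, not from the paper) evaluates the max-coloring cost of a given proper coloring, which is the quantity the paper's algorithm for paths and trees optimizes.

```python
# A minimal sketch: computes sum_i max_{v in C_i} w(v) for a given
# proper coloring and verifies the coloring is legal on the given edges.

def max_coloring_cost(weights, coloring, edges):
    """weights[v]: weight of vertex v; coloring[v]: color class of v."""
    assert all(coloring[u] != coloring[v] for u, v in edges), "coloring not proper"
    heaviest = {}
    for v, w in weights.items():
        c = coloring[v]
        heaviest[c] = max(heaviest.get(c, 0), w)
    return sum(heaviest.values())

# Path a-b-c-d with weights 5, 1, 4, 1: the alternating 2-coloring
# pays max(5, 4) + max(1, 1) = 6.
weights = {"a": 5, "b": 1, "c": 4, "d": 1}
edges = [("a", "b"), ("b", "c"), ("c", "d")]
print(max_coloring_cost(weights, {"a": 0, "b": 1, "c": 0, "d": 1}, edges))  # 6
```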
Abstract:
We consider the problem of matching people to jobs, where each person ranks a subset of jobs in an order of preference, possibly involving ties. There are several notions of optimality about how to best match each person to a job; in particular, popularity is a natural and appealing notion of optimality. However, popular matchings do not always provide an answer to the problem of determining an optimal matching, since there are simple instances that do not admit popular matchings. This motivates the following extension of the popular matchings problem: given a graph G = (A ∪ J, E), where A is the set of people and J is the set of jobs, and a list ⟨c_1, ..., c_{|J|}⟩ denoting upper bounds on the capacities of the jobs, does there exist (x_1, ..., x_{|J|}) such that setting the capacity of the i-th job to x_i, where 1 ≤ x_i ≤ c_i for each i, enables the resulting graph to admit a popular matching? In this paper we show that the above problem is NP-hard. We show that the problem is NP-hard even when each c_i is 1 or 2.
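Popularity is defined by pairwise comparison of matchings: M is more popular than M' if more people prefer their job under M than under M'. The sketch below (our illustration; the instance and names are not from the paper) implements just this vote count.

```python
# A minimal sketch of the "more popular than" comparison: each person
# votes for the matching that gives them a better-ranked job, with being
# unmatched treated as worse than any ranked job.

def compare_matchings(pref_rank, M1, M2):
    """pref_rank[p][j]: rank person p gives job j (smaller = better);
    M1, M2: dicts person -> job (or None). Returns (votes for M1, votes for M2)."""
    INF = float("inf")
    votes1 = votes2 = 0
    for p, ranks in pref_rank.items():
        j1, j2 = M1.get(p), M2.get(p)
        r1 = ranks.get(j1, INF) if j1 is not None else INF
        r2 = ranks.get(j2, INF) if j2 is not None else INF
        if r1 < r2:
            votes1 += 1
        elif r2 < r1:
            votes2 += 1
    return votes1, votes2  # M1 is more popular than M2 iff votes1 > votes2

# Two people who both rank job j1 first and j2 second.
pref_rank = {"p1": {"j1": 1, "j2": 2}, "p2": {"j1": 1, "j2": 2}}
M1 = {"p1": "j1", "p2": "j2"}
M2 = {"p1": "j2", "p2": "j1"}
print(compare_matchings(pref_rank, M1, M2))  # (1, 1): neither is more popular
```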
Abstract:
One of the key problems in the design of any incompletely connected multiprocessor system is to appropriately assign the set of tasks in a program to the Processing Elements (PEs) in the system. The task assignment problem has proven difficult both in theory and in practice. This paper presents a simple and efficient heuristic algorithm for assigning program tasks with precedence and communication constraints to the PEs in a Message-based Multiple-bus Multiprocessor System, M3, so that the total execution time for the program is minimized. The algorithm uses a cost function: “Minimum Distance and Parallel Transfer” to minimize the completion time. The effectiveness of the algorithm has been demonstrated by comparing the results with (i) the lower bound on the execution time of a program (task) graph and (ii) a random assignment.
Abstract:
The phyllite deposit of Degana, Rajasthan, containing tungsten values in the form of wolframite, (Fe,Mn)WO₄, finely dispersed in the quartz groundmass, has been quantitatively analysed to give 0.063% WO₃, 6.66% Fe₂O₃, 14.30% Al₂O₃ and 67.4% SiO₂. The major gangue minerals identified are quartz, iron oxides and mica, along with minor amounts of graphite, fluorite and sulphides. The amenability of the ore to gravity concentration, magnetic separation and a combination of the two processes has been studied. A combination of tabling of the −100 mesh ground ore and dry magnetic separation of the tabled concentrate gave a final concentrate containing 1.834% WO₃ with an overall recovery of only 4.6%. The complex mineralogy combined with the fine dispersion of very low W values has contributed to the low recoveries and grades. Graph, photomicrographs. 10 ref.--AA
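As a quick sanity check on the reported figures (our own arithmetic, using the standard two-product relation between grade, recovery and mass yield; the abstract itself does not state the yield), the numbers imply that only a tiny fraction of the feed mass reports to the final concentrate.

```python
# recovery = mass_yield * concentrate_grade / feed_grade, so the implied
# mass yield to concentrate follows from the three reported figures.

feed_grade = 0.063      # % WO3 in the feed
conc_grade = 1.834      # % WO3 in the final concentrate
recovery = 4.6          # % of the contained WO3 recovered

mass_yield = recovery * feed_grade / conc_grade   # % of feed mass to concentrate
print(f"implied mass yield to concentrate: {mass_yield:.2f}%")  # about 0.16%
```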
Abstract:
Let G = (V, E) be a weighted undirected graph with nonnegative edge weights. An estimate δ̂(u, v) of the actual distance δ(u, v) between u, v ∈ V is said to be of stretch t if and only if δ(u, v) ≤ δ̂(u, v) ≤ t·δ(u, v). Computing all-pairs small-stretch distances efficiently (both in terms of time and space) is a well-studied problem in graph algorithms. We present a simple, novel, and generic scheme for all-pairs approximate shortest paths. Using this scheme and some new ideas and tools, we design faster algorithms for all-pairs t-stretch distances for a whole range of stretch t, and we also answer an open question posed by Thorup and Zwick in their seminal paper [J. ACM, 52 (2005), pp. 1-24].
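The stretch guarantee is straightforward to verify on a concrete instance. The sketch below (our illustration, not the paper's scheme; names and the tiny graph are ours) computes exact distances with Dijkstra's algorithm and checks a table of estimates against the stretch-t definition.

```python
# A minimal sketch: verifies that estimated distances have stretch at most t
# with respect to exact shortest-path distances.

import heapq

def dijkstra(adj, src):
    """adj[u]: list of (v, weight); returns exact distances from src."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def has_stretch(adj, estimate, t):
    """estimate[(u, v)]: claimed distance; checks d(u,v) <= estimate <= t*d(u,v)."""
    for u in adj:
        exact = dijkstra(adj, u)
        for v in adj:
            if v == u:
                continue
            d, est = exact[v], estimate[(u, v)]
            if not (d <= est <= t * d):
                return False
    return True

# Triangle with a shortcut: exact d(a, c) = 2 via b; an estimate of 3 is
# within stretch 2 but not stretch 1.4.
adj = {"a": [("b", 1), ("c", 3)], "b": [("a", 1), ("c", 1)], "c": [("a", 3), ("b", 1)]}
est = {(u, v): dijkstra(adj, u)[v] for u in adj for v in adj if u != v}
est[("a", "c")] = est[("c", "a")] = 3.0
print(has_stretch(adj, est, 2))    # True
print(has_stretch(adj, est, 1.4))  # False
```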
Abstract:
Splittings of a free group correspond to embedded spheres in the 3-manifold M = #_k(S² × S¹), the connected sum of k copies of S² × S¹. These can be represented in a normal form due to Hatcher. In this paper, we determine the normal form in terms of crossings of partitions of ends corresponding to normal spheres, using a graph-of-trees representation for normal forms. In particular, we give a constructive proof of a criterion determining when a conjugacy class in π₂(M) can be represented by an embedded sphere.
Abstract:
The Reeb graph of a scalar function represents the evolution of the topology of its level sets. In this video, we describe a near-optimal output-sensitive algorithm for computing the Reeb graph of scalar functions defined over manifolds. Key to the simplicity and efficiency of the algorithm is an alternate definition of the Reeb graph that considers equivalence classes of level sets instead of individual level sets. The algorithm works in two steps. The first step locates all critical points of the function in the domain. Arcs in the Reeb graph are computed in the second step using a simple search procedure that works on a small subset of the domain that corresponds to a pair of critical points. The algorithm is also able to handle non-manifold domains.
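The "locate all critical points" step has a standard piecewise-linear formulation that the sketch below illustrates (our own code and names, not the authors' implementation): a vertex of a triangulated surface is classified by the number of connected components of the lower and upper parts of its link.

```python
# A minimal sketch: classifies vertices of a piecewise-linear scalar function
# on a closed triangulated surface as minimum, maximum, saddle, or regular.

from collections import defaultdict

def classify_vertices(f, triangles):
    """f[v]: scalar value at vertex v (assumed distinct); triangles: 3-tuples
    of vertex ids. Returns dict v -> classification string."""
    link_edges = defaultdict(list)           # v -> link edges (a, b) opposite v
    for a, b, c in triangles:
        link_edges[a].append((b, c))
        link_edges[b].append((a, c))
        link_edges[c].append((a, b))

    def components(vertices, edges):
        parent = {u: u for u in vertices}
        def find(u):
            while parent[u] != u:
                parent[u] = parent[parent[u]]
                u = parent[u]
            return u
        for a, b in edges:
            if a in parent and b in parent:
                parent[find(a)] = find(b)
        return len({find(u) for u in vertices})

    labels = {}
    for v, edges in link_edges.items():
        link_verts = {u for e in edges for u in e}
        lower = {u for u in link_verts if f[u] < f[v]}
        upper = link_verts - lower
        if not lower:
            labels[v] = "minimum"
        elif not upper:
            labels[v] = "maximum"
        elif components(lower, edges) == 1 and components(upper, edges) == 1:
            labels[v] = "regular"
        else:
            labels[v] = "saddle"
    return labels

# Boundary of a tetrahedron (a sphere): one minimum, one maximum, no saddles.
f = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
tris = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(classify_vertices(f, tris))
```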
Abstract:
An experimental study to ascertain the role of an external electron donor in methylene blue sensitized dichromated gelatin (MBDCG) holograms has been carried out. The required volume holographic transmission gratings in MBDCG were recorded using 633-nm light from a He-Ne laser. Three well-known electron donors, namely N,N-dimethylformamide (DMF), ethylenediaminetetraacetic acid (EDTA), and triethanolamine (TEA), were used in this study. The variation of diffraction efficiency (η) as a function of light exposure (E) and concentration (C) of the electron donor under consideration was chosen as the figure of merit for judging the role of the external electron donor in MBDCG holograms. A self-consistent analysis of the experimental results was carried out by recalling the various known facts about the photochemistry and the hologram formation in DSDCG and also DCG. The important findings and conclusions are as follows: (i) Each η vs E graph is a bell-shaped curve, and its peak height is influenced in a characteristic manner by the external electron donor used. (ii) High diffraction efficiency/recording speed can be achieved in pure MBDCG holograms. (iii) The diffraction efficiency/recording speed achieved in electron donor sensitized MBDCG holograms did not show any significant improvement over that observed in pure MBDCG holograms. (iv) In electron donor sensitized MBDCG holograms, the electron donor used, depending on its type and concentration, appears to promote the cross-linking of gelatin molecules in a manner that either retains or deteriorates the refractive-index modulation achieved using pure MBDCG.
Abstract:
The k-colouring problem is to colour a given k-colourable graph with k colours. This problem is known to be NP-hard even for fixed k ≥ 3. The best known polynomial time approximation algorithms require n^δ (for a positive constant δ depending on k) colours to colour an arbitrary k-colourable n-vertex graph. The situation is entirely different if we look at the average performance of an algorithm rather than its worst-case performance. It is well known that a k-colourable graph drawn from certain classes of distributions can be k-coloured almost surely in polynomial time. In this paper, we present further results in this direction. We consider k-colourable graphs drawn from the random model in which each allowed edge is chosen independently with probability p(n) after initially partitioning the vertex set into k colour classes. We present polynomial time algorithms of two different types. The first type of algorithm always runs in polynomial time and succeeds almost surely. Algorithms of this type have been proposed before, but our algorithms have provably exponentially small failure probabilities. The second type of algorithm always succeeds and has polynomial running time on average. Such algorithms are more useful and more difficult to obtain than the first type of algorithm. Our algorithms work as long as p(n) ≥ n^(-1+ε), where ε is a constant greater than 1/4.
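The random model described above is easy to sample from. The sketch below is our own illustration (it assigns each vertex to a colour class uniformly at random; the paper's exact partitioning convention may differ): only edges joining different planted classes are "allowed", and each is included independently with probability p.

```python
# A minimal sketch of a planted k-colourable random graph.

import random

def planted_k_colourable_graph(n, k, p, seed=None):
    """Returns (colour, edges): a hidden k-colouring and the sampled edges."""
    rng = random.Random(seed)
    colour = {v: rng.randrange(k) for v in range(n)}   # planted colour classes
    edges = [
        (u, v)
        for u in range(n)
        for v in range(u + 1, n)
        if colour[u] != colour[v] and rng.random() < p
    ]
    return colour, edges

# Example: n = 100 vertices, k = 3 classes, p = n^(-1 + 0.3).
n, k = 100, 3
p = n ** (-1 + 0.3)
colour, edges = planted_k_colourable_graph(n, k, p, seed=0)
print(len(edges), "edges; the planted colouring is proper by construction")
```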
Abstract:
The study presents a theory of utility models based on aspiration levels, as well as the application of this theory to the planning of timber flow economics. The first part of the study comprises a derivation of the utility-theoretic basis for the application of aspiration levels. Two basic models are dealt with: the additive and the multiplicative. Applied here solely to partial utility functions, aspiration and reservation levels are interpreted as defining piecewise linear functions. The standpoint of the decision-maker's choices is emphasized by the use of indifference curves. The second part of the study introduces a model for the management of timber flows. The model is based on the assumption that the decision-maker is willing to specify a shape of income flow which differs from that of the capital-theoretic optimum. The utility model comprises four aspiration-based compound utility functions. The theory and the flow model are tested numerically by computations covering three forest holdings. The results show that the additive model is sensitive even to slight changes in relative importances and aspiration levels. This applies particularly to nearly linear production possibility boundaries of monetary variables. The multiplicative model, on the other hand, is stable because it generates strictly convex indifference curves. Due to a higher marginal rate of substitution, the multiplicative model implies a stronger dependence on forest management than the additive function. For income trajectory optimization, a method utilizing an income trajectory index is more efficient than one based on the use of aspiration levels per management period. Smooth trajectories can be attained by squaring the deviations of the feasible trajectories from the desired one.
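To make the contrast between the two aggregation forms concrete, here is a small numerical sketch (ours, not the study's model): a piecewise-linear partial utility pinned to a reservation level and an aspiration level, combined either additively or as a weighted product (a Cobb-Douglas-style stand-in for the multiplicative form, which may differ from the study's exact specification). The criteria, levels and weights in the example are purely illustrative.

```python
# A minimal sketch: aspiration-based partial utilities and two ways of
# aggregating them with importance weights.

def partial_utility(x, reservation, aspiration):
    """0 below the reservation level, 1 above the aspiration level,
    linear in between."""
    if x <= reservation:
        return 0.0
    if x >= aspiration:
        return 1.0
    return (x - reservation) / (aspiration - reservation)

def additive_utility(values, levels, weights):
    return sum(w * partial_utility(x, r, a)
               for x, (r, a), w in zip(values, levels, weights))

def multiplicative_utility(values, levels, weights):
    u = 1.0
    for x, (r, a), w in zip(values, levels, weights):
        u *= partial_utility(x, r, a) ** w
    return u

# Two hypothetical criteria, e.g. periodic income and ending timber stock.
levels = [(50.0, 100.0), (200.0, 400.0)]   # (reservation, aspiration) per criterion
weights = [0.6, 0.4]
values = [90.0, 250.0]
print(additive_utility(values, levels, weights))        # weighted sum of partials
print(multiplicative_utility(values, levels, weights))  # product penalizes low partials harder
```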