949 results for Polynomial algebra


Relevance:

10.00%

Publisher:

Abstract:

We present a simple proof of Toda's result (Toda (1989), in "Proceedings, 30th Annual IEEE Symposium on Foundations of Computer Science," pp. 514-519), which states that ⊕P is hard for the Polynomial Hierarchy under randomized reductions. Our approach is circuit-based in the sense that we start with uniform circuit definitions of the Polynomial Hierarchy and apply the Valiant-Vazirani lemma to these circuits (Valiant and Vazirani (1986), Theoret. Comput. Sci. 47, 85-93).
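For reference, here is a standard statement of the Valiant-Vazirani lemma invoked above; the formulation and notation are ours, not the paper's. There is a randomized polynomial-time reduction mapping a Boolean formula phi on n variables to a formula phi' such that

```latex
\varphi \notin \mathrm{SAT} \;\Longrightarrow\; \varphi' \notin \mathrm{SAT},
\qquad
\varphi \in \mathrm{SAT} \;\Longrightarrow\;
\Pr\bigl[\varphi' \text{ has exactly one satisfying assignment}\bigr] \;\ge\; \tfrac{1}{4n}.
```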

Relevance:

10.00%

Publisher:

Abstract:

We study the problem of finding a set of constraints of minimum cardinality which, when relaxed in an infeasible linear program, makes it feasible. We show that the problem is NP-hard even when the constraint matrix is totally unimodular, and prove polynomial-time solvability when the constraint matrix and the right-hand side together form a totally unimodular matrix.
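To make the problem concrete, here is a minimal big-M integer-programming sketch of it in Python using PuLP. This illustrates the problem statement only, not the paper's method; the toy instance, the variable names, and the value of M are our own choices:

```python
# Sketch: pick a minimum-cardinality set of constraints to relax so that an
# infeasible system A x <= b becomes feasible (big-M model). Requires PuLP.
import pulp

A = [[1, 0], [-1, 0], [0, 1], [0, -1], [1, 1]]   # toy constraint matrix
b = [1, -2, 1, -2, 10]                           # x1 <= 1 and x1 >= 2 conflict (same for x2)
M = 1e4                                          # big-M; must dominate any violation

prob = pulp.LpProblem("min_constraints_to_relax", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{j}") for j in range(2)]
y = [pulp.LpVariable(f"y{i}", cat="Binary") for i in range(len(A))]

prob += pulp.lpSum(y)                            # relax as few constraints as possible
for i, (row, bi) in enumerate(zip(A, b)):
    # If y[i] = 1, constraint i is effectively switched off.
    prob += pulp.lpSum(a * xj for a, xj in zip(row, x)) <= bi + M * y[i]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("constraints relaxed:", [i for i in range(len(A)) if y[i].value() == 1])
```

On this instance the optimum relaxes two constraints, one from each conflicting pair.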

Relevance:

10.00%

Publisher:

Abstract:

The paper conceptualizes parsing with Constraint Grammar in a new way, as a process involving two important representations. One representation contains the local ambiguity, and the other summarizes the properties of the local ambiguity classes. Both representations are manipulated with pure finite-state methods, but their interconnection is an ad hoc application of rational power series. The new interpretation of the parsing system has several practical advantages, including the inward-deterministic way of computing, representing, and recomputing all potential applications of the rules in a sentence.

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of minimizing the total completion time on a single batch processing machine. The set of jobs to be scheduled can be partitioned into a number of families, where all jobs in the same family have the same processing time. The machine can process at most B jobs simultaneously as a batch, and the processing time of a batch is equal to the processing time of the longest job in the batch. We analyze the properties of an optimal schedule and develop a dynamic programming algorithm of polynomial time complexity when the number of job families is fixed. The research is motivated by the problem of scheduling burn-in ovens in the semiconductor industry.
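To give the flavor of such a dynamic program, here is a heavily simplified sketch for the single-family special case, where all jobs share one processing time p and the only decision is the sequence of batch sizes. This is not the paper's general algorithm; the function names and the toy instance are ours:

```python
# Sketch: minimize total completion time on one batch machine for a single
# job family (every job has processing time p), batch capacity B. Batches
# run back to back, so the k-th batch completes at time k * p and every job
# in it finishes then.
from functools import lru_cache

def min_total_completion(n: int, p: int, B: int) -> int:
    @lru_cache(maxsize=None)
    def best(jobs_left: int, batches_done: int) -> int:
        if jobs_left == 0:
            return 0
        finish = (batches_done + 1) * p        # completion time of the next batch
        return min(
            finish * size + best(jobs_left - size, batches_done + 1)
            for size in range(1, min(B, jobs_left) + 1)
        )
    return best(n, 0)

print(min_total_completion(n=7, p=3, B=3))     # batches 3,3,1 -> 3*3 + 3*6 + 1*9 = 36
```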

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the problem of minimizing the number of columns with superdiagonal nonzeroes (viz., spiked columns) in a square, nonsingular linear system of equations which is to be solved by Gaussian elimination. The exact focus is on a class of min-spike heuristics in which the rows and columns of the coefficient matrix are first permuted to block lower-triangular form. Subsequently, the number of spiked columns in each irreducible block and their heights above the diagonal are minimized heuristically. We show that if every column in an irreducible block has exactly two nonzeroes, i.e., is a doubleton, then there is exactly one spiked column. Further, if there is at least one non-doubleton column, there is always an optimal permutation of rows and columns under which none of the doubleton columns are spiked. An analysis of a few benchmark linear programs suggests that singleton and doubleton columns can abound in practice. Hence, it appears that the results of this paper can be practically useful. In the rest of the paper, we develop a polynomial-time min-spike heuristic based on the above results and on a graph-theoretic interpretation of doubleton columns.
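Since a doubleton column is simply a column with exactly two nonzeroes, the kind of tally performed in the benchmark analysis above is straightforward; a minimal sketch, with a toy matrix of our own:

```python
# Sketch: count singleton and doubleton columns of a matrix, the quantities
# the paper's benchmark analysis tallies. The toy matrix is illustrative.
import numpy as np

A = np.array([[1, 1, 0, 2],
              [0, 3, 0, 0],
              [0, 0, 4, 1],
              [5, 0, 0, 0]])

nnz_per_col = np.count_nonzero(A, axis=0)
print("singleton columns:", np.flatnonzero(nnz_per_col == 1))   # column 2
print("doubleton columns:", np.flatnonzero(nnz_per_col == 2))   # columns 0, 1, 3
```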

Relevance:

10.00%

Publisher:

Abstract:

We propose a family of 3D versions of a smooth finite element method (Sunilkumar and Roy 2010), wherein the globally smooth shape functions are derivable through the condition of polynomial reproduction, with tetrahedral B-splines (DMS-splines) or tensor-product forms of triangular B-splines and 1D NURBS bases acting as the kernel functions. While the domain decomposition is accomplished through tetrahedral or triangular prism elements, an additional requirement here is an appropriate generation of knotclouds around the element vertices or corners. The possibility of sensitive dependence of the numerical solutions on the placement of knotclouds is largely arrested by enforcing the condition of polynomial reproduction whilst deriving the shape functions. Nevertheless, given the higher complexity of forming the knotclouds for tetrahedral elements, especially when higher demands are placed on the order of continuity of the shape functions across inter-element boundaries, we presently emphasize an exploration of the triangular prism based formulation in the context of several benchmark problems of interest in linear solid mechanics. In the absence of a more rigorous convergence analysis, the numerical exercises reported herein help establish the method as one of remarkable accuracy and robust performance against numerical ill-conditioning (such as locking of different kinds) vis-a-vis the conventional FEM.

Relevance:

10.00%

Publisher:

Abstract:

Diffuse optical tomographic image reconstruction uses advanced numerical models that are too computationally costly to run in real time. Graphics processing units (GPUs) offer desktop-scale massive parallelization that can accelerate these computations. An open-source GPU-accelerated linear algebra library package is used to compute the most intensive matrix-matrix calculations and matrix decompositions that arise in solving the system of linear equations. These open-source functions were integrated into existing frequency-domain diffuse optical image reconstruction algorithms to evaluate the acceleration capability of the GPU (NVIDIA Tesla C1060) with increasing reconstruction problem sizes. These studies indicate that single-precision computation is sufficient for diffuse optical tomographic image reconstruction. The acceleration per iteration can be up to 40x over traditional CPUs in the case of three-dimensional reconstruction, where the reconstruction problem is more underdetermined, making GPUs attractive in clinical settings. The current limitation of these GPUs is the available onboard memory (4 GB), which restricts the reconstruction to no more than 13,377 optical parameters. (C) 2010 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.3506216]
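As a rough illustration of this kind of offload (CuPy is our stand-in here; the abstract does not name the library it used, and the matrix sizes and regularization weight below are placeholders), a single-precision GPU version of the dominant dense operations in one reconstruction step might look like:

```python
# Sketch: move the heavy dense linear algebra of an iterative reconstruction
# step to the GPU in single precision with CuPy (illustrative stand-in only).
import numpy as np
import cupy as cp

m, n = 2048, 4096                                 # measurements x parameters (underdetermined)
J = cp.asarray(np.random.rand(m, n), dtype=cp.float32)   # Jacobian on the GPU
r = cp.asarray(np.random.rand(m), dtype=cp.float32)      # residual vector
lam = 1e-2                                        # placeholder regularization weight

# Matrix-matrix product and regularized solve: the costly kernels the
# abstract identifies as the targets for GPU acceleration.
H = J @ J.T + lam * cp.eye(m, dtype=cp.float32)   # m x m system (dual form)
dx = J.T @ cp.linalg.solve(H, r)                  # parameter update
print(dx.shape, dx.dtype)                         # (4096,) float32
```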

Relevance:

10.00%

Publisher:

Abstract:

Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). A k-dimensional box is a Cartesian product of closed intervals [a_1, b_1] x [a_2, b_2] x ... x [a_k, b_k]. The boxicity of G, box(G), is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes, i.e. each vertex is mapped to a k-dimensional box and two vertices are adjacent in G if and only if their corresponding boxes intersect. Let P = (S, P) be a poset, where S is the ground set and P is a reflexive, anti-symmetric and transitive binary relation on S. The dimension of P, dim(P), is the minimum integer t such that P can be expressed as the intersection of t total orders. Let G(P) be the underlying comparability graph of P. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset P, box(G(P))/(chi(G(P)) - 1) <= dim(P) <= 2 box(G(P)), where chi(G(P)) is the chromatic number of G(P) and chi(G(P)) != 1. The second result of the paper relates the boxicity of a graph G with a natural partial order associated with its extended double cover, denoted G_c. Let P_c be the natural height-2 poset associated with G_c, obtained by making A the set of minimal elements and B the set of maximal elements. We show that box(G)/2 <= dim(P_c) <= 2 box(G) + 4. These results have some immediate and significant consequences. The upper bound dim(P) <= 2 box(G(P)) allows us to derive hitherto unknown upper bounds for poset dimension. In the other direction, using the already known bounds for partial order dimension we get the following: (1) The boxicity of any graph with maximum degree Delta is O(Delta log^2 Delta), which is an improvement over the best known upper bound of Delta^2 + 2. (2) There exist graphs with boxicity Omega(Delta log Delta). This disproves a conjecture that the boxicity of a graph is O(Delta). (3) There exists no polynomial-time algorithm to approximate the boxicity of a bipartite graph on n vertices within a factor of O(n^(0.5-epsilon)) for any epsilon > 0, unless NP = ZPP.
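For readability, the two bounds above in display form:

```latex
\frac{\mathrm{box}(G(P))}{\chi(G(P)) - 1} \;\le\; \dim(P) \;\le\; 2\,\mathrm{box}(G(P)),
\qquad
\frac{\mathrm{box}(G)}{2} \;\le\; \dim(P_c) \;\le\; 2\,\mathrm{box}(G) + 4 .
```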

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of computing an approximate minimum cycle basis of an undirected non-negative edge-weighted graph G with m edges and n vertices; the extension to directed graphs is also discussed. In this problem, a {0,1} incidence vector is associated with each cycle, and the vector space over F_2 generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, e.g. the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. Although in most such applications any cycle basis can be used, a low weight cycle basis often translates to better performance and/or numerical stability. Despite the fact that the problem can be solved exactly in polynomial time, we design approximation algorithms since the exact algorithms may be too expensive for some practical applications. We present two new algorithms to compute an approximate minimum cycle basis. For any integer k >= 1, we give (2k - 1)-approximation algorithms with expected running time O(k m n^(1+2/k) + m n^((1+1/k)(omega-1))) and deterministic running time O(n^(3+2/k)), respectively. Here omega is the best exponent of matrix multiplication; it is presently known that omega < 2.376. Both algorithms are o(m^omega) for dense graphs. This is the first time that any algorithm which computes sparse cycle bases with a guarantee drops below the Theta(m^omega) bound. We also present a 2-approximation algorithm with expected running time O(m^omega sqrt(n log n)), a linear-time 2-approximation algorithm for planar graphs, and an O(n^3) time 2.42-approximation algorithm for the complete Euclidean graph in the plane.
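For context, any spanning tree already yields a cycle basis, namely the fundamental cycles of the non-tree edges, though with no weight guarantee. The sketch below shows this textbook construction (not one of the paper's approximation algorithms) on a toy graph of our own:

```python
# Sketch: fundamental cycle basis of a connected undirected graph from a BFS
# spanning tree. Each non-tree edge (u, v) closes exactly one cycle: the edge
# plus the tree path between u and v. The basis has m - n + 1 cycles.
from collections import deque

def fundamental_cycle_basis(n, edges):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    parent = {0: None}                       # BFS spanning tree rooted at 0
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                queue.append(w)

    def path_to_root(v):
        path = []
        while v is not None:
            path.append(v)
            v = parent[v]
        return path

    tree_edges = {frozenset((v, p)) for v, p in parent.items() if p is not None}
    basis = []
    for u, v in edges:
        if frozenset((u, v)) not in tree_edges:
            pu, pv = path_to_root(u), path_to_root(v)
            common = set(pu) & set(pv)
            # Trim both paths at the lowest common ancestor, then splice.
            cu = [x for x in pu if x not in common]
            cv = [x for x in pv if x not in common]
            lca = next(x for x in pu if x in common)
            basis.append(cu + [lca] + cv[::-1])
    return basis

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]
print(fundamental_cycle_basis(4, edges))     # [[1, 0, 2], [2, 0, 3]]
```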

Relevance:

10.00%

Publisher:

Abstract:

We study t-analogs of string functions for integrable highest weight representations of the affine Kac-Moody algebra A_1^(1). We obtain closed-form formulas for certain t-string functions of levels 2 and 4. As corollaries, we obtain explicit identities for the corresponding affine Hall-Littlewood functions, as well as higher level generalizations of Cherednik's Macdonald and Macdonald-Mehta constant term identities.

Relevance:

10.00%

Publisher:

Abstract:

The minimum distance of a linear block code is one of the important parameters indicating the error performance of the code. When the code rate is less than 1/2, efficient algorithms are available for finding the minimum distance using the concept of information sets. When the code rate is greater than 1/2, only one information set is available and efficiency suffers. In this paper, we investigate and propose a novel algorithm to find the minimum distance of linear block codes with code rate greater than 1/2. We propose to reverse the roles of the information set and the parity set to obtain, virtually, another information set and thereby improve efficiency. This method is 67.7 times faster than the minimum distance algorithm implemented in the MAGMA Computational Algebra System for an (80, 45) linear block code.
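For contrast, the brute-force baseline enumerates all 2^k - 1 nonzero codewords, which is feasible only for small dimensions and certainly not for an (80, 45) code; a minimal sketch with a standard small generator matrix:

```python
# Sketch: brute-force minimum distance of a binary linear block code from its
# generator matrix, by enumerating all nonzero codewords. Exponential in k,
# so only a baseline; the information-set algorithms the abstract discusses
# exist precisely to avoid this enumeration.
import itertools
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],      # generator of the [7, 4] Hamming code
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

k = G.shape[0]
d = min(
    int(((np.array(msg, dtype=np.uint8) @ G) % 2).sum())
    for msg in itertools.product((0, 1), repeat=k)
    if any(msg)
)
print("minimum distance:", d)              # 3 for the Hamming code
```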

Relevance:

10.00%

Publisher:

Abstract:

We have developed a theory for an electrochemical way of measuring the statistical properties of a nonfractally rough electrode. We obtained the expression for the current transient on a rough electrode, which exhibits three time regions: the short-time and long-time limits and the transition region between them. The expressions for these time ranges are exploited to extract morphological information about the surface roughness. In the short- and long-time regimes, we extract information regarding various morphological features such as the roughness factor, average roughness, curvature, correlation length, dimensionality of roughness, and a polynomial approximation for the correlation function. The formulas for the surface structure factors (the measure of surface roughness) of rough surfaces in terms of measured reversible and diffusion-limited current transients are also obtained. Finally, we explore the feasibility of making such measurements.

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of matching people to items, where each person ranks a subset of items in an order of preference, possibly involving ties. There are several notions of optimality about how to best match a person to an item; in particular, popularity is a natural and appealing notion of optimality. A matching M* is popular if there is no matching M such that the number of people who prefer M to M* exceeds the number who prefer M* to M. However, popular matchings do not always provide an answer to the problem of determining an optimal matching, since there are simple instances that do not admit popular matchings. This motivates the following extension of the popular matchings problem: Given a graph G = (A ∪ B, E), where A is the set of people and B is the set of items, and a list <c_1, ..., c_|B|> denoting upper bounds on the number of copies of each item, does there exist <x_1, ..., x_|B|> such that for each i, having x_i copies of the i-th item, where 1 <= x_i <= c_i, enables the resulting graph to admit a popular matching? In this paper we show that the above problem is NP-hard. We show that the problem is NP-hard even when each c_i is 1 or 2. We give a polynomial-time algorithm for a variant of the above problem in which the total increase in copies is bounded by an integer k. (C) 2011 Elsevier B.V. All rights reserved.
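To make the popularity criterion concrete, here is a small sketch (the definition only, not the paper's algorithm; the preference data are our own) of the head-to-head vote between two matchings:

```python
# Sketch: popularity is decided by a head-to-head vote of the people.
# prefs[a] lists items in a's order of preference (ties omitted for
# simplicity); a matching maps each person to an item or to None.
def vote(prefs, M1, M2):
    """Positive result: M1 more popular; negative: M2; zero: a tie."""
    score = 0
    for person, ranking in prefs.items():
        i1, i2 = M1.get(person), M2.get(person)
        r1 = ranking.index(i1) if i1 in ranking else len(ranking)  # unmatched = worst
        r2 = ranking.index(i2) if i2 in ranking else len(ranking)
        if r1 < r2:
            score += 1          # this person prefers their item under M1
        elif r2 < r1:
            score -= 1          # this person prefers their item under M2
    return score

prefs = {"a1": ["i1", "i2"], "a2": ["i1", "i2"], "a3": ["i1"]}
M1 = {"a1": "i1", "a2": "i2", "a3": None}
M2 = {"a1": "i2", "a2": "i1", "a3": None}
print(vote(prefs, M1, M2))      # 0: neither matching beats the other
```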

Relevance:

10.00%

Publisher:

Abstract:

For an n_t transmit, n_r receive antenna system (n_t x n_r system), a full-rate space-time block code (STBC) transmits min(n_t, n_r) complex symbols per channel use. In this paper, a scheme to obtain a full-rate STBC for 4 transmit antennas and any n_r, with reduced ML-decoding complexity, is presented. The weight matrices of the proposed STBC are obtained from the unitary matrix representations of a Clifford algebra. By puncturing the symbols of the STBC, full-rate designs can be obtained for n_r < 4. For any value of n_r, the proposed design offers the least ML-decoding complexity among known codes. The proposed design is comparable in error performance to the well-known Perfect code for 4 transmit antennas while offering lower ML-decoding complexity. Further, when n_r < 4, the proposed design has a higher ergodic capacity than the punctured Perfect code. Simulation results that corroborate these claims are presented.

Relevance:

10.00%

Publisher:

Abstract:

Random Access Scan, which addresses individual flip-flops in a design using a memory-array-like row and column decoder architecture, has recently attracted widespread attention due to its potential for lower test application time, test data volume and test power dissipation when compared to traditional Serial Scan. This is because typically only a very limited number of random "care" bits in a test response need be modified to create the next test vector; unlike traditional scan, most flip-flops need not be updated. Test application efficiency can be further improved by organizing the access by word instead of by bit. In this paper we present a new decoder structure that exploits basis vectors and linear algebra to further optimize test application in RAS by performing write operations on multiple bits consecutively. Simulations performed on benchmark circuits show an average 2-3 times speedup in test write time compared to conventional RAS.
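As a sketch of the linear-algebra idea (our own illustration; the abstract does not give the decoder's actual construction): over GF(2), a desired write pattern can be decomposed into an XOR of fixed basis vectors by Gaussian elimination, so the corresponding basis lines can be driven consecutively:

```python
# Sketch: decide which basis vectors to combine (XOR) to produce a target
# bit pattern, via Gaussian elimination over GF(2). The basis below is
# illustrative only.
import numpy as np

def gf2_decompose(basis, target):
    """Find u in {0,1}^r with (u @ basis) % 2 == target, or None."""
    B = np.array(basis, dtype=np.uint8).T % 2      # bits x basis-vectors
    t = np.array(target, dtype=np.uint8) % 2
    aug = np.hstack([B, t[:, None]])               # augmented system B^T u = t
    n_eq, n_var = B.shape
    pivot_col = {}
    r = 0
    for c in range(n_var):
        hits = [i for i in range(r, n_eq) if aug[i, c]]
        if not hits:
            continue
        aug[[r, hits[0]]] = aug[[hits[0], r]]      # move a pivot into place
        for i in range(n_eq):
            if i != r and aug[i, c]:
                aug[i] ^= aug[r]                   # clear column c elsewhere
        pivot_col[r] = c
        r += 1
    if aug[r:, -1].any():
        return None                                # target not in the span
    u = np.zeros(n_var, dtype=np.uint8)
    for i, c in pivot_col.items():
        u[c] = aug[i, -1]
    return u

basis = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]]
print(gf2_decompose(basis, [1, 0, 0, 1]))          # [1 1 1]: XOR of all three rows
```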