15 results for arithmetic progressions in sumsets

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Publisher:

Abstract:

Algorithms are described for the basic arithmetic operations and square rooting in a negative base. A new operation called polarization, which reverses the sign of a number, allows subtraction to be performed using addition. Some special features of negative-base arithmetic are also mentioned.
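The abstract does not spell out the algorithms themselves; the following is a minimal sketch, in Python, of base -2 (negabinary) arithmetic under the same idea: polarization is a digit-wise negation followed by renormalization, and subtraction then reduces to addition of the polarized operand. Function names and digit conventions are ours, not the paper's.

```python
def to_negabinary(n):
    """Digits of integer n in base -2, least significant first."""
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:              # force the remainder into {0, 1}
            r += 2
            n += 1
        digits.append(r)
    return digits or [0]

def from_negabinary(digits):
    return sum(d * (-2) ** i for i, d in enumerate(digits))

def _normalize(totals):
    """Rewrite per-position totals as digits in {0, 1}, carries in {-1, 0, 1}."""
    out, carry, i = [], 0, 0
    while i < len(totals) or carry:
        t = (totals[i] if i < len(totals) else 0) + carry
        if t in (0, 1):
            out.append(t)
            carry = 0
        elif t in (2, 3):      # t = (t - 2) + (-2) * (-1)
            out.append(t - 2)
            carry = -1
        else:                  # t in (-2, -1): t = (t + 2) + (-2) * 1
            out.append(t + 2)
            carry = 1
        i += 1
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out

def add(a, b):
    n = max(len(a), len(b))
    return _normalize([(a[i] if i < len(a) else 0) +
                       (b[i] if i < len(b) else 0) for i in range(n)])

def polarize(a):
    """Reverse the sign of a number: negate each digit, then renormalize."""
    return _normalize([-d for d in a])

def subtract(a, b):
    return add(a, polarize(b))         # subtraction reduced to addition

assert from_negabinary(subtract(to_negabinary(3), to_negabinary(5))) == -2
```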

Relevance:

100.00%

Publisher:

Abstract:

We propose a new abstract domain for static analysis of executable code. Concrete states are abstracted using circular linear progressions (CLPs). CLPs model computations over a finite word length, as in any real-life processor, and this finite abstraction handles overflow scenarios in a natural and straightforward manner. Abstract transfer functions are defined for a wide range of operations, making the domain easily applicable to code for a wide range of ISAs. CLPs combine the scalability of interval domains with the discreteness of linear congruence domains. We also present a novel, lightweight method for tracking linear equality relations between static objects, which the analysis uses to improve precision. The analysis is efficient, with total space and time overhead quadratic in the number of static objects being tracked.
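A CLP can be pictured as a triple (base, step, count) over W-bit words, denoting the set {base + i*step mod 2^W : 0 <= i <= count}. The sketch below is only a plausible illustration of one transfer function, not the paper's actual definitions: a sound abstract addition that saturates to the full word range when the sum could wrap all the way around.

```python
from dataclasses import dataclass
from math import gcd

W = 32                      # word length (assumed; any width works)
MOD = 1 << W

@dataclass(frozen=True)
class CLP:
    """{ (base + i * step) mod 2^W : 0 <= i <= count }"""
    base: int
    step: int
    count: int

TOP = CLP(0, 1, MOD - 1)    # every W-bit value

def const(c):
    return CLP(c % MOD, 0, 0)

def add(x, y):
    """Sound over-approximation of {a + b mod 2^W : a in x, b in y}."""
    step = gcd(x.step, y.step)
    if step == 0:                        # both operands are single constants
        return const(x.base + y.base)
    span = x.step * x.count + y.step * y.count
    if span >= MOD:                      # the sum could wrap the whole word:
        return TOP                       # saturate rather than lose soundness
    return CLP((x.base + y.base) % MOD, step, span // step)
```

For instance, add(CLP(0, 4, 7), const(1)) is CLP(1, 4, 7), i.e. {1, 5, ..., 29}: an interval domain would also include every value in between, while a plain congruence domain would keep only the residue information.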

Relevance:

100.00%

Publisher:

Abstract:

We show here a 2^{Ω(√d · log N)} size lower bound for homogeneous depth-four arithmetic formulas. That is, we give an explicit family of polynomials of degree d on N variables (with N = d^3 in our case) with 0/1-coefficients such that for any representation of a polynomial f in this family of the form f = Σ_i Π_j Q_{ij}, where the Q_{ij}'s are homogeneous polynomials (recall that a polynomial is said to be homogeneous if all its monomials have the same degree), it must hold that Σ_{i,j} (number of monomials of Q_{ij}) ≥ 2^{Ω(√d · log N)}. The above-mentioned family, which we refer to as the Nisan-Wigderson design-based family of polynomials, is in the complexity class VNP. Our work builds on the recent lower bound results of [1], [2], [3], [4], [5] and yields an improved quantitative bound as compared to the quasi-polynomial lower bound of [6] and the N^{Ω(log log N)} lower bound in the independent work of [7].
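Restated in display form (the same claim as above, set in LaTeX only for readability):

```latex
% For every homogeneous depth-four representation f = \sum_i \prod_j Q_{ij}
% of a polynomial f in the Nisan--Wigderson design-based family
% (degree d, N = d^3 variables), the abstract asserts
\[
  \sum_{i,j} \#\{\text{monomials of } Q_{ij}\}
  \;\geq\; 2^{\Omega(\sqrt{d}\,\log N)} .
\]
```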

Relevance:

30.00%

Publisher:

Abstract:

A dual representation scheme for performing arithmetic modulo an arbitrary integer M is presented. The coding scheme maps each integer N in the range 0 <= N < M into one of two representations, each being identified by its most significant bit. The encoding of numbers is straightforward and the problem of checking for unused combinations is eliminated.
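The abstract does not give the mapping itself. One plausible dual-representation scheme consistent with the description (an illustration only, not necessarily the paper's coding) stores each residue N either as N or as N - M in k-bit two's complement, so the most significant (sign) bit identifies which of the two codes is in use, and every k-bit pattern decodes to some valid residue:

```python
M = 11                       # arbitrary modulus (illustrative)
K = M.bit_length() + 1       # k bits: room for both code ranges

def codes(n):
    """The two representations of n (0 <= n < M): n itself, and n - M
    in k-bit two's complement (which has its MSB set)."""
    assert 0 <= n < M
    return n, (n - M) & ((1 << K) - 1)

def value(code):
    """Decode any k-bit pattern: interpret as two's complement, reduce mod M.
    Decoding never fails, so no unused-combination check is needed."""
    if code & (1 << (K - 1)):            # MSB set -> the 'n - M' form
        code -= 1 << K
    return code % M

def add_mod(a, b):
    """Addition mod M on coded operands (sketch: decode, add, re-encode)."""
    return (value(a) + value(b)) % M     # canonical (MSB = 0) code of the sum

lo, hi = codes(5)            # 5 -> codes 5 and 26 (binary 11010, MSB set)
assert value(lo) == value(hi) == 5
assert add_mod(hi, codes(8)[0]) == 2    # (5 + 8) mod 11
```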

Relevance:

30.00%

Publisher:

Abstract:

A new parallel algorithm for transforming an arithmetic infix expression into a parse tree is presented. The technique is based on a result due to Fischer (1980) which enables construction of the parse tree by appropriately scanning the vector of precedence values associated with the elements of the expression. The algorithm presented here is suitable for execution on a shared-memory SIMD machine with no read/write conflicts permitted. It uses O(n) processors and has a time complexity of O(log₂ n), where n is the expression length. Parallel algorithms for generating code for an SIMD machine are also presented.
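The core construction (shown sequentially below for clarity; the paper performs these scans in parallel) assigns each operator a precedence value boosted by its parenthesis depth, after which the tree can be read off the precedence vector: the root is the rightmost operator of minimal value, and the two sides recurse. Names and the depth-weighting constant are illustrative, not from the paper.

```python
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def precedence_vector(tokens):
    """Precedence of each token: binding power + 10 per enclosing paren
    (None for operands and parentheses). Assumes a well-formed expression."""
    depth, prec = 0, []
    for t in tokens:
        if t == '(':
            depth += 1
            prec.append(None)
        elif t == ')':
            depth -= 1
            prec.append(None)
        elif t in PREC:
            prec.append(PREC[t] + 10 * depth)
        else:                            # operand
            prec.append(None)
    return prec

def build(tokens, prec):
    """Root = rightmost operator of minimal precedence (left-associative);
    recurse on the two sides. Returns (op, left, right) tuples."""
    ops = [i for i, p in enumerate(prec) if p is not None]
    if not ops:
        return [t for t in tokens if t not in '()'][0]   # single operand
    root = max(ops, key=lambda i: (-prec[i], i))
    return (tokens[root],
            build(tokens[:root], prec[:root]),
            build(tokens[root + 1:], prec[root + 1:]))

tokens = ['(', 'a', '+', 'b', ')', '*', 'c']
assert build(tokens, precedence_vector(tokens)) == ('*', ('+', 'a', 'b'), 'c')
```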

Relevance:

30.00%

Publisher:

Abstract:

A symmetric solution X satisfying the matrix equation XA = AᵀX is called a symmetrizer of the matrix A. A general algorithm to compute a matrix symmetrizer is obtained. A new multiple-modulus residue arithmetic, called floating-point modular arithmetic, is described and used in the algorithm to compute an error-free matrix symmetrizer.
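As a point of reference (a floating-point sketch, not the paper's error-free modular algorithm): vectorizing XA = AᵀX gives the linear system (Aᵀ ⊗ I − I ⊗ Aᵀ) vec(X) = 0, and since the solution space is closed under transposition, symmetrizing any nonzero null vector yields a symmetrizer.

```python
import numpy as np
from scipy.linalg import null_space

def symmetrizer(A, tol=1e-9):
    """A symmetric X with X A = A^T X, computed in floating point.
    (The paper instead uses multiple-modulus residue arithmetic to make
    this computation error-free.)"""
    n = A.shape[0]
    I = np.eye(n)
    # vec(XA) = (A^T kron I) vec(X),  vec(A^T X) = (I kron A^T) vec(X)
    K = np.kron(A.T, I) - np.kron(I, A.T)
    for v in null_space(K).T:
        X = v.reshape(n, n, order='F')   # undo the column-major vec
        Xs = (X + X.T) / 2               # solution space is transpose-closed
        if np.linalg.norm(Xs) > tol:
            return Xs
    raise ValueError("only trivial or skew-symmetric solutions found")
```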

Relevance:

30.00%

Publisher:

Abstract:

An algorithm that uses integer arithmetic is suggested. It transforms an m × n matrix to a diagonal form (of the structure of the Smith normal form), then computes a reflexive generalized inverse of the matrix exactly and hence solves a system of linear equations error-free.
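For flavour, here is a much simpler route to the same end (an exact, error-free solve with no rounding), using Python's exact rationals rather than the paper's integer-arithmetic diagonalization; it also assumes a square nonsingular system, whereas the generalized inverse handles the general case.

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b exactly by Gauss-Jordan elimination over the rationals.
    A: nonsingular square matrix of ints; b: list of ints. No rounding
    error can occur because Fraction arithmetic is exact."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]          # pivot to avoid zero
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [v - M[r][col] * w for v, w in zip(M[r], M[col])]
    return [row[-1] for row in M]

assert solve_exact([[2, 1], [1, 3]], [3, 5]) == [Fraction(4, 5), Fraction(7, 5)]
```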

Relevance:

30.00%

Publisher:

Abstract:

Gauss and Fourier have together provided us with the essential techniques for symbolic computation with linear arithmetic constraints over the reals and the rationals. These variable elimination techniques for linear constraints have particular significance in the context of the constraint logic programming languages developed in recent years. Variable elimination in linear equations (Gaussian elimination) is a fundamental technique in computational linear algebra and is therefore quite familiar to most of us. Elimination in linear inequalities (Fourier elimination), on the other hand, is intimately related to polyhedral theory and to aspects of linear programming that are not quite as familiar. In addition, the high complexity of elimination in inequalities has forced the consideration of intricate specializations of Fourier's original method. The intent of this survey article is to acquaint the reader with these connections and developments. The latter part of the article dwells on the thesis that variable elimination in linear constraints over the reals extends quite naturally to constraints over certain discrete domains.
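Fourier's method eliminates a variable from a system of linear inequalities by pairing every constraint that bounds it from below with every constraint that bounds it from above. The sketch below is textbook Fourier-Motzkin elimination (not any of the survey's specialized variants), and it makes visible why the constraint count can grow quadratically per eliminated variable.

```python
def eliminate(constraints, j):
    """Fourier-Motzkin: eliminate variable j from a system of constraints
    a . x <= b, each given as (coeffs, bound). Returns a system over the
    remaining variables with the same projection."""
    lower, upper, rest = [], [], []
    for a, b in constraints:
        if a[j] > 0:
            upper.append((a, b))     # bounds x_j from above
        elif a[j] < 0:
            lower.append((a, b))     # bounds x_j from below
        else:
            rest.append((a, b))      # does not mention x_j
    out = list(rest)
    for al, bl in lower:             # combine every lower/upper pair
        for au, bu in upper:
            s, t = -al[j], au[j]     # positive scale factors
            a = [t * x + s * y for x, y in zip(al, au)]
            assert a[j] == 0         # x_j cancels by construction
            out.append((a, t * bl + s * bu))
    return out
```

Each call can square the number of inequalities; this blow-up across several eliminated variables is exactly what motivates the specializations the survey describes.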

Relevance:

30.00%

Publisher:

Abstract:

Use of dipolar and quadrupolar couplings for quantum information processing (QIP) by nuclear magnetic resonance (NMR) is described. In these cases, instead of the individual spins being qubits, the 2^n energy levels of the spin system can be treated as an n-qubit system. It is demonstrated that QIP in such systems can be carried out using transition-selective pulses in CH₃CN, ¹³CH₃CN, ⁷Li (I = 3/2) and ¹³³Cs (I = 7/2), oriented in liquid crystals, yielding 2- and 3-qubit systems. Creation of pseudopure states and implementation of logic gates and arithmetic operations (half-adder and subtractor) have been carried out in these systems using transition-selective pulses.
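The half-adder on qubits is conventionally the reversible circuit sum = a XOR b, carry = a AND b, i.e. a Toffoli onto an ancilla followed by a CNOT. Below is a classical simulation of that logic on basis states, purely to illustrate the operation being implemented; the experiments realize it with transition-selective pulses on energy levels, not with this gate decomposition.

```python
def cnot(state, c, t):
    """Flip bit t when bit c is 1 (state is a tuple of 0/1 basis labels)."""
    s = list(state)
    s[t] ^= s[c]
    return tuple(s)

def toffoli(state, c1, c2, t):
    """Flip bit t when bits c1 and c2 are both 1."""
    s = list(state)
    s[t] ^= s[c1] & s[c2]
    return tuple(s)

def half_adder(a, b):
    """|a, b, 0>  ->  |a, sum, carry| with sum = a XOR b, carry = a AND b."""
    state = (a, b, 0)
    state = toffoli(state, 0, 1, 2)   # carry into the ancilla first
    state = cnot(state, 0, 1)         # (b must still be intact here)
    return state[1], state[2]

assert [half_adder(a, b) for a in (0, 1) for b in (0, 1)] == \
       [(0, 0), (1, 0), (1, 0), (0, 1)]
```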

Relevance:

30.00%

Publisher:

Abstract:

This paper describes techniques to estimate the worst-case execution time of executable code on architectures with data caches. The underlying mechanism is Abstract Interpretation, which is used for the dual purposes of tracking address computations and cache behavior. A simultaneous numeric and pointer analysis, using an abstraction for discrete sets of values, computes safe approximations of access addresses, which are then used to predict cache behavior using Must Analysis. A heuristic is also proposed which generates likely worst-case estimates; it can be used in soft real-time systems and also for reasoning about the tightness of the safe estimate. The analysis methods can handle programs with non-affine access patterns, for which conventional Presburger Arithmetic formulations or Cache Miss Equations do not apply. The precision of the estimates is user-controlled and can be traded off against analysis time. Executables are analyzed directly, which, apart from enhancing precision, renders the method language-independent.
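Must Analysis tracks, per cache set, an upper bound on the age of each block that is guaranteed to be cached; any block whose bound stays below the associativity is a guaranteed hit. A minimal sketch for a single LRU set follows (standard Ferdinand-style must analysis, greatly simplified relative to a full WCET tool):

```python
A = 4   # associativity of one cache set (illustrative)

def update(state, block):
    """Abstract LRU access: 'block' gets age 0; only blocks that may have
    been younger than it age by one. state maps block -> max age in 0..A-1."""
    old = state.get(block, A)            # A means "possibly not cached"
    new = {block: 0}
    for b, age in state.items():
        if b != block:
            aged = age + 1 if age < old else age
            if aged < A:                 # ages >= A fall out of the set
                new[b] = aged
    return new

def join(s1, s2):
    """Control-flow join: keep blocks guaranteed cached in BOTH predecessor
    states, at the larger (safer) age bound."""
    return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

def guaranteed_hit(state, block):
    return block in state                # bound < A  ==>  always a hit
```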

Relevance:

30.00%

Publisher:

Abstract:

A unique code (called Hensel's code) is derived for a rational number by truncating its infinite p-adic expansion. The four basic arithmetic algorithms for these codes are described and their application to rational matrix computations is demonstrated by solving a system of linear equations exactly, using the Gaussian elimination procedure.
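Concretely, a finite-segment p-adic (Hensel-style) code for a/b can be taken as a·b⁻¹ mod p^r; the four arithmetic operations are then ordinary arithmetic mod p^r, and the rational is recovered at the end by rational reconstruction. The sketch below follows that standard construction; the helper names are ours, and the truncation/recovery details of the paper's codes may differ.

```python
def encode(num, den, p, r):
    """Code of num/den: num * den^(-1) mod p^r (needs gcd(den, p) == 1)."""
    m = p ** r
    return num * pow(den, -1, m) % m

def decode(code, p, r):
    """Rational reconstruction: the num/den with |num|, den <= sqrt(m/2)
    and num = den * code (mod m), found by stopping the extended
    Euclidean algorithm early."""
    m = p ** r
    bound = int((m / 2) ** 0.5)
    r0, r1, t0, t1 = m, code % m, 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    num, den = r1, t1
    if den < 0:
        num, den = -num, -den
    return num, den

# Exact arithmetic on codes, e.g. 1/3 + 1/6 = 1/2 with p = 5, r = 6:
p, r = 5, 6
c = (encode(1, 3, p, r) + encode(1, 6, p, r)) % p ** r
assert decode(c, p, r) == (1, 2)
```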

Relevance:

30.00%

Publisher:

Abstract:

The questions that one should answer in engineering computations (deterministic, probabilistic/randomized, as well as heuristic) are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. The absolutely error-free quantities, as well as the completely errorless computations done in a natural process, can never be captured by any means at our disposal. While the computations in nature and natural processes, including the input real quantities, are exact, the computations that we do on a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis rather than of assumption, is not less than 0.005 per cent. By error we mean here relative error bounds. The fact that the exact error is never known, under any circumstances or in any context, implies that the term error denotes nothing but error bounds. Further, in engineering computations it is the relative error, or equivalently the relative error bounds (and not the absolute error), that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do get results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
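One limitation of interval arithmetic alluded to above is the dependency problem: replacing each operation with its interval counterpart can badly overestimate the true range of an expression. A tiny illustration (ours, not from the talk):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

x = Interval(1.0, 2.0)
# The true range of x - x is {0}, but interval arithmetic cannot see that
# both operands are the same variable, so it returns a pure overestimate:
print(x - x)        # Interval(lo=-1.0, hi=1.0)
```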

Relevance:

30.00%

Publisher:

Abstract:

The introduction of processor-based instruments in power systems is resulting in rapid growth of the volume of measured data. The present practice in most utilities is to store only some of the important data in a retrievable fashion, and only for a limited period; subsequently even these data are either deleted or moved to backup devices. The investigations presented here explore the application of lossless data compression techniques for archiving all the operational data, so that they can be put to more effective use. Four arithmetic coding methods, suitably modified for handling power-system steady-state operational data, are proposed. Their performance is evaluated using actual data pertaining to the Southern Regional Grid of India.
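For orientation, the core of any arithmetic coder is interval narrowing: each symbol shrinks the current interval in proportion to its probability, so frequent symbols cost few bits. Below is a toy float-precision encoder/decoder over a fixed model, illustrative only; the paper's four methods are integer-precision production variants adapted to power-system data.

```python
def arith_encode(symbols, model):
    """model: dict symbol -> (cum_lo, cum_hi), ranges partitioning [0, 1).
    Returns any float inside the final interval (float precision limits
    this toy to short messages)."""
    lo, hi = 0.0, 1.0
    for s in symbols:
        c_lo, c_hi = model[s]
        lo, hi = lo + (hi - lo) * c_lo, lo + (hi - lo) * c_hi
    return (lo + hi) / 2

def arith_decode(code, n, model):
    out, lo, hi = [], 0.0, 1.0
    for _ in range(n):
        x = (code - lo) / (hi - lo)          # position within the interval
        s, (c_lo, c_hi) = next((s, r) for s, r in model.items()
                               if r[0] <= x < r[1])
        out.append(s)
        lo, hi = lo + (hi - lo) * c_lo, lo + (hi - lo) * c_hi
    return out

# 'a' is the frequent symbol, so runs of 'a' narrow the interval slowly:
model = {'a': (0.0, 0.8), 'b': (0.8, 1.0)}
msg = list('aaabaa')
assert arith_decode(arith_encode(msg, model), len(msg), model) == msg
```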

Relevance:

30.00%

Publisher:

Abstract:

Using the spectral multiplicities of the standard torus, we endow the Laplace eigenspaces with Gaussian probability measures. This induces a notion of random Gaussian Laplace eigenfunctions on the torus ("arithmetic random waves"). We study the distribution of the nodal length of random eigenfunctions for large eigenvalues; our primary result is that the asymptotics of the variance is nonuniversal. Our result is intimately related to the arithmetic of lattice points lying on a circle with radius corresponding to the energy.
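In the usual formulation in this literature (the notation below is ours), the arithmetic random wave of energy 4π²n on the torus T² = R²/Z² is the random eigenfunction

```latex
% E_n = {lambda in Z^2 : |lambda|^2 = n} is the set of lattice points on the
% circle of radius sqrt(n); N_n = |E_n| is the spectral multiplicity.
\[
  f_n(x) \;=\; \frac{1}{\sqrt{N_n}} \sum_{\lambda \in E_n}
      a_\lambda \, e^{2\pi i \langle \lambda, x \rangle},
\]
% with standard complex Gaussian coefficients a_lambda, independent save for
% a_{-lambda} = \overline{a_\lambda} so that f_n is real-valued; the nodal
% length studied above is the length of the zero set f_n^{-1}(0).
```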