930 results for Irreducible polynomial
Abstract:
Efficient storage of types within a compiler is necessary to avoid large blowups in space during compilation. Recursive types in particular are important to consider, as naive representations of recursive types may be arbitrarily larger than necessary through unfolding. Hash-consing has been used to efficiently store non-recursive types. Deterministic finite automata techniques have been used to efficiently perform various operations on recursive types. We present a new system for storing recursive types that combines hash-consing and deterministic finite automata techniques. The space requirements are linear in the number of distinct types. Both update and lookup operations take polynomial time and linear space, and type equality can be checked in constant time once both types are in the system.
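The hash-consing idea mentioned in this abstract can be sketched minimally. The code below is an illustrative toy (names and structure are ours, not the paper's system, which additionally handles recursive types via automata): structurally equal type terms are interned to one shared object, so equality afterwards is a constant-time identity check.

```python
# Toy hash-consing of (non-recursive) type terms: each distinct type is
# stored once, and equality becomes pointer identity.

class TypeTable:
    def __init__(self):
        self._table = {}

    def intern(self, constructor, *args):
        # args are already-interned child types, so the key is canonical
        key = (constructor, args)
        if key not in self._table:
            self._table[key] = key  # the key tuple itself serves as the node
        return self._table[key]

table = TypeTable()
int_t = table.intern("int")
t1 = table.intern("arrow", int_t, int_t)  # int -> int
t2 = table.intern("arrow", int_t, int_t)
assert t1 is t2  # structurally equal types share one representation
```

Because every distinct type is stored exactly once, storage is linear in the number of distinct types, matching the space claim in the abstract.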
Abstract:
We present a type inference algorithm, in the style of compositional analysis, for the language TRAFFIC—a specification language for flow composition applications proposed in [2]—and prove that this algorithm is correct: the typings it infers are principal typings, and the typings agree with syntax-directed type checking on closed flow specifications. This algorithm is capable of verifying partial flow specifications, which is a significant improvement over the syntax-directed type checking algorithm presented in [3]. We also show that this algorithm runs efficiently, i.e., in low-degree polynomial time.
Abstract:
We present a procedure to infer a typing for an arbitrary λ-term M in an intersection-type system that translates into exactly the call-by-name (resp., call-by-value) evaluation of M. Our framework is the recently developed System E which augments intersection types with expansion variables. The inferred typing for M is obtained by setting up a unification problem involving both type variables and expansion variables, which we solve with a confluent rewrite system. The inference procedure is compositional in the sense that typings for different program components can be inferred in any order, and without knowledge of the definition of other program components. Using expansion variables lets us achieve a compositional inference procedure easily. Termination of the procedure is generally undecidable. The procedure terminates and returns a typing if the input M is normalizing according to call-by-name (resp., call-by-value). The inferred typing is exact in the sense that the exact call-by-name (resp., call-by-value) behaviour of M can be obtained by a (polynomial) transformation of the typing. The inferred typing is also principal in the sense that any other typing that translates the call-by-name (resp., call-by-value) evaluation of M can be obtained from the inferred typing for M using a substitution-based transformation.
Abstract:
There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique, and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start, and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
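The remark that term orders are not unique can be made concrete with a small self-contained example (our own toy, not from the thesis): the same pair of monomials can compare differently under lexicographic and graded-lexicographic order.

```python
# Monomials are exponent tuples in variables (x, y, z): (1, 0, 0) is x,
# (0, 0, 3) is z**3. Two standard term orders:

def lex_key(m):
    return m  # lex: compare exponent tuples left to right

def grlex_key(m):
    return (sum(m), m)  # graded lex: total degree first, lex tie-break

x1 = (1, 0, 0)   # the monomial x
z3 = (0, 0, 3)   # the monomial z**3
print(lex_key(x1) > lex_key(z3))      # True:  lex ranks x above z**3
print(grlex_key(x1) > grlex_key(z3))  # False: grlex ranks z**3 higher
```

The choice of order changes which basis element is "minimal", which is exactly the flexibility the abstract refers to.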
Abstract:
This thesis is concerned with uniformly convergent finite element and finite difference methods for numerically solving singularly perturbed two-point boundary value problems. We examine the following four problems: (i) high order problem of reaction-diffusion type; (ii) high order problem of convection-diffusion type; (iii) second order interior turning point problem; (iv) semilinear reaction-diffusion problem. Firstly, we consider high order problems of reaction-diffusion type and convection-diffusion type. Under suitable hypotheses, the coercivity of the associated bilinear forms is proved and representation results for the solutions of such problems are given. It is shown that, on an equidistant mesh, polynomial schemes cannot achieve a high order of convergence which is uniform in the perturbation parameter. Piecewise polynomial Galerkin finite element methods are then constructed on a Shishkin mesh. High order convergence results, which are uniform in the perturbation parameter, are obtained in various norms. Secondly, we investigate linear second order problems with interior turning points. Piecewise linear Galerkin finite element methods are generated on various piecewise equidistant meshes designed for such problems. These methods are shown to be convergent, uniformly in the singular perturbation parameter, in a weighted energy norm and the usual L2 norm. Finally, we deal with a semilinear reaction-diffusion problem. Asymptotic properties of solutions to this problem are discussed and analysed. Two simple finite difference schemes on Shishkin meshes are applied to the problem. They are proved to be uniformly convergent of second order and fourth order respectively. Existence and uniqueness of a solution to both schemes are investigated. Numerical results for the above methods are presented.
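The Shishkin meshes used throughout this abstract are piecewise-equidistant meshes that condense points into the boundary layers. The sketch below follows the standard textbook construction for a reaction-diffusion problem with layers at both endpoints (parameter names and the choice sigma = 2 are ours, not the thesis's):

```python
import math

def shishkin_mesh(N, epsilon, sigma=2.0):
    """Piecewise-uniform mesh on [0, 1]; N intervals (divisible by 4).

    For reaction-diffusion problems the layers have width O(sqrt(epsilon)),
    so the transition point is tau = min(1/4, sigma*sqrt(epsilon)*ln N):
    a quarter of the intervals go into each layer region.
    """
    tau = min(0.25, sigma * math.sqrt(epsilon) * math.log(N))
    left  = [tau * i / (N // 4) for i in range(N // 4)]                # [0, tau)
    inner = [tau + (1 - 2 * tau) * i / (N // 2) for i in range(N // 2)]  # [tau, 1-tau)
    right = [1 - tau + tau * i / (N // 4) for i in range(N // 4 + 1)]  # [1-tau, 1]
    return left + inner + right

mesh = shishkin_mesh(16, 1e-6)
print(len(mesh))  # 17 points for 16 intervals, clustered near 0 and 1
```

On such a mesh, unlike on an equidistant one, polynomial schemes can achieve convergence uniform in the perturbation parameter, which is the point of the constructions in the abstract.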
Abstract:
The class of all Exponential-Polynomial-Trigonometric (EPT) functions is classical and equal to the Euler-d’Alembert class of solutions of linear differential equations with constant coefficients. The class of non-negative EPT functions defined on [0, ∞) was discussed in Hanzon and Holland (2010), of which EPT probability density functions are an important subclass. EPT functions can be represented as c·e^{Ax}·b, where A is a square matrix, b a column vector and c a row vector; the triple (A, b, c) is the minimal realization of the EPT function. The minimal triple is only unique up to a basis transformation. Here the class of 2-EPT probability density functions on R is defined and shown to be closed under a variety of operations. The class is also generalised to include mixtures with a point mass at zero. This class coincides with the class of probability density functions with rational characteristic functions. It is illustrated that the Variance Gamma density is a 2-EPT density under a parameter restriction. A discrete 2-EPT process is a process which has stochastically independent 2-EPT random variables as increments. It is shown that the distribution of the minimum and maximum of such a process is an EPT density mixed with a point mass at zero. The Laplace transforms of these distributions correspond to the discrete-time Wiener-Hopf factors of the discrete-time 2-EPT process. A distribution of daily log-returns, observed over the period 1931-2011 from a prominent US index, is approximated with a 2-EPT density function. Without the non-negativity condition, it is illustrated how this problem is transformed into a discrete-time rational approximation problem. The rational approximation software RARL2 is used to carry out this approximation. The non-negativity constraint is then imposed via a convex optimisation procedure after the unconstrained approximation.
Sufficient and necessary conditions are derived to characterise infinitely divisible EPT and 2-EPT functions. Infinitely divisible 2-EPT density functions generate 2-EPT Lévy processes. An asset's log-returns can be modelled as a 2-EPT Lévy process. Closed-form pricing formulae are then derived for European Options with specific times to maturity. Formulae for discretely monitored Lookback Options and 2-Period Bermudan Options are also provided. Certain Greeks, including Delta and Gamma, of these options are also computed analytically. MATLAB scripts are provided for calculations involving 2-EPT functions. Numerical option pricing examples illustrate the effectiveness of the 2-EPT approach to financial modelling.
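The realization c·e^{Ax}·b described above can be illustrated with the simplest possible case, a 1×1 triple (our own toy example, not from the thesis): the exponential density λe^{-λx} has the minimal realization A = (-λ), b = (1), c = (λ). A general matrix A would need a matrix exponential (e.g. scipy.linalg.expm); the scalar case keeps the sketch dependency-free.

```python
import math

def ept_1x1(A, b, c, x):
    """Evaluate c * exp(A*x) * b for a scalar (1x1) realization."""
    return c * math.exp(A * x) * b

lam = 2.0
# exponential density lambda*exp(-lambda*x) as an EPT function
val = ept_1x1(-lam, 1.0, lam, 0.5)
print(val)  # 2*exp(-1), i.e. the density at x = 0.5
```

The non-uniqueness up to basis transformation is visible already here: replacing b by s·b and c by c/s for any s ≠ 0 leaves the function unchanged.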
Abstract:
Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from an algorithmic as well as an architectural standpoint. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced as compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is proven to perform better than Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes over several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, based on which the dimension and a bound for the minimum distance are computed.
The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
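The "encoding based on evaluation" mentioned in this abstract can be sketched over a small prime field (a toy illustration of the textbook view of RS codes, not the thesis's hardware design): the message defines a polynomial of degree < k, and the codeword is its evaluations at n distinct field points.

```python
P = 257  # a prime, so integers mod P form a field

def rs_encode(message, n):
    """message: k coefficients in GF(P), lowest degree first; returns n evaluations."""
    def poly_eval(coeffs, x):
        acc = 0
        for c in reversed(coeffs):       # Horner's rule, all arithmetic mod P
            acc = (acc * x + c) % P
        return acc
    return [poly_eval(message, x) for x in range(n)]

# k = 3, n = 7: minimum distance n - k + 1 = 5, so up to 2 errors correctable
codeword = rs_encode([5, 1, 3], 7)  # the polynomial 5 + x + 3x^2
print(codeword)  # [5, 9, 19, 35, 57, 85, 119]
```

Decoding then amounts to recovering the low-degree polynomial from (possibly corrupted) evaluations, which is where interpolation-based and list decoders enter.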
Abstract:
We introduce the notion of flat surfaces of finite type in the 3-sphere, give the algebro-geometric description in terms of spectral curves and polynomial Killing fields, and show that finite type flat surfaces generated by curves on S² with periodic curvature functions are dense in the space of all flat surfaces generated by curves on S² with periodic curvature functions.
Abstract:
Lovelock terms are polynomial scalar densities in the Riemann curvature tensor that have the remarkable property that their Euler-Lagrange derivatives contain derivatives of the metric of an order not higher than 2 (while generic polynomial scalar densities lead to Euler-Lagrange derivatives with derivatives of the metric of order 4). A characteristic feature of Lovelock terms is that their first nonvanishing term in the expansion g_{λμ} = η_{λμ} + h_{λμ} of the metric around flat space is a total derivative. In this paper, we investigate generalized Lovelock terms defined as polynomial scalar densities in the Riemann curvature tensor and its covariant derivatives (of arbitrarily high but finite order) such that their first nonvanishing term in the expansion of the metric around flat space is a total derivative. This is done by reformulating the problem as a BRST cohomological one and by using cohomological tools. We determine all the generalized Lovelock terms. We find, in fact, that the class of nontrivial generalized Lovelock terms contains only the usual ones. Allowing covariant derivatives of the Riemann tensor does not lead to a new structure. Our work provides a novel algebraic understanding of the Lovelock terms in the context of BRST cohomology. © 2005 IOP Publishing Ltd.
Abstract:
We revisit the well-known problem of sorting under partial information: sort a finite set given the outcomes of comparisons between some pairs of elements. The input is a partially ordered set P, and solving the problem amounts to discovering an unknown linear extension of P, using pairwise comparisons. The information-theoretic lower bound on the number of comparisons needed in the worst case is log e(P), the binary logarithm of the number of linear extensions of P. In a breakthrough paper, Jeff Kahn and Jeong Han Kim (STOC 1992) showed that there exists a polynomial-time algorithm for the problem achieving this bound up to a constant factor. Their algorithm invokes the ellipsoid algorithm at each iteration for determining the next comparison, making it impractical. We develop efficient algorithms for sorting under partial information. Like Kahn and Kim, our approach relies on graph entropy. However, our algorithms differ in essential ways from theirs. Rather than resorting to convex programming for computing the entropy, we approximate the entropy, or make sure it is computed only once in a restricted class of graphs, permitting the use of a simpler algorithm. Specifically, we present: an O(n^2) algorithm performing O(log n · log e(P)) comparisons; an O(n^2.5) algorithm performing at most (1+ε) log e(P) + O_ε(n) comparisons; an O(n^2.5) algorithm performing O(log e(P)) comparisons. All our algorithms are simple to implement. © 2010 ACM.
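The lower bound log e(P) quoted in this abstract can be checked by brute force on a tiny poset (our own toy example; e(P) counts the linear extensions of P, and any comparison-based sort needs at least ⌈log₂ e(P)⌉ comparisons in the worst case):

```python
import math
from itertools import permutations

def linear_extensions(n, relations):
    """Count orderings of 0..n-1 consistent with all (a, b) meaning a < b in P."""
    count = 0
    for perm in permutations(range(n)):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count

# P on 4 elements: 0 is below both 1 and 2; element 3 is incomparable to all
e = linear_extensions(4, [(0, 1), (0, 2)])
print(e, math.ceil(math.log2(e)))  # 8 extensions, so at least 3 comparisons
```

Brute-force enumeration is of course exponential; the algorithms in the abstract sidestep computing e(P) exactly by working with graph entropy instead.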
Abstract:
The goal of this work is to analyze three-dimensional dispersive metallic photonic crystals (PCs) and to find a structure that can provide a bandgap and a high cutoff frequency. The determination of the band structure of a PC with dispersive materials is an expensive nonlinear eigenvalue problem; in this work we propose a rational-polynomial method to convert such a nonlinear eigenvalue problem into a linear eigenvalue problem. The spectral element method is extended to rapidly calculate the band structure of three-dimensional PCs consisting of realistic dispersive materials modeled by Drude and Drude-Lorentz models. Exponential convergence is observed in the numerical experiments. Numerical results show that, at the low frequency limit, metallic materials are similar to a perfect electric conductor, where the simulation results tend to be the same as perfect electric conductor PCs. Band structures of the scaffold structure and semi-woodpile structure metallic PCs are investigated. It is found that band structures of semi-woodpile PCs have a very high cutoff frequency as well as a bandgap between the lowest two bands and the higher bands.
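The conversion of a nonlinear eigenvalue problem into a linear one, mentioned in this abstract, can be illustrated with the generic companion linearization of a quadratic eigenvalue problem (this is a standard construction used as a stand-in here, not the paper's specific rational-polynomial method):

```python
import numpy as np

def polyeig_quadratic(K0, K1, K2):
    """Eigenvalues of (K0 + l*K1 + l^2*K2) x = 0 via companion linearization.

    With z = (x, l*x), the quadratic problem becomes the ordinary
    eigenvalue problem l*z = C @ z (assumes K2 invertible).
    """
    n = K0.shape[0]
    K2inv = np.linalg.inv(K2)
    C = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-K2inv @ K0, -K2inv @ K1]])
    return np.linalg.eigvals(C)

# scalar sanity check: l^2 + 3l + 2 = 0 has roots -1 and -2
vals = polyeig_quadratic(np.array([[2.0]]), np.array([[3.0]]),
                         np.array([[1.0]]))
print(sorted(vals.real))  # [-2.0, -1.0]
```

The price of linearization is a doubled problem size, which is why efficient discretizations such as the spectral element method matter for the 3-D band-structure computations described above.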
Abstract:
Capable of three-dimensional imaging of the cornea with micrometer-scale resolution, spectral domain-optical coherence tomography (SDOCT) offers potential advantages over Placido ring and Scheimpflug photography based systems for accurate extraction of quantitative keratometric parameters. In this work, an SDOCT scanning protocol and motion correction algorithm were implemented to minimize the effects of patient motion during data acquisition. Procedures are described for correction of image data artifacts resulting from 3D refraction of SDOCT light in the cornea and from non-idealities of the scanning system geometry, performed as a prerequisite for accurate parameter extraction. Zernike polynomial 3D reconstruction and a recursive half searching algorithm (RHSA) were implemented to extract clinical keratometric parameters including anterior and posterior radii of curvature, central cornea optical power, central corneal thickness, and thickness maps of the cornea. Accuracy and repeatability of the extracted parameters obtained using a commercial 859 nm SDOCT retinal imaging system with a corneal adapter were assessed using a rigid gas permeable (RGP) contact lens as a phantom target. Extraction of these parameters was performed in vivo in 3 patients and compared to commercial Placido topography and Scheimpflug photography systems. The repeatability of SDOCT central corneal power measured in vivo was 0.18 Diopters, and the difference observed between the systems averaged 0.1 Diopters between SDOCT and Scheimpflug photography, and 0.6 Diopters between SDOCT and Placido topography.
Abstract:
We prove that the first complex homology of the Johnson subgroup of the Torelli group T_g is a non-trivial, unipotent T_g-module for all g ≥ 4 and give an explicit presentation of it as a Sym H_1(T_g, C)-module when g ≥ 6. We do this by proving that, for a finitely generated group G satisfying an assumption close to formality, the triviality of the restricted characteristic variety implies that the first homology of its Johnson kernel is a nilpotent module over the corresponding Laurent polynomial ring, isomorphic to the infinitesimal Alexander invariant of the associated graded Lie algebra of G. In this setup, we also obtain a precise nilpotence test. © European Mathematical Society 2014.
Abstract:
We present a theory of hypoellipticity and unique ergodicity for semilinear parabolic stochastic PDEs with "polynomial" nonlinearities and additive noise, considered as abstract evolution equations in some Hilbert space. It is shown that if Hörmander's bracket condition holds at every point of this Hilbert space, then a lower bound on the Malliavin covariance operator μ_t can be obtained. Informally, this bound can be read as "Fix any finite-dimensional projection Π on a subspace of sufficiently regular functions. Then the eigenfunctions of μ_t with small eigenvalues have only a very small component in the image of Π." We also show how to use a priori bounds on the solutions to the equation to obtain good control on the dependency of the bounds on the Malliavin matrix on the initial condition. These bounds are sufficient in many cases to obtain the asymptotic strong Feller property introduced in [HM06]. One of the main novel technical tools is an almost sure bound from below on the size of "Wiener polynomials," where the coefficients are possibly non-adapted stochastic processes satisfying a Lipschitz condition. By exploiting the polynomial structure of the equations, this result can be used to replace Norris' lemma, which is unavailable in the present context. We conclude by showing that the two-dimensional stochastic Navier-Stokes equations and a large class of reaction-diffusion equations fit the framework of our theory.
Abstract:
Within industrial automation systems, three-dimensional (3-D) vision provides very useful feedback information in autonomous operation of various manufacturing equipment (e.g., industrial robots, material handling devices, assembly systems, and machine tools). The hardware performance in contemporary 3-D scanning devices is suitable for online utilization. However, the bottleneck is the lack of real-time algorithms for recognition of geometric primitives (e.g., planes and natural quadrics) from a scanned point cloud. One of the most important and most frequent geometric primitives in various engineering tasks is the plane. In this paper, we propose a new fast one-pass algorithm for recognition (segmentation and fitting) of planar segments from a point cloud. To effectively segment planar regions, we exploit the orthogonality of certain wavelets to polynomial functions, as well as their sensitivity to abrupt changes. After segmentation of planar regions, we estimate the parameters of corresponding planes using standard fitting procedures. For point cloud structuring, a z-buffer algorithm with mesh-triangle representation in barycentric coordinates is employed. The proposed recognition method is tested and experimentally validated in several real-world case studies. © 2005-2012 IEEE.
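The "standard fitting procedures" step mentioned in this abstract is commonly done as a total least-squares plane fit via SVD; the compact sketch below shows that reference method only (the paper's one-pass wavelet-based segmentation is not reproduced here):

```python
import numpy as np

def fit_plane(points):
    """Total least-squares plane fit. points: (N, 3) array.

    Returns (centroid, unit normal): the plane passes through the centroid,
    and the normal is the singular direction of least variance.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]  # last right-singular vector = plane normal

pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
c, n = fit_plane(pts)
print(n)  # normal is ±(0, 0, 1) for points lying in the z = 0 plane
```

Once a planar segment has been identified, this fit supplies the plane parameters (point and normal) in one linear-algebra call, which is why segmentation rather than fitting is the real-time bottleneck.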