954 results for Polynomial functions
Abstract:
In this paper we consider polynomial representability of functions defined over Z(p^n), where p is a prime and n is a positive integer. Our aim is to provide an algorithmic characterization that (i) answers the decision problem: to determine whether a given function over Z(p^n) is polynomially representable or not, and (ii) finds the polynomial if it is polynomially representable. The previous characterizations given by Kempner (Trans. Am. Math. Soc. 22(2):240-266, 1921) and Carlitz (Acta Arith. 9(1):67-78, 1964) are existential in nature and only lead to an exhaustive search method, i.e. an algorithm with complexity exponential in the size of the input. Our characterization leads to an algorithm whose running time is linear in the size of the input. We also extend our result to the multivariate case.
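The exhaustive-search baseline that this abstract contrasts against can be sketched for tiny moduli. A minimal, hedged illustration (the function name and the degree bound are illustrative; the paper's contribution is the linear-time characterization, not this exponential search):

```python
from itertools import product

def is_poly_representable(f_table, n, max_deg=None):
    """Brute-force check: does some polynomial over Z_n induce the map
    given by f_table (f_table[x] = f(x) for x in 0..n-1)?

    Exponential in n -- only usable for tiny moduli. Searching degree
    up to n is a safe (not tight) superset: any polynomial function
    mod n is induced by one of degree < mu(n) <= n, where mu(n) is the
    Kempner function (smallest m with n | m!)."""
    if max_deg is None:
        max_deg = n
    for coeffs in product(range(n), repeat=max_deg + 1):
        # coeffs[k] is the coefficient of x^k
        if all(sum(c * pow(x, k, n) for k, c in enumerate(coeffs)) % n
               == f_table[x] for x in range(n)):
            return coeffs
    return None
```

For example, over Z(4) the map x -> x^2 is representable, while the indicator of x = 2 is not (a polynomial p over Z(4) always satisfies p(x+2) ≡ p(x) mod 2, which that map violates).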
Abstract:
Building on a proof by D. Handelman of a generalisation of an example due to L. Fuchs, we show that the space of real-valued polynomials on a non-empty set X of reals has the Riesz Interpolation Property if and only if X is bounded.
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via fuzzification, usually employing fuzzy membership functions based on B-splines together with algebraic operators for inference. This paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions and unity of support, but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots serving as vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as the Bezier-Bernstein polynomial function of the barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates that form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed, followed by numerical examples that demonstrate the effectiveness of this new data-based modelling approach.
Abstract:
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. The approach is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as non-negativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, together with the additional advantages of structural parsimony and Delaunay input space partitioning, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
Abstract:
Given a function from Z(n) to itself, one can determine its polynomial representability by using the Kempner function. In this paper we present an alternative characterization of polynomial functions over Z(n) by constructing a generating set for the Z(n)-module of polynomial functions. This characterization results in an algorithm that is faster on average in deciding polynomial representability. We also extend the characterization to functions in several variables. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
High-speed videokeratoscopy is an emerging technique that enables the study of corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique produces a very large amount of digital data, for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have traditionally been used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error and the point spread function cross-correlation. The parameters of the approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
Abstract:
In this paper, the governing equations for the free vibration of a non-homogeneous rotating Timoshenko beam of uniform cross-section are studied using an inverse problem approach, for both cantilever and pinned-free boundary conditions. The bending displacement and the rotation due to bending are assumed to be simple polynomials which satisfy all four boundary conditions. It is found that for certain polynomial variations of the material mass density, elastic modulus and shear modulus along the length of the beam, the assumed polynomials serve as simple closed-form solutions to the coupled second-order governing differential equations with variable coefficients. It is found that there are an infinite number of analytical polynomial functions possible for the material mass density, shear modulus and elastic modulus distributions, which share the same frequency and mode shape for a particular mode. The derived results are intended to serve as benchmark solutions for testing approximate or numerical methods used for the vibration analysis of rotating non-homogeneous Timoshenko beams.
Abstract:
Maintenance activities in a large-scale engineering system are usually scheduled according to the lifetimes of various components in order to ensure the overall reliability of the system. Lifetimes of components can be deduced from the corresponding probability distributions, with parameters estimated from past failure data. Since failure data for the components are not always readily available, engineers often have to rely on only the primitive information provided by the manufacturers, such as the mean and standard deviation of the lifetime, to plan maintenance activities. In this paper, the moment-based piecewise polynomial model (MPPM) is proposed to estimate the parameters of the reliability probability distribution of a product when only the mean and standard deviation of the product lifetime are known. This method employs a group of polynomial functions to estimate the two parameters of the Weibull distribution, exploiting the mathematical relationship between the shape parameter of the two-parameter Weibull distribution and the ratio of the mean to the standard deviation. Tests are carried out to evaluate the validity and accuracy of the proposed method, with discussion of its suitability for application. The proposed method is particularly useful for reliability-critical systems, such as railway and power systems, in which maintenance activities are scheduled according to the expected lifetimes of the system components.
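The underlying moment relation can be sketched directly: for a two-parameter Weibull, the ratio of standard deviation to mean depends only on the shape parameter k, via (std/mean)^2 + 1 = Γ(1+2/k)/Γ(1+1/k)^2. A hedged, stdlib-only sketch that solves this relation by bisection (the paper instead approximates it with piecewise polynomials; the function name and root-finding approach here are illustrative, not the MPPM):

```python
from math import gamma

def weibull_from_moments(mean, std, k_lo=0.1, k_hi=50.0, tol=1e-10):
    """Recover Weibull (shape k, scale lam) from mean and std dev by
    moment matching. g(k) = gamma(1+2/k)/gamma(1+1/k)^2 is decreasing
    in k, so the shape is found by bisection on the CV relation."""
    target = (std / mean) ** 2 + 1.0
    g = lambda k: gamma(1.0 + 2.0 / k) / gamma(1.0 + 1.0 / k) ** 2
    lo, hi = k_lo, k_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > target:   # CV too large -> need bigger shape
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = mean / gamma(1.0 + 1.0 / k)  # scale from the mean formula
    return k, lam
```

For instance, mean = std recovers the exponential case k = 1 with lam equal to the mean.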
Abstract:
The irradiance profile around the receiver tube (RT) of a parabolic trough collector (PTC) is a key outcome of optical performance that affects the overall energy performance of the collector. Thermal performance evaluation of the RT relies on the appropriate determination of the irradiance profile. This article explains a technique in which empirical equations were developed to calculate the local irradiance as a function of the angular location on the RT of a standard PTC using a rigorously verified Monte Carlo ray tracing model. A large range of test conditions, including daily normal insolation, spectral selective coatings and glass envelope conditions, was selected from the data published by Dudley et al. [1]. The R2 values of the equations are excellent, varying between 0.9857 and 0.9999. Therefore, these equations can be used confidently to produce realistic non-uniform boundary heat flux profiles around the RT at normal incidence for conjugate heat transfer analyses of the collector. The required inputs to the equations are the daily normal insolation and the spectral selective properties of the collector components. Since the equations are polynomial functions, data processing software can be employed to calculate the flux profile very easily and quickly. The ultimate goal of this research is to help make concentrating solar power technology cost-competitive with conventional energy technology by facilitating its ongoing research.
Abstract:
Thin plate spline finite element methods are used to fit a surface to an irregularly scattered dataset [S. Roberts, M. Hegland, and I. Altas. Approximation of a Thin Plate Spline Smoother using Continuous Piecewise Polynomial Functions. SIAM, 1:208--234, 2003]. The computational bottleneck for this algorithm is the solution of large, ill-conditioned systems of linear equations at each step of a generalised cross validation algorithm. Preconditioning techniques are investigated to accelerate the convergence of the solution of these systems using Krylov subspace methods. The preconditioners under consideration are block diagonal, block triangular and constraint preconditioners [M. Benzi, G. H. Golub, and J. Liesen. Numerical solution of saddle point problems. Acta Numer., 14:1--137, 2005]. The effectiveness of each of these preconditioners is examined on a sample dataset taken from a known surface. From our numerical investigation, constraint preconditioners appear to provide improved convergence for this surface fitting problem compared to block preconditioners.
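The general idea of accelerating a Krylov solve with a preconditioner can be illustrated on a toy symmetric positive-definite system using preconditioned conjugate gradients. This is a deliberate simplification: the abstract's saddle-point systems call for block diagonal, block triangular or constraint preconditioners with solvers such as MINRES or GMRES, not the plain Jacobi-preconditioned CG sketched here.

```python
def matvec(A, x):
    """Dense matrix-vector product for list-of-lists A."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def pcg(A, b, M_inv, tol=1e-12, maxiter=500):
    """Preconditioned conjugate gradients for SPD A, with the
    preconditioner applied as multiplication by M_inv."""
    n = len(b)
    x = [0.0] * n
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]  # residual b - Ax
    z = matvec(M_inv, r)                                # preconditioned residual
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxiter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = matvec(M_inv, r)
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x
```

The quality of M_inv as an approximation to the inverse of A is what governs the convergence rate; the abstract's comparison of block and constraint preconditioners is exactly a study of this trade-off for its specific system.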
Abstract:
The foliage of a plant performs vital functions. As such, leaf models need to be developed for modelling the plant architecture from a set of scattered data captured using a scanning device. The leaf model can be used for purely visual purposes or as part of a further model, such as a fluid movement model or biological process. For these reasons, an accurate mathematical representation of the surface and boundary is required. This paper compares three approaches for fitting a continuously differentiable surface through a set of scanned data points from a leaf surface, with a technique already used for reconstructing leaf surfaces. The techniques which will be considered are discrete smoothing D2-splines [R. Arcangeli, M. C. Lopez de Silanes, and J. J. Torrens, Multidimensional Minimising Splines, Springer, 2004], the thin plate spline finite element smoother [S. Roberts, M. Hegland, and I. Altas, Approximation of a Thin Plate Spline Smoother using Continuous Piecewise Polynomial Functions, SIAM, 1 (2003), pp. 208--234] and the radial basis function Clough-Tocher method [M. Oqielat, I. Turner, and J. Belward, A hybrid Clough-Tocher method for surface fitting with application to leaf data, Appl. Math. Modelling, 33 (2009), pp. 2582--2595]. Numerical results show that discrete smoothing D2-splines produce reconstructed leaf surfaces which better represent the original physical leaf.
Abstract:
In this paper we propose a novel family of kernels for multivariate time-series classification problems. Each time-series is approximated by a linear combination of piecewise polynomial functions in a Reproducing Kernel Hilbert Space by a novel kernel interpolation technique. Using the associated kernel function a large margin classification formulation is proposed which can discriminate between two classes. The formulation leads to kernels, between two multivariate time-series, which can be efficiently computed. The kernels have been successfully applied to writer independent handwritten character recognition.
Abstract:
In this paper, we seek to find nonrotating beams that are isospectral to a given tapered rotating beam. Isospectral structures have identical natural frequencies. We assume the mass and stiffness distributions of the tapered rotating beam to be polynomial functions of the span. Such polynomial variations of mass and stiffness are typical of helicopter and wind turbine blades. We use the Barcilon-Gottlieb transformation to convert the fourth-order governing equations of the rotating and the nonrotating beams from the (x, Y) frame of reference to a hypothetical (z, U) frame of reference. If the coefficients of both equations in the (z, U) frame match each other, then the nonrotating beam is isospectral to the given rotating beam. The conditions on matching the coefficients lead to a pair of coupled differential equations. We solve these coupled differential equations numerically using the fourth-order Runge-Kutta scheme. We also verify that the frequencies (given in the literature) of standard tapered rotating beams are the frequencies (obtained using finite-element analysis) of the isospectral nonrotating beams. Finally, we present an example of beams having a rectangular cross-section to show the application of our analysis. Since experimental determination of rotating beam frequencies is a difficult task, experiments can instead be conducted easily on these isospectral nonrotating beams to calculate the frequencies of the rotating beam.
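The numerical workhorse mentioned in this abstract, the classical fourth-order Runge-Kutta scheme for a coupled first-order system, can be sketched generically as follows (a single-step routine for any system y' = f(t, y); the paper's specific coefficient-matching equations are not reproduced here):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y),
    where y is a list of state variables. Four slope evaluations are
    combined with weights 1, 2, 2, 1 to achieve O(h^4) global accuracy."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
```

As a sanity check, integrating the harmonic oscillator y'' = -y written as the coupled pair (y, y') from y(0) = 0, y'(0) = 1 up to t = pi/2 recovers y = sin(pi/2) = 1 to high accuracy.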
Abstract:
This work employed a clayey, silty, sandy gravel contaminated with a mixture of metals (Cd, Cu, Pb, Ni and Zn) and diesel. The contaminated soil was treated with 5 and 10% dosages of different cementitious binders. The binders include Portland cement, cement-fly ash, cement-slag and lime-slag mixtures. Monolithic leaching from the treated soils was evaluated over a 64-day period alongside the granular leachability of 49- and 84-day-old samples. Surface wash-off was the predominant leaching mechanism for monolithic samples. In this condition, with data from different binders and curing ages combined, granular leachability as a function of monolithic leaching generally followed degree-4 and degree-6 polynomial functions. The only exception was Cu, which followed the multistage dose-response model. The relationship between the two leaching tests varied with the type of metal, the curing age/residence time of monolithic samples in the leachant, and the binder formulation. The results provide useful design information on the relationship between the leachability of metals from monolithic forms of S/S-treated soils and the ultimate leachability upon the eventual breakdown of the stabilized/solidified soil.
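A degree-4 or degree-6 regression of the kind this abstract describes, relating granular leachability to monolithic leaching, amounts to an ordinary least-squares polynomial fit. A minimal stdlib sketch via the normal equations (the paper's actual fitting procedure and data are not reproduced; this only illustrates the generic technique):

```python
def polyfit(xs, ys, deg):
    """Ordinary least-squares polynomial fit of degree `deg`:
    build the normal equations A c = b with A[i][j] = sum x^(i+j),
    b[i] = sum y*x^i, and solve by Gaussian elimination with
    partial pivoting. Returns coeffs with coeffs[k] multiplying x^k."""
    m = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * m
    for i in range(m - 1, -1, -1):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, m))) / A[i][i]
    return coeffs
```

Note that normal equations square the conditioning of the problem; for higher degrees or wider x-ranges a QR-based solver would be the more robust choice.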