959 results for Piecewise Polynomial Approximation
Abstract:
Savitzky-Golay (S-G) filters are finite impulse response lowpass filters obtained when smoothing data using a local least-squares (LS) polynomial approximation. Savitzky and Golay proved in their landmark paper that local LS fitting of polynomials and their evaluation at the mid-point of the approximation interval is equivalent to filtering with a fixed impulse response. The problem that we address here is how to choose a pointwise minimum mean squared error (MMSE) S-G filter length or order for smoothing, while preserving the temporal structure of a time-varying signal. We solve the bias-variance tradeoff involved in the MMSE optimization using Stein's unbiased risk estimator (SURE). We observe that the 3-dB cutoff frequency of the SURE-optimal S-G filter is higher where the signal varies rapidly, and vice versa, essentially enabling us to suitably trade off bias and variance, resulting in near-MMSE performance. At low signal-to-noise ratios (SNRs), the performance of the adaptive filter-length algorithm improves when a regularization term is incorporated in the SURE objective function. We evaluate the algorithm on real-world electrocardiogram (ECG) signals. The results exhibit considerable SNR improvement. Noise performance analysis shows that the proposed algorithms are comparable with, and in some cases better than, some standard denoising techniques in the literature.
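The equivalence that Savitzky and Golay established — a local LS polynomial fit evaluated at the window midpoint equals convolution with one fixed impulse response — can be sketched in a few lines of numpy. This is a minimal illustration only, not the paper's adaptive SURE algorithm; the function name and parameters are ours:

```python
import numpy as np

def savgol_coeffs(window, order):
    # Savitzky-Golay impulse response: least-squares fit of a polynomial
    # of the given order over a centered window of odd length, evaluated
    # at the window midpoint. Row 0 of the pseudoinverse picks out the
    # constant term, i.e. the fitted value at x = 0.
    m = (window - 1) // 2
    x = np.arange(-m, m + 1)
    A = np.vander(x, order + 1, increasing=True)  # columns 1, x, x^2, ...
    h = np.linalg.pinv(A)[0]
    return h  # symmetric for midpoint evaluation, so convolution = correlation

# smoothing is then ordinary convolution with the fixed kernel
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
noisy = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.normal(size=t.size)
smooth = np.convolve(noisy, savgol_coeffs(9, 3), mode="same")
```

A useful sanity property: the kernel preserves polynomials up to the fitted order exactly on interior samples, and its taps sum to 1 (DC gain).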
Abstract:
Local polynomial approximation of data is one approach to signal denoising. Savitzky-Golay (SG) filters are finite-impulse-response kernels which, convolved with the data, yield a polynomial approximation for a chosen set of filter parameters. When the noise follows Gaussian statistics, minimizing the mean-squared error (MSE) between the noisy signal and its polynomial approximation is optimal in the maximum-likelihood (ML) sense, but the MSE criterion is not optimal under non-Gaussian noise. In this paper, we robustify the SG filter for applications involving noise following a heavy-tailed distribution. The optimal filtering criterion is achieved by ℓ1-norm minimization of the error through the iteratively reweighted least-squares (IRLS) technique. Interestingly, at any stage of the iteration we solve a weighted SG filtering problem by minimizing an ℓ2 norm, yet the process converges to the ℓ1-minimized output. The results show consistent improvement over the standard SG filter.
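The ℓ1-via-reweighted-ℓ2 idea can be illustrated with a plain IRLS polynomial fit: each pass is an ordinary weighted least-squares solve, with weights that downweight large residuals. This is a generic sketch under our own naming, not the authors' exact weighted-SG formulation:

```python
import numpy as np

def irls_polyfit(x, y, order, iters=50, eps=1e-6):
    # l1 polynomial regression via iteratively reweighted least squares:
    # each pass solves a weighted l2 problem with weights ~ 1/sqrt(|residual|)
    # applied to the rows, so outlier residuals are progressively
    # downweighted and the iterates approach the l1-optimal coefficients.
    A = np.vander(np.asarray(x, float), order + 1, increasing=True)
    w = np.ones(len(y))
    coef = np.zeros(order + 1)
    for _ in range(iters):
        coef, *_ = np.linalg.lstsq(A * w[:, None], w * y, rcond=None)
        r = y - A @ coef
        w = 1.0 / np.sqrt(np.maximum(np.abs(r), eps))  # eps caps the weights
    return coef
```

On a clean line with one gross outlier, this recovers the underlying slope and intercept where ordinary least squares would be pulled toward the outlier.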
Abstract:
The connections between convexity and submodularity are explored, for purposes of minimizing and learning submodular set functions.
First, we develop a novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm uses an accelerated first order method applied to a smoothed version of its convex extension. The smoothing algorithm is particularly novel as it allows us to treat general concave potentials without needing to construct a piecewise linear approximation as with graph-based techniques.
Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.
Lastly, we approach the problem of learning set functions from an unorthodox perspective---sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine some different function classes under which uniform reconstruction is possible.
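The convex extension underlying the first part is the Lovász extension; evaluating it at a point only requires sorting the coordinates and n calls to the set function. A generic evaluator (our sketch, not the thesis's smoothed accelerated method):

```python
import numpy as np

def lovasz_extension(F, x):
    # Lovász (convex) extension of a set function F (with F(empty set) = 0)
    # at a point x: sort coordinates in decreasing order and accumulate
    # x[i] times the marginal gain F(S_j) - F(S_{j-1}) along the chain of
    # top-j level sets. For submodular F this extension is convex.
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)
    chain, prev, total = set(), 0.0, 0.0
    for i in order:
        chain.add(int(i))
        cur = F(frozenset(chain))
        total += x[i] * (cur - prev)
        prev = cur
    return total

# for a modular (additive) F the extension reduces to the inner product w.x
w = np.array([3.0, -1.0, 2.0])
F_mod = lambda S: sum(w[i] for i in S)
```

On indicator vectors the extension recovers F itself, which is the basic consistency check.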
Abstract:
A high-resolution interrogation technique for fiber Bragg grating (FBG) sensors based on a linear photodiode-array spectrometer is demonstrated. Spline interpolation and a Polynomial Approximation Algorithm (PAA) are applied to the data points acquired by the spectrometer to improve the original PAA-based interrogation method, so that fewer pixels are required to achieve the same resolution as the original. Theoretical analysis indicates that if the FWHM of an FBG covers more than 3 pixels, the resolution of the central wavelength shift is better than 1 pm; when the number of pixels increases to 6, the nominal resolution improves to 0.001 pm. Experimental results show that a Bragg wavelength resolution of ~1 pm is obtained for an FBG with an FWHM of ~0.2 nm using a spectrometer with a pixel resolution of ~70 pm.
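The sub-pixel principle — fit a low-order polynomial through the samples around the reflection peak and read off its vertex — can be sketched with a three-point parabolic estimator. This is a generic illustration, not the paper's exact spline + PAA pipeline; the names and numbers are ours:

```python
import numpy as np

def parabolic_peak(wl, power):
    # sub-pixel peak location: fit a parabola through the three samples
    # around the maximum and return its vertex. Assumes a uniform
    # wavelength grid wl; delta is the vertex offset in pixel units.
    i = int(np.argmax(power))
    y0, y1, y2 = power[i - 1], power[i], power[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return wl[i] + delta * (wl[1] - wl[0])
```

For an exactly quadratic peak the estimator is exact, which is why the recoverable wavelength shift can be far smaller than the pixel pitch.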
Abstract:
The band structure of the Zn1-xCdxSySe1-y quaternary alloy is calculated using the empirical pseudopotential method and the virtual crystal approximation. The alloy is found to be a direct-gap semiconductor for all x and y compositions. A polynomial approximation is obtained for the energy gap as a function of the compositions x and y. Electron and hole effective masses are also calculated along various symmetry axes for different compositions, and the results agree fairly well with available experimental values.
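Fitting the gap as a polynomial in the two composition fractions is a small least-squares problem. The following sketch uses a quadratic ("bowing") form and made-up coefficients purely for illustration; it is not the paper's fitted surface:

```python
import numpy as np

def fit_gap_surface(x, y, eg):
    # least-squares fit Eg(x, y) ~ c0 + c1*x + c2*y + c3*x^2 + c4*y^2 + c5*x*y,
    # a quadratic polynomial in the two composition fractions.
    A = np.column_stack([np.ones_like(x), x, y, x * x, y * y, x * y])
    coef, *_ = np.linalg.lstsq(A, eg, rcond=None)
    return coef

# synthetic sanity check: data generated from made-up coefficients
xg, yg = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 6))
xg, yg = xg.ravel(), yg.ravel()
true = np.array([2.7, -0.9, 0.4, 0.3, -0.2, 0.1])  # illustrative only
eg = (true[0] + true[1] * xg + true[2] * yg
      + true[3] * xg**2 + true[4] * yg**2 + true[5] * xg * yg)
```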
Abstract:
The first thermodynamic dissociation constants of glycine in 5 and 15 mass % glucose + water mixed solvents at five temperatures from 5 to 45 °C have been determined from precise emf measurements of a cell without liquid junction, using hydrogen and Ag-AgCl electrodes, together with a new method of polynomial approximation proposed on the basis of Pitzer's electrolyte solution theory in our previous paper. The results obtained from the two methods agree within experimental error. The standard free energy of transfer of HCl from water to the aqueous mixed solvents has been calculated and the results are discussed.
Abstract:
The emfs of the cell Cu|CuSO4|Hg2SO4-Hg were determined at five temperatures from 278.15 K to 313.15 K. Based on Pitzer's equation, a polynomial approximation for the determination of the standard emf, E_m, was proposed. The values of E_m obtained by the authors' method agree with those obtained from the extended Debye-Hückel equation within experimental error. Compared with the extrapolation result of the extended Debye-Hückel equation, the uncertainty introduced by the choice of the ion-size parameter is avoided. By the...
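The extrapolation step shared by the two emf abstracts above — fit the corrected emf as a polynomial in molality and take the intercept at infinite dilution — can be sketched generically. This is not Pitzer's specific functional form; the function name and data are illustrative:

```python
import numpy as np

def standard_emf_intercept(m, e_corr, order=2):
    # fit the corrected emf as a polynomial in molality m and return the
    # value extrapolated to m = 0, i.e. the standard emf intercept.
    coef = np.polynomial.polynomial.polyfit(m, e_corr, order)
    return coef[0]  # coefficients are ordered low degree to high
```

On synthetic data generated from a quadratic in m, the intercept is recovered to machine precision.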
Abstract:
This thesis is concerned with uniformly convergent finite element and finite difference methods for numerically solving singularly perturbed two-point boundary value problems. We examine the following four problems: (i) high order problem of reaction-diffusion type; (ii) high order problem of convection-diffusion type; (iii) second order interior turning point problem; (iv) semilinear reaction-diffusion problem. Firstly, we consider high order problems of reaction-diffusion type and convection-diffusion type. Under suitable hypotheses, the coercivity of the associated bilinear forms is proved and representation results for the solutions of such problems are given. It is shown that, on an equidistant mesh, polynomial schemes cannot achieve a high order of convergence which is uniform in the perturbation parameter. Piecewise polynomial Galerkin finite element methods are then constructed on a Shishkin mesh. High order convergence results, which are uniform in the perturbation parameter, are obtained in various norms. Secondly, we investigate linear second order problems with interior turning points. Piecewise linear Galerkin finite element methods are generated on various piecewise equidistant meshes designed for such problems. These methods are shown to be convergent, uniformly in the singular perturbation parameter, in a weighted energy norm and the usual L2 norm. Finally, we deal with a semilinear reaction-diffusion problem. Asymptotic properties of solutions to this problem are discussed and analysed. Two simple finite difference schemes on Shishkin meshes are applied to the problem. They are proved to be uniformly convergent of second order and fourth order respectively. Existence and uniqueness of a solution to both schemes are investigated. Numerical results for the above methods are presented.
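A Shishkin mesh is piecewise equidistant with a transition point tied to the layer width; for a reaction-diffusion problem with layers at both endpoints a standard choice is tau = min(1/4, sigma*sqrt(eps)*ln N). A minimal construction (our own sketch of the standard mesh, not the thesis's exact variants):

```python
import numpy as np

def shishkin_mesh(n, eps, sigma=2.0):
    # piecewise-uniform mesh on [0, 1] with n subintervals (n divisible by 4):
    # n/4 cells in each boundary-layer region [0, tau] and [1 - tau, 1],
    # n/2 cells in the interior. tau = min(1/4, sigma*sqrt(eps)*log(n)) is
    # the usual transition point for reaction-diffusion layers.
    tau = min(0.25, sigma * np.sqrt(eps) * np.log(n))
    left = np.linspace(0.0, tau, n // 4 + 1)
    mid = np.linspace(tau, 1.0 - tau, n // 2 + 1)
    right = np.linspace(1.0 - tau, 1.0, n // 4 + 1)
    return np.concatenate([left, mid[1:], right[1:]])
```

For small eps the boundary cells are orders of magnitude finer than the interior cells, which is what makes uniform convergence in the perturbation parameter possible.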
Abstract:
The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule with the makespan that is at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705–731, 1998
Abstract:
This paper considers the problem of processing n jobs in a two-machine non-preemptive open shop to minimize the makespan, i.e., the maximum completion time. One of the machines is assumed to be non-bottleneck. It is shown that, unlike its flow shop counterpart, the problem is NP-hard in the ordinary sense. On the other hand, the problem is shown to be solvable by a dynamic programming algorithm that requires pseudopolynomial time. The latter algorithm can be converted into a fully polynomial approximation scheme. An O(n log n) approximation algorithm is also designed which finds a schedule with makespan at most 5/4 times the optimal value, and this bound is tight.
Abstract:
In this article we describe recent progress on the design, analysis and implementation of hybrid numerical-asymptotic boundary integral methods for boundary value problems for the Helmholtz equation that model time harmonic acoustic wave scattering in domains exterior to impenetrable obstacles. These hybrid methods combine conventional piecewise polynomial approximations with high-frequency asymptotics to build basis functions suitable for representing the oscillatory solutions. They have the potential to solve scattering problems accurately in a computation time that is (almost) independent of frequency and this has been realized for many model problems. The design and analysis of this class of methods requires new results on the analysis and numerical analysis of highly oscillatory boundary integral operators and on the high-frequency asymptotics of scattering problems. The implementation requires the development of appropriate quadrature rules for highly oscillatory integrals. This article contains a historical account of the development of this currently very active field, a detailed account of recent progress and, in addition, a number of original research results on the design, analysis and implementation of these methods.
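The quadrature challenge mentioned above — integrals with an oscillatory e^{ikx} factor, where standard rules need O(k) points — is typically met with Filon-type rules: interpolate the smooth factor by a low-order polynomial and integrate its product with the oscillatory kernel exactly. A minimal linear-interpolation version (our sketch, not a production HNA quadrature):

```python
import numpy as np

def filon_linear(f, a, b, k, panels=20):
    # Filon-type rule for I = integral of f(x) * e^{ikx} over [a, b]:
    # f is interpolated linearly on each panel and the product with
    # e^{ikx} is integrated in closed form, so accuracy does not
    # deteriorate as the frequency k grows.
    x = np.linspace(a, b, panels + 1)
    total = 0.0 + 0.0j
    for x0, x1 in zip(x[:-1], x[1:]):
        f0, f1 = f(x0), f(x1)
        s = (f1 - f0) / (x1 - x0)  # slope of the linear interpolant
        e0, e1 = np.exp(1j * k * x0), np.exp(1j * k * x1)
        # closed form of the panel integral of (f0 + s*(x - x0)) * e^{ikx}
        total += (f1 * e1 - f0 * e0) / (1j * k) - s * (e1 - e0) / (1j * k) ** 2
    return total
```

The per-panel formula follows from one integration by parts; the error is governed by the linear-interpolation error of f alone, independent of k.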
Abstract:
We present a Galerkin method with piecewise polynomial continuous elements for fully nonlinear elliptic equations. A key tool is the discretization proposed in Lakkis and Pryer, 2011, allowing us to work directly with the strong form of a linear PDE. An added benefit of this discretization method is that a recovered (finite element) Hessian is a byproduct of the solution process. We build on the linear method and ultimately construct two different methodologies for the solution of second order fully nonlinear PDEs. Benchmark numerical results illustrate the convergence properties of the scheme for some test problems as well as the Monge–Ampère equation and the Pucci equation.
Abstract:
This work studies the influence of macroeconomic factors on credit risk for installment auto-loan operations. The study is based on 4,887 credit operations surveyed in the Credit Risk Information System (SCR) held by the Brazilian Central Bank. Using survival analysis applied to interval-censored data, we obtain a model to estimate the hazard function and propose a method for calculating the probability of default over a twelve-month period. Our results indicate strong time dependence of the hazard function, captured by a polynomial approximation, in all estimated models. The model with the best Akaike Information Criterion estimates a positive effect of 0.07% for males on the baseline hazard function, and of 0.011% for an increase of ten basis points in the operation's annual interest rate; for each R$ 1,000.00 added to the installment, the hazard function decreases by 0.28%, and it increases by an estimated 0.0069% for the same amount added to the contracted value of the operation. For the macroeconomic factors, we find statistically significant effects for the unemployment rate (-0.12%), its one-period lag (0.12%), the first difference of the industrial production index (-0.008%), the one-period lag of the inflation rate (-0.13%), and the exchange rate (-0.23%). We do not find statistically significant results for the other tested variables.
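The step from a fitted hazard to a twelve-month default probability can be sketched in discrete time using the standard survival identity (a generic sketch, not the paper's exact estimator; the monthly hazards below are illustrative):

```python
import numpy as np

def pd_horizon(monthly_hazard, months=12):
    # default probability within the horizon from a discrete-time hazard:
    # survival is the product of (1 - h_t) over the months, so the
    # probability of default is 1 minus that product.
    h = np.asarray(monthly_hazard, dtype=float)[:months]
    return 1.0 - np.prod(1.0 - h)

# e.g. a constant 1% monthly hazard gives a 12-month PD of 1 - 0.99**12
pd12 = pd_horizon([0.01] * 12)
```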