939 results for Legendre polynomial
Abstract:
A unit cube in R^k (or a k-cube for short) is defined as the Cartesian product R_1 x R_2 x ... x R_k, where each R_i (for 1 ≤ i ≤ k) is a closed interval of the form [a_i, a_i + 1] on the real line. A k-cube representation of a graph G is a mapping of the vertices of G to k-cubes such that two vertices in G are adjacent if and only if their corresponding k-cubes have a non-empty intersection. The cubicity of G is the minimum k such that G has a k-cube representation. From a geometric embedding point of view, a k-cube representation of G = (V, E) yields an embedding f : V -> R^k such that for any two vertices u and v, ||f(u) - f(v)||_∞ ≤ 1 if and only if u and v are adjacent in G. We first present a randomized algorithm that constructs a cube representation of any graph on n vertices with maximum degree Δ in O(Δ ln n) dimensions. This algorithm is then derandomized to obtain a polynomial time deterministic algorithm that also produces a cube representation of the input graph in the same number of dimensions. The bandwidth ordering of the graph is studied next, and it is shown that our algorithm can be improved to produce a cube representation of the input graph G in O(Δ ln b) dimensions, where b is the bandwidth of G, given a bandwidth ordering of G. Note that b ≤ n and b is much smaller than n for many well-known graph classes. Another upper bound of b + 1 on the cubicity of any graph with bandwidth b is also shown. Together, these results imply that for any graph G with maximum degree Δ and bandwidth b, the cubicity is O(min{b, Δ ln b}). The upper bound of b + 1 is used to derive upper bounds for the cubicity of circular-arc graphs, cocomparability graphs and AT-free graphs in terms of the maximum degree Δ.
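As an illustration of the geometric definition above (an editorial sketch, not part of the abstract): two unit cubes intersect exactly when the L-infinity distance between their lower corners is at most 1, so adjacency in a k-cube representation can be read off directly from the corner points. The graph and corner assignment below are hypothetical.

```python
import numpy as np

def adjacent(corner_u, corner_v):
    """Two axis-parallel unit cubes [a_i, a_i + 1] intersect in every
    coordinate iff the L-infinity distance of their corners is <= 1."""
    return np.max(np.abs(np.asarray(corner_u) - np.asarray(corner_v))) <= 1

# Hypothetical 2-cube representation of a path on three vertices x - y - z.
corners = {"x": (0.0, 0.0), "y": (0.9, 0.4), "z": (1.8, 0.4)}
print(adjacent(corners["x"], corners["y"]))  # True  -> edge (x, y)
print(adjacent(corners["y"], corners["z"]))  # True  -> edge (y, z)
print(adjacent(corners["x"], corners["z"]))  # False -> no edge (x, z)
```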
Abstract:
In this paper we study the problem of designing SVM classifiers when the kernel matrix, K, is affected by uncertainty. Specifically, K is modeled as a positive affine combination of given positive semidefinite kernels, with the coefficients ranging in a norm-bounded uncertainty set. We treat the problem using the Robust Optimization methodology. This reduces the uncertain SVM problem to a deterministic conic quadratic problem which can, in principle, be solved by a polynomial time Interior Point (IP) algorithm. However, for large-scale classification problems, IP methods become intractable and one has to resort to first-order gradient-type methods. The strategy we use here is to reformulate the robust counterpart of the uncertain SVM problem as a saddle point problem and employ a special gradient scheme which works directly on the convex-concave saddle function. The algorithm is a simplified version of a general scheme due to Juditsky and Nemirovski (2011). It achieves an O(1/T^2) reduction of the initial error after T iterations. A comprehensive empirical study on both synthetic data and real-world protein structure data sets shows that the proposed formulations achieve the desired robustness, and that the saddle point based algorithm significantly outperforms the IP method.
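The saddle point viewpoint can be illustrated in the smallest possible setting. The sketch below is not the authors' algorithm nor the Juditsky-Nemirovski scheme; it is a plain Euclidean extragradient iteration on a toy bilinear convex-concave function f(x, y) = x*y, whose saddle point is (0, 0), just to show how a gradient method can operate directly on a saddle function.

```python
# Toy extragradient iteration on f(x, y) = x * y (convex in x, concave in y).
# Illustrative sketch only, not the method described in the abstract.
x, y, eta = 1.0, 1.0, 0.1
for _ in range(2000):
    # Predictor step: gradient descent in x (grad_x f = y), ascent in y (grad_y f = x).
    x_half = x - eta * y
    y_half = y + eta * x
    # Corrector step uses the gradients evaluated at the predicted point.
    x, y = x - eta * y_half, y + eta * x_half
print(x, y)  # both approach 0, the saddle point of f
```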
Abstract:
We consider a complex, additive, white Gaussian noise channel with flat fading. We study its diversity order versus transmission rate for some known power allocation schemes. The capacity region is divided into three rate regions. For one power allocation scheme, the diversity order is exponential throughout the capacity region. For the selective channel inversion (SCI) scheme, the diversity order is exponential in the low and high rate regions but polynomial in the mid rate region. For the fast fading case, we also provide a new upper bound on the block error probability and a power allocation scheme that minimizes it. The diversity order behaviour of this scheme is the same as for SCI, but it provides a lower BER than the other policies.
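As a rough, self-contained illustration of the selective (truncated) channel inversion idea referenced above (the specific thresholds and rate regions studied in the abstract are not reproduced here), a Monte Carlo sketch: power is spent inverting the channel only when the fading gain exceeds a cutoff, with the scaling constant chosen to meet an average power budget. The cutoff and budget values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.exponential(1.0, size=200_000)   # fading power gains (Rayleigh fading)
g0, p_avg = 0.2, 1.0                     # hypothetical cutoff and average power budget

# Selective (truncated) channel inversion: invert the channel only when g >= g0,
# scaled so that the average transmit power equals p_avg.
mask = g >= g0
scale = p_avg / np.mean(np.where(mask, 1.0 / g, 0.0))
power = np.where(mask, scale / g, 0.0)

print(round(float(np.mean(power)), 3))        # ~= 1.0, the power budget
print(float(np.std(power[mask] * g[mask])))   # ~0: received power is flat while transmitting
```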
Abstract:
Savitzky-Golay (S-G) filters are finite impulse response lowpass filters obtained while smoothing data using a local least-squares (LS) polynomial approximation. Savitzky and Golay proved in their landmark paper that local LS fitting of polynomials and their evaluation at the mid-point of the approximation interval is equivalent to filtering with a fixed impulse response. The problem that we address here is how to choose a pointwise minimum mean squared error (MMSE) S-G filter length or order for smoothing, while preserving the temporal structure of a time-varying signal. We solve the bias-variance tradeoff involved in the MMSE optimization using Stein's unbiased risk estimator (SURE). We observe that the 3-dB cutoff frequency of the SURE-optimal S-G filter is higher where the signal varies fast locally, and vice versa, essentially enabling us to suitably trade off the bias and variance, thereby resulting in near-MMSE performance. At low signal-to-noise ratios (SNRs), the performance of the adaptive filter-length algorithm is seen to improve when a regularization term is incorporated in the SURE objective function. We consider the algorithm performance on real-world electrocardiogram (ECG) signals. The results exhibit considerable SNR improvement. Noise performance analysis shows that the proposed algorithms are comparable to, and in some cases better than, some standard denoising techniques available in the literature.
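For context (this is fixed-length S-G smoothing, not the adaptive SURE-based scheme of the abstract), a minimal example of local LS polynomial fitting realized as a fixed FIR filter; the window length and polynomial order here are arbitrary choices.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 3 * t)                  # slowly varying test signal
noisy = clean + 0.2 * rng.standard_normal(t.size)

# Fixed S-G filter: LS fit of a cubic in a 31-sample window, evaluated at the centre.
smoothed = savgol_filter(noisy, window_length=31, polyorder=3)
print(np.mean((noisy - clean) ** 2), np.mean((smoothed - clean) ** 2))  # MSE drops
```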
Abstract:
Boxicity of a graph G(V, E) is the minimum integer k such that G can be represented as the intersection graph of k-dimensional axis-parallel boxes in R^k. Equivalently, it is the minimum number of interval graphs on the vertex set V such that the intersection of their edge sets is E. It is known that boxicity cannot be approximated, even for graph classes like bipartite, co-bipartite and split graphs, within an O(n^(0.5-ε)) factor for any ε > 0 in polynomial time unless NP = ZPP. To date, there is no well-known graph class of unbounded boxicity for which even an n^ε-factor approximation algorithm for computing boxicity is known, for any ε < 1. In this paper, we study the boxicity problem on Circular Arc graphs - intersection graphs of arcs of a circle. We give a (2 + 1/k)-factor polynomial time approximation algorithm for computing the boxicity of any circular arc graph along with a corresponding box representation, where k ≥ 1 is its boxicity. For Normal Circular Arc (NCA) graphs, with an NCA model given, this can be improved to an additive 2-factor approximation algorithm. The time complexity of the algorithms to approximately compute the boxicity is O(mn + n^2) in both these cases, and in O(mn + kn^2) time, which is at most O(n^3), we also obtain the corresponding box representations, where n is the number of vertices of the graph and m is its number of edges. The additive 2-factor algorithm works directly for any Proper Circular Arc graph, since an NCA model for it can be computed in polynomial time.
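As a small illustration of the definition (not of the approximation algorithm in the abstract), a box representation in R^k induces a graph in which two vertices are adjacent iff their boxes overlap in every coordinate. The boxes below are hypothetical and realize a 4-cycle, a graph of boxicity 2.

```python
def boxes_intersect(b1, b2):
    """Axis-parallel boxes, given as lists of (lo, hi) intervals per dimension,
    intersect iff their intervals overlap in every dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

# Hypothetical 2-dimensional box representation of the 4-cycle a-b-c-d-a.
boxes = {
    "a": [(0, 1), (0, 3)],
    "b": [(0, 3), (0, 1)],
    "c": [(2, 3), (0, 3)],
    "d": [(0, 3), (2, 3)],
}
edges = {(u, v) for u in boxes for v in boxes
         if u < v and boxes_intersect(boxes[u], boxes[v])}
print(sorted(edges))  # [('a', 'b'), ('a', 'd'), ('b', 'c'), ('c', 'd')]
```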
Abstract:
We address the classical problem of delta feature computation, and interpret the operation involved in terms of Savitzky-Golay (SG) filtering. Features such as the mel-frequency cepstral coefficients (MFCCs), obtained based on short-time spectra of the speech signal, are commonly used in speech recognition tasks. In order to incorporate the dynamics of speech, auxiliary delta and delta-delta features, which are computed as temporal derivatives of the original features, are used. Typically, the delta features are computed in a smooth fashion using local least-squares (LS) polynomial fitting on each feature vector component trajectory. In the light of the original work of Savitzky and Golay, and a recent article by Schafer in IEEE Signal Processing Magazine, we interpret the dynamic feature vector computation for arbitrary derivative orders as SG filtering with a fixed impulse response. This filtering equivalence brings in significantly lower latency with no loss in accuracy, as validated by results on a TIMIT phoneme recognition task. The SG filters involved in dynamic parameter computation can be viewed as modulation filters, as proposed by Hermansky.
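A minimal sketch of the point made above: the usual delta computation (a local LS linear fit over +/- N frames) is equivalent to convolving each feature trajectory with a fixed antisymmetric impulse response. The window size N = 2 below is just a common default, not a value taken from the paper.

```python
import numpy as np

def deltas_direct(c, N=2):
    """Delta features via the usual local LS linear-fit formula."""
    T = len(c)
    denom = 2 * sum(n * n for n in range(1, N + 1))
    cp = np.pad(c, N, mode="edge")            # replicate edge frames
    return np.array([sum(n * (cp[t + N + n] - cp[t + N - n])
                         for n in range(1, N + 1)) / denom for t in range(T)])

def deltas_filter(c, N=2):
    """The same computation expressed as a fixed FIR (SG-like) impulse response."""
    denom = 2 * sum(n * n for n in range(1, N + 1))
    h = np.arange(N, -N - 1, -1) / denom      # [N, ..., 1, 0, -1, ..., -N] / denom
    cp = np.pad(c, N, mode="edge")
    return np.convolve(cp, h, mode="valid")

c = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])          # one toy feature trajectory
print(np.allclose(deltas_direct(c), deltas_filter(c)))  # True
```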
Abstract:
Recently, Ebrahimi and Fragouli proposed an algorithm to construct scalar network codes using small fields (and vector network codes of small lengths) satisfying multicast constraints in a given single-source, acyclic network. The contribution of this paper is twofold. Primarily, we extend the scalar network coding algorithm of Ebrahimi and Fragouli (henceforth referred to as the EF algorithm) to block network-error correction. Existing construction algorithms for block network-error correcting codes require a rather large field size, which grows with the size of the network and the number of sinks, and can thereby be prohibitive in large networks. We give an algorithm which, starting from a given network-error correcting code, obtains another network code over a small field with the same error correcting capability as the original code. Our secondary contribution is an improvement to the EF algorithm itself. The major step in the EF algorithm is to find a least-degree irreducible polynomial that is coprime to another large-degree polynomial. We suggest an alternative method to compute this coprime polynomial, which is faster than the brute-force method in the work of Ebrahimi and Fragouli.
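To make the coprimality step concrete (this is the brute-force variant, not the faster alternative proposed in the paper), binary polynomials can be represented as Python integers whose bits are coefficients; the sketch below searches low-degree irreducible polynomials over GF(2) and tests coprimality with a given polynomial via a polynomial GCD. The input polynomial is hypothetical.

```python
def pdeg(a):                       # degree of a GF(2)[x] polynomial in bitmask form
    return a.bit_length() - 1

def pmod(a, b):                    # a mod b over GF(2)
    while a and pdeg(a) >= pdeg(b):
        a ^= b << (pdeg(a) - pdeg(b))
    return a

def pgcd(a, b):                    # polynomial GCD over GF(2)
    while b:
        a, b = b, pmod(a, b)
    return a

def irreducible(p):                # trial division by all lower-degree polynomials
    return pdeg(p) >= 1 and all(pmod(p, d) != 0 for d in range(2, 1 << pdeg(p)))

def least_degree_coprime_irreducible(f):
    """Smallest-degree irreducible g with gcd(f, g) = 1, found by brute force."""
    deg = 1
    while True:
        for g in range(1 << deg, 1 << (deg + 1)):   # all polynomials of this degree
            if irreducible(g) and pgcd(f, g) == 1:
                return g
        deg += 1

f = 0b111_0001                     # hypothetical polynomial x^6 + x^5 + x^4 + 1
print(bin(least_degree_coprime_irreducible(f)))
# -> 0b10, i.e. g = x, since f has a nonzero constant term
```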
Abstract:
We report on a comprehensive analysis of the renormalization of noncommutative φ^4 scalar field theories on the Groenewold-Moyal plane. These scalar field theories are twisted Poincaré invariant. Our main results are that these scalar field theories are renormalizable, free of UV/IR mixing, and possess the same fixed points and beta functions for the couplings as their commutative counterparts. We also argue that similar results hold true for any generic noncommutative field theory with polynomial interactions involving only pure matter fields. A secondary aim of this work is to provide a comprehensive review of different approaches to the computation of the noncommutative S-matrix: the noncommutative interaction picture and the noncommutative Lehmann-Symanzik-Zimmermann formalism. DOI: 10.1103/PhysRevD.87.064014
Abstract:
In this paper we present a hardware-software hybrid technique for modular multiplication over large binary fields. The technique applies the Karatsuba-Ofman algorithm for polynomial multiplication together with a novel technique for reduction. The proposed reduction technique is based on the popular repeated multiplication technique and Barrett reduction. We propose a new design of a parallel polynomial multiplier that serves as a hardware accelerator for large field multiplications. We show that the proposed reduction technique, accelerated using the modified polynomial multiplier, achieves significantly higher performance than a purely software technique and other hybrid techniques. We also show that the hybrid accelerated approach to modular field multiplication is significantly faster than the Montgomery-algorithm-based integrated multiplication approach.
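For intuition about the multiplication step only (the reduction technique and hardware design of the paper are not reflected here), a sketch of the Karatsuba-Ofman recursion for carry-less (GF(2)[x]) polynomial multiplication, with operands held as Python integers; operand values are arbitrary.

```python
def clmul_schoolbook(a, b):
    """Carry-less (GF(2)[x]) multiplication by shift-and-XOR."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def clmul_karatsuba(a, b, bits=64):
    """Karatsuba-Ofman split: 3 half-size multiplications instead of 4."""
    if bits <= 8:                                  # small base case
        return clmul_schoolbook(a, b)
    half = bits // 2
    mask = (1 << half) - 1
    a0, a1 = a & mask, a >> half
    b0, b1 = b & mask, b >> half
    lo = clmul_karatsuba(a0, b0, half)
    hi = clmul_karatsuba(a1, b1, half)
    mid = clmul_karatsuba(a0 ^ a1, b0 ^ b1, half) ^ lo ^ hi
    return (hi << bits) ^ (mid << half) ^ lo      # XOR = addition over GF(2)

a, b = 0b1011011101, 0b1100101                          # two arbitrary binary polynomials
print(clmul_karatsuba(a, b) == clmul_schoolbook(a, b))  # True
```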
Abstract:
We propose a new set of input voltage equations (IVEs) for the independent double-gate MOSFET, obtained by solving the governing bipolar Poisson equation (PE) rigorously. The proposed IVEs, which involve Legendre's incomplete elliptic integral of the first kind and Jacobian elliptic functions and are valid from the accumulation to the inversion regime, are shown to be in good agreement with the numerical solution of the same PE for all bias conditions.
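The special functions named above are available in standard libraries; as a quick orientation (unrelated to the specific IVEs derived in the paper), SciPy exposes Legendre's incomplete elliptic integral of the first kind and the Jacobian elliptic functions, which are related as inverses in the following sense.

```python
import numpy as np
from scipy.special import ellipkinc, ellipj

phi, m = 0.7, 0.4                   # amplitude and parameter (arbitrary values)
u = ellipkinc(phi, m)               # F(phi | m), incomplete elliptic integral of the 1st kind
sn, cn, dn, ph = ellipj(u, m)       # Jacobian elliptic functions sn, cn, dn and amplitude
print(np.isclose(ph, phi))          # True: am(F(phi | m), m) == phi
print(np.isclose(sn, np.sin(phi)))  # True: sn(u, m) == sin(am(u, m))
```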
Abstract:
In this paper we present a segmentation algorithm to extract foreground object motion in a moving-camera scenario without any preprocessing steps such as tracking selected features, video alignment, or foreground segmentation. Viewing it as a curve-fitting problem on advected particle trajectories, we use RANSAC to find the polynomial that best fits the camera motion and to identify all trajectories that correspond to the camera motion. The remaining trajectories are those due to the foreground motion. Using the superposition principle, we subtract the camera-induced motion from the foreground trajectories and obtain the true object-induced trajectories. We show that our method performs on par with the state-of-the-art technique, with an execution-time speed-up of 10x-40x. We compare the results on real-world datasets such as UCF-ARG, UCF Sports and Liris-HARL. We further show that the method can also be used to perform video alignment.
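To make the curve-fitting step concrete (synthetic data, not the particle-advection pipeline or datasets of the paper), a minimal RANSAC loop that fits a low-order polynomial to samples dominated by one motion and flags the remaining points as outliers.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
camera = 2.0 * t**2 - 1.0 * t                       # dominant (camera-like) motion
y = camera + 0.01 * rng.standard_normal(t.size)
outliers = rng.choice(t.size, 30, replace=False)    # a few foreground-like samples
y[outliers] += rng.uniform(0.5, 1.0, 30)

best_inliers, degree, thresh = None, 2, 0.05
for _ in range(100):                                 # RANSAC iterations
    sample = rng.choice(t.size, degree + 1, replace=False)
    coeffs = np.polyfit(t[sample], y[sample], degree)
    residual = np.abs(np.polyval(coeffs, t) - y)
    inliers = residual < thresh
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers = inliers

print(best_inliers.sum(), "samples follow the dominant motion;",
      (~best_inliers).sum(), "are outliers (foreground-like)")
```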
Abstract:
The governing differential equation of the rotating beam reduces to that of a stiff string when the centrifugal force is assumed constant. The solution of the static homogeneous part of this equation is enhanced with a polynomial term and used in Rayleigh's method. Numerical experiments show better agreement with converged finite element solutions than polynomials alone. Using this as an estimate for the first mode shape, higher mode shape approximations are obtained using Gram-Schmidt orthogonalization. Estimates of the first five natural frequencies of uniform and tapered beams are obtained accurately using a very low order Rayleigh-Ritz approximation.
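As a small illustration of the orthogonalization step mentioned above (generic polynomial trial functions on [0, 1], not the stiff-string solution or the beam cases of the paper), Gram-Schmidt can be applied numerically under an integral inner product to build mutually orthogonal admissible functions.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)

def inner(f, g):
    """Inner product <f, g> = integral of f * g over [0, 1] (trapezoidal rule)."""
    return np.trapz(f * g, x)

# Hypothetical admissible trial functions vanishing at the clamped end x = 0.
trials = [x**2, x**3, x**4, x**5]

basis = []
for f in trials:                       # classical Gram-Schmidt
    for q in basis:
        f = f - inner(f, q) * q
    basis.append(f / np.sqrt(inner(f, f)))

gram = np.array([[inner(p, q) for q in basis] for p in basis])
print(np.allclose(gram, np.eye(len(basis)), atol=1e-5))   # orthonormal basis
```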
Abstract:
Transient signals such as plosives in speech or castanets in audio do not have a specific modulation or periodic structure in the time domain. However, in the spectral domain they exhibit a prominent modulation structure, which is a direct consequence of their narrow time localization. Based on this observation, a spectral-domain AM-FM model for transients is proposed. The spectral AM-FM model is built starting from real spectral zero-crossings. The AM and FM correspond to the spectral envelope (SE) and the group delay (GD), respectively. Taking into account the modulation structure and spectral continuity, a local polynomial regression technique is proposed to estimate the GD function from the real spectral zeros. The SE is estimated based on the phase function computed from the estimated GD. Since the GD estimation is parametric, the degree of smoothness can be controlled directly. Simulation results based on synthetic transient signals generated using a beta density function are presented to analyze the noise robustness of the SEGD model. Three specific applications are considered: (1) SEGD-based modeling of castanet sounds; (2) the suitability of the model for transient compression; and (3) determining glottal closure instants in speech using a short-time SEGD model of the linear prediction residue.
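As a generic illustration of local polynomial regression (synthetic data; not the SEGD estimation procedure of the paper), each point's estimate comes from an LS polynomial fit over a sliding window of its neighbours, and the window size directly controls the degree of smoothness.

```python
import numpy as np

def local_poly_regression(x, y, half_width=10, degree=2):
    """Estimate a smooth curve via LS polynomial fits over sliding windows."""
    est = np.empty_like(y)
    for i in range(len(x)):
        lo, hi = max(0, i - half_width), min(len(x), i + half_width + 1)
        coeffs = np.polyfit(x[lo:hi], y[lo:hi], degree)
        est[i] = np.polyval(coeffs, x[i])
    return est

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 300)
truth = np.cos(4 * np.pi * x)
noisy = truth + 0.3 * rng.standard_normal(x.size)
smooth = local_poly_regression(x, noisy)
print(np.mean((noisy - truth) ** 2) > np.mean((smooth - truth) ** 2))  # True
```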
Abstract:
Let I be an m-primary ideal of a Noetherian local ring (R, m) of positive dimension. The coefficient e_1(I) of the Hilbert polynomial of an I-admissible filtration I is called the Chern number of I. A formula for the Chern number has been derived involving the Euler characteristic of subcomplexes of a Koszul complex. Specific formulas for the Chern number have been given in local rings of dimension at most two. These have been used to provide new and unified proofs of several results about e_1(I).
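For orientation (standard background, not a result of the paper), the Hilbert polynomial referred to above is usually written with binomial coefficients, and the Chern number is its second coefficient:

```latex
% For an I-admissible filtration \mathcal{I} = \{I_n\} of a d-dimensional
% Noetherian local ring (R, \mathfrak{m}), for all sufficiently large n,
\ell\!\left(R/I_{n+1}\right)
  = e_0(\mathcal{I})\binom{n+d}{d}
  - e_1(\mathcal{I})\binom{n+d-1}{d-1}
  + \cdots + (-1)^d e_d(\mathcal{I}),
% and e_1(\mathcal{I}) is the Chern number of the filtration.
```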
Abstract:
Let M be the completion of the polynomial ring C[z_1, ..., z_m] with respect to some inner product, and for any ideal I ⊆ C[z_1, ..., z_m], let [I] be the closure of I in M. For a homogeneous ideal I, the joint kernel of the submodule [I] ⊆ M is shown, after imposing some mild conditions on M, to be the linear span of the set of vectors {p_i(∂/∂w̄_1, ..., ∂/∂w̄_m) K_[I](·, w)|_{w=0}, 1 ≤ i ≤ t}, where K_[I] is the reproducing kernel for the submodule [I] and p_1, ..., p_t is some minimal "canonical set of generators" for the ideal I. The proof includes an algorithm for constructing this canonical set of generators, which is determined uniquely modulo linear relations, for homogeneous ideals. A short proof of the "Rigidity Theorem" using the sheaf model for Hilbert modules over polynomial rings is given. We describe, via the monoidal transformation, the construction of a Hermitian holomorphic line bundle for a large class of Hilbert modules of the form [I]. We show that the curvature, or even its restriction to the exceptional set, of this line bundle is an invariant for the unitary equivalence class of [I]. Several examples are given to illustrate the explicit computation of these invariants.