204 results for Polynomial Pencil
Abstract:
We propose a novel technique for robust voiced/unvoiced segment detection in noisy speech, based on local polynomial regression. The local polynomial model is well suited for voiced segments in speech, whereas the unvoiced segments are noise-like and do not exhibit any smooth structure. This property of smoothness is used to devise a new metric, called the variance ratio metric, which, after thresholding, indicates the voiced/unvoiced boundaries with 75% accuracy at 0 dB global signal-to-noise ratio (SNR). A novelty of our algorithm is that it processes the signal continuously, sample-by-sample rather than frame-by-frame. Simulation results on the TIMIT speech database (downsampled to 8 kHz) for various SNRs are presented to illustrate the performance of the new algorithm. Results indicate that the algorithm is robust even at high noise levels.
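As a rough illustration of the variance ratio idea in Python (a minimal sketch, not the paper's estimator: the window length, polynomial order, and threshold below are assumptions), one can fit a local polynomial around each sample and compare the residual variance of the fit with the raw local variance; smooth, voiced-like segments give a small ratio, while noise-like unvoiced segments give a ratio near one.

```python
import numpy as np

def variance_ratio(x, win=81, order=3):
    """Sample-by-sample variance-ratio metric (illustrative sketch).

    For each sample, fit a degree-`order` polynomial over a centered
    window and return (residual variance) / (raw variance). Smooth
    segments yield small values; noise-like segments yield values
    close to 1.
    """
    half = win // 2
    t = np.arange(win)
    ratio = np.ones(len(x))
    for i in range(half, len(x) - half):
        seg = x[i - half:i + half + 1]
        coeffs = np.polyfit(t, seg, order)       # local polynomial fit
        resid = seg - np.polyval(coeffs, t)      # fit residual
        raw_var = np.var(seg)
        if raw_var > 0:
            ratio[i] = np.var(resid) / raw_var
    return ratio

# Hypothetical usage: mark samples as voiced where the ratio is small.
# voiced = variance_ratio(signal) < 0.5   # threshold is an assumption
```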
Abstract:
We study the problem of matching applicants to jobs under one-sided preferences: each applicant ranks a non-empty subset of jobs in order of preference, possibly involving ties. A matching M is said to be more popular than T if the applicants that prefer M to T outnumber those that prefer T to M. A matching is said to be popular if there is no matching more popular than it. Equivalently, a matching M is popular if $\phi(M,T) \geq \phi(T,M)$ for all matchings T, where $\phi(X,Y)$ is the number of applicants that prefer X to Y. Previously studied solution concepts based on the popularity criterion are either not guaranteed to exist for every instance (e.g., popular matchings) or are NP-hard to compute (e.g., least unpopular matchings). This paper addresses this issue by considering mixed matchings. A mixed matching is simply a probability distribution over matchings in the input graph. The function $\phi$ that compares two matchings generalizes in a natural manner to mixed matchings by taking expectations. A mixed matching P is popular if $\phi(P,Q) \geq \phi(Q,P)$ for all mixed matchings Q. We show that popular mixed matchings always exist, and we design polynomial time algorithms for finding them. Then we study their efficiency and give tight bounds on the price of anarchy and price of stability of the popular matching problem.
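The comparison function $\phi$ is straightforward to compute; below is a minimal Python sketch, assuming a hypothetical encoding of matchings as applicant-to-job dictionaries and of preferences as rank tables (lower rank = better, equal ranks encode ties, unmatched is worst).

```python
def phi(M, T, prefs):
    """Number of applicants who strictly prefer matching M to matching T.

    `prefs[a][j]` is applicant a's rank for job j (lower = better,
    equal ranks encode ties); being unmatched is treated as worst.
    This encoding is an illustrative assumption, not the paper's.
    """
    WORST = float('inf')
    count = 0
    for a, ranks in prefs.items():
        rank_m = ranks.get(M.get(a), WORST)
        rank_t = ranks.get(T.get(a), WORST)
        if rank_m < rank_t:
            count += 1
    return count

# M is more popular than T when phi(M, T, prefs) > phi(T, M, prefs);
# a popular matching M satisfies phi(M, T, prefs) >= phi(T, M, prefs)
# for every matching T.
```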
Abstract:
An axis-parallel k-dimensional box is a Cartesian product $R_1 \times R_2 \times \cdots \times R_k$ where each $R_i$ (for $1 \leq i \leq k$) is a closed interval of the form $[a_i, b_i]$ on the real line. For a graph G, its boxicity box(G) is the minimum dimension k such that G is representable as the intersection graph of (axis-parallel) boxes in k-dimensional space. The concept of boxicity finds applications in various areas such as ecology and operations research. A number of NP-hard problems are either polynomial time solvable or have much better approximation ratios on low boxicity graphs. For example, the max-clique problem is polynomial time solvable on bounded boxicity graphs, and the maximum independent set problem for boxicity d graphs, given a box representation, has a $\lfloor 1 + \frac{1}{c}\log n \rfloor^{d-1}$ approximation ratio for any constant $c \geq 1$ when $d \geq 2$. In most such cases, the first step is computing a low dimensional box representation of the given graph; yet deciding whether the boxicity of a graph is at most 2 is itself NP-hard. We give an efficient randomized algorithm to construct a box representation of any graph G on n vertices in $\lceil (\Delta + 2) \ln n \rceil$ dimensions, where $\Delta$ is the maximum degree of G. This algorithm implies that $box(G) \leq \lceil (\Delta + 2) \ln n \rceil$ for any graph G. Our bound is tight up to a factor of ln n. We also show that our randomized algorithm can be derandomized to get a polynomial time deterministic algorithm. Though our general upper bound is in terms of the maximum degree $\Delta$, we show that for almost all graphs on n vertices, the boxicity is $O(d_{av} \ln n)$, where $d_{av}$ is the average degree.
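The definitions above can be made concrete with a short definition-level check in Python (a sketch only, not the paper's randomized construction): given candidate boxes, one per vertex, verify that pairwise box intersection reproduces the adjacency of G.

```python
from itertools import combinations

def is_box_representation(boxes, edges, n):
    """Check that axis-parallel boxes realize graph G as an intersection graph.

    boxes[v] is a list of closed intervals (a_i, b_i), one per dimension;
    two boxes intersect iff their intervals overlap in every dimension.
    """
    E = {frozenset(e) for e in edges}
    for u, v in combinations(range(n), 2):
        meets = all(a1 <= b2 and a2 <= b1
                    for (a1, b1), (a2, b2) in zip(boxes[u], boxes[v]))
        if meets != (frozenset((u, v)) in E):
            return False
    return True
```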
Abstract:
CTRU is a public key cryptosystem proposed by Gaborit, Ohler and Solé. It is an analogue of NTRU, with the ring of integers replaced by the ring of polynomials $\mathbb{F}_2[T]$. It attracted attention because the attacks based on the LLL algorithm or the Chinese Remainder Theorem, which are most common against NTRU, do not apply to it. In this paper we present a polynomial-time algorithm that breaks CTRU for all recommended parameter choices that were derived to make CTRU secure against the Popov normal form attack. The paper shows that if the constraints for perfect decryption are satisfied, then either the plaintext or the private key can be recovered by a polynomial-time linear algebra attack.
Abstract:
A unit cube in k dimensions (k-cube) is defined as the Cartesian product $R_1 \times R_2 \times \cdots \times R_k$ where each $R_i$ (for $1 \leq i \leq k$) is a closed interval of the form $[a_i, a_i + 1]$ on the real line. A graph G on n nodes is said to be representable as the intersection of k-cubes (cube representation in k dimensions) if each vertex of G can be mapped to a k-cube such that two vertices are adjacent in G if and only if their corresponding k-cubes have a non-empty intersection. The cubicity of G, denoted cub(G), is the minimum k for which G can be represented as the intersection of k-cubes. An interesting aspect of cubicity is that many problems known to be NP-complete for general graphs have polynomial time deterministic algorithms or good approximation ratios on graphs of low cubicity. In most of these algorithms, computing a low dimensional cube representation of the given graph is the first step. We give an $O(bw \cdot n)$ algorithm to compute the cube representation of a general graph G in bw + 1 dimensions, given a bandwidth ordering of the vertices of G, where bw is the bandwidth of G. As a consequence, we get $O(\Delta)$ upper bounds on the cubicity of many well-known graph classes, such as AT-free graphs, circular-arc graphs and cocomparability graphs, which have $O(\Delta)$ bandwidth. Thus we have: 1. $cub(G) \leq 3\Delta - 1$ if G is an AT-free graph; 2. $cub(G) \leq 2\Delta + 1$ if G is a circular-arc graph; 3. $cub(G) \leq 2\Delta$ if G is a cocomparability graph. Also, for these graph classes there are constant factor approximation algorithms for bandwidth computation that generate orderings of vertices with $O(\Delta)$ width. We can thus generate the cube representation of such graphs in $O(\Delta)$ dimensions in polynomial time.
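As a definition-level sketch (not the paper's O(bw · n) construction, whose details are beyond the abstract), the width of a given vertex ordering, whose minimum over all orderings is the bandwidth bw, can be computed as follows.

```python
def ordering_width(order, edges):
    """Width of a vertex ordering: max |pos(u) - pos(v)| over edges (u, v).

    The bandwidth bw of G is the minimum width over all orderings;
    the abstract's algorithm consumes such an ordering and outputs
    a cube representation in bw + 1 dimensions.
    """
    pos = {v: i for i, v in enumerate(order)}
    return max(abs(pos[u] - pos[v]) for u, v in edges)

# Example: a path 0-1-2-3 in its natural order has width 1, so the
# abstract guarantees a cube representation in 2 dimensions.
# ordering_width([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)])  # -> 1
```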
Abstract:
In this paper, we propose a new security metric for measuring the resilience of a symmetric key distribution scheme in a wireless sensor network. A polynomial-based scheme and a novel complete-connectivity scheme are proposed, and an analytical comparison between the two, in terms of security and connectivity, is presented. Motivated by the schemes, we derive general expressions for security and connectivity, from which a number of conclusions are drawn.
Abstract:
A common trick for designing faster quantum adiabatic algorithms is to apply the adiabaticity condition locally at every instant. However, it is often difficult to determine the instantaneous gap between the lowest two eigenvalues, which is an essential ingredient in the adiabaticity condition. In this paper we present a simple linear algebraic technique for obtaining a lower bound on the instantaneous gap even in such a situation. As an illustration, we investigate the adiabatic unordered search of van Dam et al. [17] and Roland and Cerf [15] when the non-zero entries of the diagonal final Hamiltonian are perturbed by a polynomial amount (in log N, where N is the length of the unordered list). We use our technique to derive a bound on the running time of a local adiabatic schedule in terms of the minimum gap between the lowest two eigenvalues.
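For context, a minimal sketch of the local-schedule computation the abstract builds on: applying the adiabaticity condition locally gives dt/ds proportional to $1/g(s)^2$, and the unperturbed gap for adiabatic unordered search is the standard Roland-Cerf expression. The paper's bound substitutes a derived lower bound on the gap of the perturbed Hamiltonian; everything below is the standard unperturbed case, not the paper's technique.

```python
import numpy as np

def local_schedule_time(gap, eps=0.1, steps=100_000):
    """Running time of a local adiabatic schedule: applying the
    adiabaticity condition locally gives dt/ds ~ 1 / gap(s)^2, so
    T = (1/eps) * integral_0^1 ds / gap(s)^2 (midpoint rule).
    `gap` may be the true gap or a lower bound on it."""
    s = (np.arange(steps) + 0.5) / steps
    return float(np.sum(1.0 / gap(s) ** 2) / steps / eps)

# Unperturbed gap for adiabatic unordered search over a list of
# length N (Roland-Cerf): g(s) = sqrt(1 - 4 (1 - 1/N) s (1 - s)),
# which gives the familiar T = O(sqrt(N)) running time.
N = 1 << 20
g = lambda s: np.sqrt(1.0 - 4.0 * (1.0 - 1.0 / N) * s * (1.0 - s))
T = local_schedule_time(g)
```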
Abstract:
In this paper, we exploit the idea of decomposition to match buyers and sellers in an electronic exchange for trading large volumes of homogeneous goods, where the buyers and sellers specify marginal-decreasing piecewise constant price curves to capture volume discounts. Such exchanges are relevant for automated trading in many e-business applications. The problem of determining winners and Vickrey prices in such exchanges is known to have a worst-case complexity equal to that of as many as (1 + m + n) NP-hard problems, where m is the number of buyers and n is the number of sellers. Our method decomposes the overall exchange problem into two separate and simpler problems: 1) a forward auction and 2) a reverse auction, each of which turns out to be a generalized knapsack problem. In the proposed approach, we first determine the quantity of units to be traded between the sellers and the buyers using fast heuristics developed by us. Next, we solve the forward auction and the reverse auction using fully polynomial time approximation schemes available in the literature. The proposed approach has worst-case polynomial time complexity, and our experimentation shows that it produces good quality solutions to the problem. Note to Practitioners: In recent times, electronic marketplaces have provided an efficient way for businesses and consumers to trade goods and services. The use of innovative mechanisms and algorithms has made it possible to improve the efficiency of electronic marketplaces by enabling optimization of revenues for the marketplace and of utilities for the buyers and sellers. In this paper, we look at single-item, multiunit electronic exchanges. These are electronic marketplaces where buyers submit bids and sellers submit asks for multiple units of a single item. We allow buyers and sellers to specify volume discounts using suitable functions. Such exchanges are relevant for high-volume business-to-business trading of standard products, such as silicon wafers, very large-scale integrated chips, desktops, telecommunications equipment, commoditized goods, etc. The problem of determining winners and prices in such exchanges is known to involve solving many NP-hard problems. Our paper exploits the familiar idea of decomposition, uses certain algorithms from the literature, and develops two fast heuristics to solve the problem in a near optimal way in worst-case polynomial time.
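A minimal sketch of the bid language the abstract assumes: a marginal-decreasing piecewise constant price curve assigns a per-unit price to successive quantity intervals, so the cost of q units accumulates interval by interval (the encoding below is an illustrative assumption, not the paper's data structure).

```python
def curve_cost(q, curve):
    """Total cost of q units under a marginal-decreasing piecewise
    constant price curve.

    `curve` is a list of (quantity_limit, unit_price) segments with
    non-increasing unit prices, e.g. [(10, 5.0), (50, 4.0), (100, 3.5)]
    prices the first 10 units at 5.0 each, units 11..50 at 4.0, and
    units 51..100 at 3.5 (an illustrative encoding).
    """
    cost, prev = 0.0, 0
    for limit, price in curve:
        take = min(q, limit) - prev
        if take <= 0:
            break
        cost += take * price
        prev = min(q, limit)
    return cost

# curve_cost(60, [(10, 5.0), (50, 4.0), (100, 3.5)])
# -> 10*5.0 + 40*4.0 + 10*3.5 = 245.0
```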
Abstract:
In this paper we propose a novel family of kernels for multivariate time-series classification problems. Each time series is approximated by a linear combination of piecewise polynomial functions in a Reproducing Kernel Hilbert Space through a novel kernel interpolation technique. Using the associated kernel function, a large-margin classification formulation is proposed that can discriminate between two classes. The formulation leads to kernels between two multivariate time series that can be computed efficiently. The kernels have been successfully applied to writer-independent handwritten character recognition.
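A deliberately simplified stand-in for the paper's kernel, for illustration only: the piecewise polynomial RKHS interpolation is replaced here by a single low-degree polynomial fit per (univariate) series, and the kernel is the L2 inner product of the two fitted functions on [0, 1], which is a valid positive semidefinite kernel. The degree and grid size are assumptions.

```python
import numpy as np

def poly_series_kernel(x, y, degree=3, grid=256):
    """Kernel between two univariate time series via polynomial
    approximation: fit each series with a low-degree polynomial on
    [0, 1] and return the L2 inner product of the two fits.

    A simplified stand-in for the paper's piecewise polynomial
    RKHS interpolation; degree and grid size are assumptions.
    """
    tx = np.linspace(0.0, 1.0, len(x))
    ty = np.linspace(0.0, 1.0, len(y))
    px = np.polyfit(tx, x, degree)           # polynomial approximation of x
    py = np.polyfit(ty, y, degree)           # polynomial approximation of y
    t = np.linspace(0.0, 1.0, grid)
    # Mean over [0, 1] approximates the L2 inner product of the fits.
    return float(np.mean(np.polyval(px, t) * np.polyval(py, t)))
```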
Abstract:
Recent advances in structural integrity evaluation have led to the development of PZT wafer sensors (PWAS), which can be embedded or surface mounted for both acoustic emission (AE) and ultrasonic (UT) modes, forming an integrated approach for Structural Health Monitoring (SHM) of aerospace structures. For the fabrication of the PWAS wafers, a soft PZT formulation (SP-5H grade, containing dopants such as Ba, Sm, Ca, Zn, Y and Hf) was used. The piezoelectric charge constant $d_{33}$ was measured by a $d_{33}$ meter. As a first step towards the final objective of developing health monitoring methods with embedded PWAS, experiments were conducted on aluminum and composite plates of finite dimensions using PWAS sensors. The AE source was simulated by breaking 0.5 mm pencil lead on the surface of a thin plate. Experiments were also conducted with surface mounted PZT films and conventional AE sensors in order to establish the sensitivity of PWAS. A comparison of theoretical and experimental results shows good agreement.
Abstract:
Let G = (V,E) be a simple, finite, undirected graph. For $S \subseteq V$, let $\delta(S,G) = \{ (u,v) \in E : u \in S \mbox{ and } v \in V-S \}$ and $\phi(S,G) = \{ v \in V-S : \exists u \in S \mbox{ such that } (u,v) \in E \}$ be the edge and vertex boundary of S, respectively. Given an integer i, $1 \leq i \leq |V|$, the edge and vertex isoperimetric values at i are defined as $b_e(i,G) = \min_{S \subseteq V, |S|=i} |\delta(S,G)|$ and $b_v(i,G) = \min_{S \subseteq V, |S|=i} |\phi(S,G)|$, respectively. The edge (vertex) isoperimetric problem is to determine the value of $b_e(i,G)$ ($b_v(i,G)$) for each i, $1 \leq i \leq |V|$. If we further require that the set S induce a connected subgraph of G, then the corresponding variation is known as the connected isoperimetric problem, and the connected edge (vertex) isoperimetric values are defined in a corresponding way. It turns out that the connected edge isoperimetric and the connected vertex isoperimetric values are equal at each i, $1 \leq i \leq |V|$, if G is a tree. We therefore use the notation $b_c(i,T)$ to denote the connected edge (vertex) isoperimetric value of a tree T at i. Hofstadter introduced the interesting concept of meta-Fibonacci sequences in his famous book "Gödel, Escher, Bach: An Eternal Golden Braid". The sequence he introduced is known as the Hofstadter sequence, and most of the problems he raised regarding it are still open. Since then, mathematicians have studied many other closely related meta-Fibonacci sequences, such as the Tanny, Conway and Conolly sequences. Let $T_2$ be the infinite complete binary tree. In this paper we relate the connected isoperimetric problem on $T_2$ to the Tanny sequence, which is defined by the recurrence relation $a(i) = a(i-1-a(i-1)) + a(i-2-a(i-2))$, $a(0) = a(1) = a(2) = 1$. In particular, we show that $b_c(i, T_2) = i + 2 - 2a(i)$ for each $i \geq 1$. We also propose efficient polynomial time algorithms to find the vertex isoperimetric values at each i for graphs of bounded pathwidth and bounded treewidth.
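The abstract's recurrence is directly computable; a short Python sketch tabulating the Tanny sequence and the resulting connected isoperimetric values $b_c(i, T_2)$:

```python
def tanny(n):
    """Tanny sequence: a(i) = a(i-1-a(i-1)) + a(i-2-a(i-2)),
    with a(0) = a(1) = a(2) = 1. Returns [a(0), ..., a(n)]."""
    a = [1, 1, 1]
    for i in range(3, n + 1):
        a.append(a[i - 1 - a[i - 1]] + a[i - 2 - a[i - 2]])
    return a

def b_c(i, a):
    """Connected isoperimetric value of the infinite complete binary
    tree at i, per the abstract: b_c(i, T_2) = i + 2 - 2 a(i)."""
    return i + 2 - 2 * a[i]

a = tanny(20)                        # a = [1, 1, 1, 2, 2, 2, 3, 4, ...]
values = [b_c(i, a) for i in range(1, 21)]
```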
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with the intruder and with clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in a distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm, and the average-case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm.
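A minimal sketch of the second approach (the fusion rule below is a hypothetical majority vote, since the abstract fixes only that some fusion rule is applied): each node applies a threshold test to its reading and broadcasts the resulting bit to a local fusion center.

```python
def node_bit(reading, threshold):
    """Two-level quantization at a sensor node: a threshold test on
    the local reading, broadcast as a single bit (the optimality of
    such a test is what the abstract proves, under assumptions)."""
    return 1 if reading > threshold else 0

def fuse(bits):
    """Local fusion center: combine the one-bit node decisions.
    A majority vote is used here purely as an illustrative choice;
    the abstract leaves the fusion rule as a design parameter."""
    bits = list(bits)
    return 2 * sum(bits) > len(bits)

# readings = [0.4, 1.3, 0.9, 1.7]   # hypothetical noisy readings
# intruder = fuse(node_bit(r, 1.0) for r in readings)
```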
Abstract:
The problem of constructing space-time (ST) block codes over a fixed, desired signal constellation is considered. In this situation, there is a tradeoff between the transmission rate as measured in constellation symbols per channel use and the transmit diversity gain achieved by the code. The transmit diversity is a measure of the rate of polynomial decay of the pairwise error probability of the code with increasing signal-to-noise ratio (SNR). In the setting of a quasi-static channel model, let $n_t$ denote the number of transmit antennas and T the block interval. For any $n_t \leq T$, a unified construction of $(n_t \times T)$ ST codes is provided here, for a class of signal constellations that includes the familiar pulse-amplitude (PAM), quadrature-amplitude (QAM), and $2^K$-ary phase-shift-keying (PSK) modulations as special cases. The construction is optimal as measured by the rate-diversity tradeoff and can achieve any given integer point on the rate-diversity tradeoff curve. An estimate of the coding gain realized is given. Other results presented here include i) an extension of the optimal unified construction to the multiple fading block case, ii) a version of the optimal unified construction in which the underlying binary block codes are replaced by trellis codes, iii) a linear dispersion form for the underlying binary block codes, iv) a Gray-mapped version of the unified construction, and v) a generalization of the construction to the S-ary case, corresponding to constellations of size $S^K$. Items ii) and iii) are aimed at simplifying the decoding of this class of ST codes.
Abstract:
With the increased utilization of advanced composites in strategic industries, the concept of Structural Health Monitoring (SHM), with its inherent advantages, is gaining ground over conventional methods of NDE and NDI. The most attractive feature of this concept is on-line evaluation using embedded sensors. Consequently, the development of methodologies, with the identification of appropriate sensors such as PVDF films, becomes key to exploiting the new concept; of the methods used for on-line evaluation, acoustic emission has been the most effective. Thus, the acoustic emission (AE) generated during static tensile loading of glass fiber reinforced plastic composites was monitored using a polyvinylidene fluoride (PVDF) film sensor. The frequency response of the film sensor was obtained with pencil lead breakage tests in order to choose the appropriate band of operation. The specimens used in the experiments were chosen to characterize the differences in the operation of the failure mechanisms through AE parametric analysis. The results of the investigations, characterized using AE parameters, indicate that the PVDF film sensor is effective as an AE sensor for on-line structural health monitoring.
Abstract:
The purpose of this article is to report the experience of designing and testing orifice plate-based flow measuring systems for the evaluation of air leakages in components of air conditioning systems. Two of the flow measuring stations were designed with beta values of 0.405 and 0.418. The third was a dual-path unit with orifice plates of beta values 0.613 and 0.525. The flow rates covered by all four ranged from 4 to 94 l/s, and the Reynolds numbers from 5600 to 76,000. The coefficients of discharge were evaluated and compared with the Stolz equation. Measured $C_d$ values are generally higher than those obtained from the equation, the deviations being larger in the low Reynolds number region. Further, it is observed that a second-degree polynomial is inadequate to relate the pressure drop and flow rate. The lower Reynolds number limits set by standards appear to be somewhat conservative.
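For reference, a minimal sketch of the standard incompressible orifice-plate relation on which extracting $C_d$ from a measured flow rate and pressure drop rests (the textbook form; the paper's exact data reduction is not given in the abstract, and the example reading below is hypothetical):

```python
import math

def discharge_coefficient(q, dp, d, beta, rho=1.2):
    """Discharge coefficient from measured volume flow and pressure drop,
    via the standard incompressible orifice relation
        q = Cd * (pi/4) * d**2 * sqrt(2 * dp / rho) / sqrt(1 - beta**4)
    with q in m^3/s, dp in Pa, orifice bore d in m, rho in kg/m^3 (air).
    """
    area = math.pi * d ** 2 / 4.0
    return q * math.sqrt(1.0 - beta ** 4) / (area * math.sqrt(2.0 * dp / rho))

# Hypothetical reading: 20 l/s through a 50 mm bore at 250 Pa:
# discharge_coefficient(0.020, 250.0, 0.050, 0.405)
```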