972 results for "Strictly hyperbolic polynomial"


Relevance: 10.00%

Publisher:

Abstract:

The Levi geometry at weakly pseudoconvex boundary points of domains in $\mathbb{C}^n$, $n \geq 3$, is sufficiently complicated that there are no universal model domains with which to compare a general domain. Good models may be constructed by bumping outward a pseudoconvex, finite-type $\Omega \subset \mathbb{C}^3$ in such a way that: (i) pseudoconvexity is preserved, (ii) the (locally) larger domain has a simpler defining function, and (iii) the lowest possible orders of contact of the bumped domain with $\partial\Omega$, at the site of the bumping, are realized. When $\Omega \subset \mathbb{C}^n$, $n \geq 3$, it is, in general, hard to meet the last two requirements. Such well-controlled bumping is possible when $\Omega$ is h-extendible/semiregular. We examine a family of domains in $\mathbb{C}^3$ that is strictly larger than the family of h-extendible/semiregular domains and construct explicit models for these domains by bumping.

Relevance: 10.00%

Publisher:

Abstract:

The dilaton action in 3 + 1 dimensions plays a crucial role in the proof of the a-theorem. This action arises from the Wess-Zumino consistency conditions and relies crucially on the existence of the trace anomaly. Since there are no anomalies in odd dimensions, it is interesting to ask how such an action could arise otherwise. Motivated by this, we use the AdS/CFT correspondence to examine both even- and odd-dimensional conformal field theories. We find that in even dimensions, by promoting the cutoff to a field, one can obtain an action for this field which coincides with the Wess-Zumino action in flat space. In three dimensions, we observe that by computing an exact Hamilton-Jacobi counterterm, one obtains a non-polynomial action which is invariant under global Weyl rescalings. We comment on how this finding is tied up with the F-theorem conjectures.
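For orientation, the flat-space Wess-Zumino dilaton action referred to above takes, schematically (in the conventions of Komargodski and Schwimmer, up to overall normalization and field redefinitions),

$$ S_{\mathrm{WZ}}[\tau] \;\propto\; \Delta a \int d^4x \left[\, 2\,(\partial\tau)^2\,\Box\tau \;-\; (\partial\tau)^4 \,\right], $$

where $\tau$ is the dilaton and $\Delta a = a_{\mathrm{UV}} - a_{\mathrm{IR}}$ is the change in the a-anomaly coefficient along the flow.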

Relevance: 10.00%

Publisher:

Abstract:

We propose a distribution-free approach to the study of random geometric graphs. The distribution of vertices follows a Poisson point process with intensity function $n f(\cdot)$, where $n \in \mathbb{N}$ and $f$ is a probability density function on $\mathbb{R}^d$. A vertex located at $x$ connects via directed edges to other vertices that are within a cut-off distance $r_n(x)$. We prove strong law results for (i) the critical cut-off function such that, almost surely, the graph contains no node with out-degree zero for sufficiently large $n$, and (ii) the maximum and minimum vertex degrees. We also characterize the cut-off function for which the number of nodes with out-degree zero converges in distribution to a Poisson random variable. We illustrate this result for a class of densities with compact support that have at most polynomial rates of decay to zero. Finally, we state a sufficient condition for an enhanced version of the above graph to be almost surely connected eventually.
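A minimal simulation sketch of this graph model (not from the paper): it assumes a uniform density $f$ on the unit square, a constant cutoff in place of the location-dependent $r_n(x)$, and probes the standard $(\log n / n)^{1/d}$ critical scale; all function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rgg_outdegrees(n, r, d=2):
    """Sample a directed random geometric graph on [0,1]^d.

    Vertices follow a Poisson point process with intensity n * f(.),
    here with f uniform on the unit cube, so the number of points is
    Poisson(n). Each vertex sends a directed edge to every other vertex
    within distance r. Returns the out-degree of each vertex.
    """
    m = rng.poisson(n)                       # Poisson number of vertices
    pts = rng.random((m, d))                 # i.i.d. uniform locations
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)           # a vertex is not its own neighbour
    return (dist <= r).sum(axis=1)

# Probe the (log n / n)^(1/d) scale, d = 2: isolated nodes should be rare.
for n in [200, 800, 3200]:
    deg = sample_rgg_outdegrees(n, (np.log(n) / n) ** 0.5)
    print(n, "nodes with out-degree zero:", int((deg == 0).sum()))
```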

Relevance: 10.00%

Publisher:

Abstract:

In this paper we discuss $SU(N)$ Chern-Simons theories at level $k$ with both fermionic and bosonic vector matter. In particular, we present an exact calculation of the free energy of the $\mathcal{N} = 2$ supersymmetric model (with one chiral field) for all values of the 't Hooft coupling in the large-$N$ limit. This is done using a generalization of the standard Hubbard-Stratonovich method, because the SUSY model contains higher-order polynomial interactions.
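As background (this is the standard transform, not the generalization used in the paper): the Hubbard-Stratonovich trick trades a quartic interaction for an auxiliary field $\sigma$ coupled linearly to a bilinear $\mathcal{O}$ (e.g. $\bar\psi\psi$),

$$ \exp\!\left(\frac{g}{2}\,\mathcal{O}^2\right) \;=\; \frac{1}{\sqrt{2\pi g}} \int_{-\infty}^{\infty} d\sigma\, \exp\!\left(-\frac{\sigma^2}{2g} + \sigma\,\mathcal{O}\right), \qquad g > 0. $$

Interactions of higher polynomial order in $\mathcal{O}$ cannot be linearized by a single Gaussian integral of this form, which is why a generalization is needed.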

Relevance: 10.00%

Publisher:

Abstract:

A unit cube in $\mathbb{R}^k$ (or a $k$-cube, for short) is defined as the Cartesian product $R_1 \times R_2 \times \cdots \times R_k$, where each $R_i$ (for $1 \leq i \leq k$) is a closed interval of the form $[a_i, a_i + 1]$ on the real line. A $k$-cube representation of a graph $G$ is a mapping of the vertices of $G$ to $k$-cubes such that two vertices in $G$ are adjacent if and only if their corresponding $k$-cubes have a non-empty intersection. The cubicity of $G$ is the minimum $k$ such that $G$ has a $k$-cube representation. From a geometric embedding point of view, a $k$-cube representation of $G = (V, E)$ yields an embedding $f : V \to \mathbb{R}^k$ such that for any two vertices $u$ and $v$, $\|f(u) - f(v)\|_\infty \leq 1$ if and only if $(u, v) \in E$. We first present a randomized algorithm that constructs the cube representation of any graph on $n$ vertices with maximum degree $\Delta$ in $O(\Delta \ln n)$ dimensions. This algorithm is then derandomized to obtain a polynomial-time deterministic algorithm that also produces the cube representation of the input graph in the same number of dimensions. The bandwidth ordering of the graph is studied next, and it is shown that our algorithm can be improved to produce a cube representation of the input graph $G$ in $O(\Delta \ln b)$ dimensions, where $b$ is the bandwidth of $G$, given a bandwidth ordering of $G$. Note that $b \leq n$, and $b$ is much smaller than $n$ for many well-known graph classes. Another upper bound of $b + 1$ on the cubicity of any graph with bandwidth $b$ is also shown. Together, these results imply that for any graph $G$ with maximum degree $\Delta$ and bandwidth $b$, the cubicity is $O(\min\{b, \Delta \ln b\})$. The upper bound of $b + 1$ is used to derive upper bounds for the cubicity of circular-arc graphs, cocomparability graphs and AT-free graphs in terms of the maximum degree $\Delta$.
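To make the definition above concrete (illustrative code, not the paper's algorithm): since unit cubes intersect exactly when the max-norm distance of their lower corners is at most 1, a candidate $k$-cube representation can be verified pairwise.

```python
from itertools import combinations

def is_cube_representation(corners, edges):
    """Check whether a map from vertices to k-cube lower corners is a
    valid k-cube representation of the graph (V, edges).

    corners: dict vertex -> tuple of k floats (the a_i of [a_i, a_i+1]);
    edges: set of frozensets {u, v}. Two unit cubes intersect iff the
    L-infinity distance between their lower corners is at most 1.
    """
    for u, v in combinations(corners, 2):
        dist = max(abs(a - b) for a, b in zip(corners[u], corners[v]))
        if (dist <= 1) != (frozenset((u, v)) in edges):
            return False
    return True

# Example: the 4-cycle C_4 has cubicity 2; a valid 2-cube representation
# places the lower corners at the vertices of a diamond.
corners = {1: (0.0, 1.0), 2: (1.0, 0.0), 3: (2.0, 1.0), 4: (1.0, 2.0)}
edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
print(is_cube_representation(corners, edges))  # True
```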

Relevance: 10.00%

Publisher:

Abstract:

In this paper we study the problem of designing SVM classifiers when the kernel matrix $K$ is affected by uncertainty. Specifically, $K$ is modeled as a positive affine combination of given positive semidefinite kernels, with the coefficients ranging in a norm-bounded uncertainty set. We treat the problem using the Robust Optimization methodology. This reduces the uncertain SVM problem to a deterministic conic quadratic problem which can, in principle, be solved by a polynomial-time Interior Point (IP) algorithm. However, for large-scale classification problems, IP methods become intractable and one has to resort to first-order gradient-type methods. The strategy we use here is to reformulate the robust counterpart of the uncertain SVM problem as a saddle point problem and employ a special gradient scheme which works directly on the convex-concave saddle function. The algorithm is a simplified version of a general scheme due to Juditski and Nemirovski (2011). It achieves an $O(1/T^2)$ reduction of the initial error after $T$ iterations. A comprehensive empirical study on both synthetic data and real-world protein structure data sets shows that the proposed formulations achieve the desired robustness, and that the saddle point based algorithm significantly outperforms the IP method.
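A minimal sketch of the saddle-point view (not the paper's accelerated scheme): the SVM dual variables and the worst-case kernel coefficients play a max-min game, which plain projected gradient descent-ascent can approximate. The simplex uncertainty set, the dropped bias term, and the step sizes here are illustrative assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def robust_svm_saddle(Ks, y, C=1.0, steps=2000, lr=1e-2):
    """Projected gradient descent-ascent on the robust SVM dual
        max_alpha min_eta  1'alpha - 0.5 alpha' diag(y) K(eta) diag(y) alpha,
    K(eta) = sum_i eta_i K_i, eta on the simplex (a simple stand-in for a
    norm-bounded set). No bias term, so y'alpha = 0 is dropped.
    """
    m, n = len(Ks), Ks[0].shape[0]
    alpha, eta = np.zeros(n), np.full(m, 1.0 / m)
    Y = np.diag(y)
    YKYs = [Y @ K @ Y for K in Ks]
    for _ in range(steps):
        K_eta = sum(e * M for e, M in zip(eta, YKYs))
        # Ascent in alpha (maximize the dual); box projection onto [0, C].
        alpha = np.clip(alpha + lr * (1.0 - K_eta @ alpha), 0.0, C)
        # Descent in eta (adversarial kernel); simplex projection.
        g_eta = np.array([-0.5 * alpha @ M @ alpha for M in YKYs])
        eta = project_simplex(eta - lr * g_eta)
    return alpha, eta

# Toy usage: a linear and an RBF kernel on random data.
X = np.random.default_rng(2).normal(size=(40, 3))
Ks = [X @ X.T, np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))]
y = np.where(X[:, 0] > 0, 1.0, -1.0)
alpha, eta = robust_svm_saddle(Ks, y)
```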

Relevance: 10.00%

Publisher:

Abstract:

We consider a complex, additive, white Gaussian noise channel with flat fading and study its diversity order as a function of the transmission rate for some known power allocation schemes. The capacity region is divided into three regions. For one power allocation scheme, the diversity order is exponential throughout the capacity region. For the selective channel inversion (SCI) scheme, the diversity order is exponential in the low- and high-rate regions but polynomial in the mid-rate region. For the fast fading case, we also provide a new upper bound on the block error probability and a power allocation scheme that minimizes it. The diversity order behaviour of this scheme is the same as that of SCI, but it provides a lower BER than the other policies.
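For reference, the diversity order quoted above is the standard high-SNR slope of the error probability, here made rate-dependent:

$$ d(R) \;=\; -\lim_{\mathrm{SNR} \to \infty} \frac{\log P_e(R, \mathrm{SNR})}{\log \mathrm{SNR}}, $$

so "polynomial" behaviour means $P_e$ decays like a finite power of SNR, while "exponential" diversity means $P_e$ falls faster than any polynomial in SNR.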

Relevance: 10.00%

Publisher:

Abstract:

A low-complexity, essentially-ML decoding technique for the Golden code and the three-antenna Perfect code was introduced by Sirianunpiboon, Howard and Calderbank. Though no theoretical analysis of the decoder was given, simulations showed that this decoding technique has almost maximum-likelihood (ML) performance. Inspired by this technique, in this paper we introduce two new low-complexity decoders for Space-Time Block Codes (STBCs): the Adaptive Conditional Zero-Forcing (ACZF) decoder and the ACZF decoder with successive interference cancellation (ACZF-SIC), which include the decoding technique of Sirianunpiboon et al. as a special case. We show that both ACZF and ACZF-SIC decoders are capable of achieving full diversity, and we give a set of sufficient conditions for an STBC to give full diversity with these decoders. We then show that the Golden code, the three- and four-antenna Perfect codes, the three-antenna Threaded Algebraic Space-Time code and the four-antenna rate-2 code of Srinath and Rajan are all full-diversity ACZF/ACZF-SIC decodable with complexity strictly less than that of their ML decoders. Simulations show that the proposed decoding method performs identically to ML decoding for all these five codes. These STBCs, together with the proposed decoding algorithm, have the least decoding complexity and the best error performance among known codes for the corresponding numbers of transmit antennas. We further provide a lower bound on the complexity of full-diversity ACZF/ACZF-SIC decoding. All five codes listed above achieve this lower bound and hence are optimal in terms of minimizing the ACZF/ACZF-SIC decoding complexity. Both ACZF and ACZF-SIC decoders are amenable to sphere decoding implementation.
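For background (standard linear detection, not the ACZF decoder itself): given the received signal model $y = Hx + n$, the plain zero-forcing estimate inverts the effective channel,

$$ \hat{x}_{\mathrm{ZF}} \;=\; (H^{\mathsf{H}} H)^{-1} H^{\mathsf{H}}\, y, $$

and, roughly speaking, conditional variants apply such an inversion only after conditioning on a small enumerated subset of the symbols, which is what keeps the complexity below that of full ML decoding.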

Relevance: 10.00%

Publisher:

Abstract:

Savitzky-Golay (S-G) filters are finite impulse response lowpass filters obtained while smoothing data using a local least-squares (LS) polynomial approximation. Savitzky and Golay proved in their hallmark paper that local LS fitting of polynomials and their evaluation at the mid-point of the approximation interval is equivalent to filtering with a fixed impulse response. The problem that we address here is how to choose a pointwise minimum mean squared error (MMSE) S-G filter length or order for smoothing, while preserving the temporal structure of a time-varying signal. We solve the bias-variance tradeoff involved in the MMSE optimization using Stein's unbiased risk estimator (SURE). We observe that the 3-dB cutoff frequency of the SURE-optimal S-G filter is higher where the signal varies rapidly, and vice versa, essentially enabling us to suitably trade off bias and variance, thereby resulting in near-MMSE performance. At low signal-to-noise ratios (SNRs), the performance of the adaptive filter length algorithm improves when a regularization term is incorporated in the SURE objective function. We evaluate the algorithms on real-world electrocardiogram (ECG) signals, and the results exhibit considerable SNR improvement. Noise performance analysis shows that the proposed algorithms are comparable to, and in some cases better than, some standard denoising techniques available in the literature.
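A minimal sketch of the idea (illustrative, not the paper's pointwise-adaptive algorithm): for a linear smoother $\hat{x} = Hy$ with known noise variance $\sigma^2$, SURE gives an unbiased estimate of the MSE, which can be minimized over the S-G window length. For convolution with an S-G kernel, $\mathrm{tr}(H)$ equals $N$ times the center tap (boundary handling ignored).

```python
import numpy as np
from scipy.signal import savgol_coeffs, savgol_filter

def sure_sg_window(y, sigma, polyorder=2, windows=range(5, 51, 2)):
    """Pick the S-G window length minimizing SURE for a fixed polyorder.

    For x_hat = H y with i.i.d. Gaussian noise of variance sigma^2,
    SURE = ||y - x_hat||^2 / N + 2 sigma^2 tr(H)/N - sigma^2 is an
    unbiased estimate of the per-sample MSE; tr(H)/N is the center tap.
    """
    best_w, best_risk = None, np.inf
    for w in windows:                        # odd lengths > polyorder
        x_hat = savgol_filter(y, w, polyorder)
        h0 = savgol_coeffs(w, polyorder)[w // 2]   # center tap of kernel
        risk = np.mean((y - x_hat) ** 2) + 2 * sigma**2 * h0 - sigma**2
        if risk < best_risk:
            best_w, best_risk = w, risk
    return best_w

# Noisy sine example (sigma assumed known or pre-estimated):
t = np.linspace(0, 1, 500)
y = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(1).normal(size=500)
print("SURE-selected window:", sure_sg_window(y, sigma=0.1))
```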

Relevance: 10.00%

Publisher:

Abstract:

Boxicity of a graph $G(V, E)$ is the minimum integer $k$ such that $G$ can be represented as the intersection graph of $k$-dimensional axis-parallel boxes in $\mathbb{R}^k$. Equivalently, it is the minimum number of interval graphs on the vertex set $V$ such that the intersection of their edge sets is $E$. It is known that boxicity cannot be approximated in polynomial time within an $O(n^{0.5-\epsilon})$ factor for any $\epsilon > 0$, even for graph classes like bipartite, co-bipartite and split graphs, unless NP = ZPP. To date, there is no well-known graph class of unbounded boxicity for which even an $n^{\epsilon}$-factor approximation algorithm for computing boxicity is known, for any $\epsilon < 1$. In this paper, we study the boxicity problem on circular-arc graphs: intersection graphs of arcs of a circle. We give a $(2 + 1/k)$-factor polynomial-time approximation algorithm for computing the boxicity of any circular-arc graph, along with a corresponding box representation, where $k \geq 1$ is its boxicity. For Normal Circular-Arc (NCA) graphs, with an NCA model given, this can be improved to an additive 2-factor approximation algorithm. In both cases, the time complexity of approximately computing the boxicity is $O(mn + n^2)$, and in $O(mn + kn^2)$ time, which is at most $O(n^3)$, we also obtain the corresponding box representations, where $n$ is the number of vertices of the graph and $m$ its number of edges. The additive 2-factor algorithm works directly for any proper circular-arc graph, since an NCA model for it can be computed in polynomial time.
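To make the first definition concrete (illustrative code only, not the paper's algorithm): a box representation assigns each vertex an axis-parallel box, and adjacency must coincide with box intersection; equivalently, each dimension induces an interval graph and $E$ is the intersection of their edge sets.

```python
from itertools import combinations

def is_box_representation(boxes, edges):
    """Check a k-dimensional axis-parallel box representation of a graph.

    boxes: dict vertex -> list of k (lo, hi) interval pairs;
    edges: set of frozensets {u, v}. Two boxes intersect iff their
    intervals overlap in every dimension, i.e. every one of the k
    interval graphs contains the edge.
    """
    for u, v in combinations(boxes, 2):
        intersects = all(lo1 <= hi2 and lo2 <= hi1
                         for (lo1, hi1), (lo2, hi2) in zip(boxes[u], boxes[v]))
        if intersects != (frozenset((u, v)) in edges):
            return False
    return True

# C_4 again: boxicity 2, realized by two interval graphs (one per axis).
boxes = {1: [(0, 1), (1, 2)], 2: [(1, 2), (0, 1)],
         3: [(2, 3), (1, 2)], 4: [(1, 2), (2, 3)]}
edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
print(is_box_representation(boxes, edges))  # True
```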

Relevance: 10.00%

Publisher:

Abstract:

We address the classical problem of delta feature computation and interpret the operation involved in terms of Savitzky-Golay (SG) filtering. Features such as the mel-frequency cepstral coefficients (MFCCs), obtained from short-time spectra of the speech signal, are commonly used in speech recognition tasks. In order to incorporate the dynamics of speech, auxiliary delta and delta-delta features, computed as temporal derivatives of the original features, are used. Typically, the delta features are computed in a smooth fashion using local least-squares (LS) polynomial fitting on each feature vector component trajectory. In the light of the original work of Savitzky and Golay, and a recent article by Schafer in IEEE Signal Processing Magazine, we interpret the dynamic feature vector computation for arbitrary derivative orders as SG filtering with a fixed impulse response. This filtering equivalence brings significantly lower latency with no loss in accuracy, as validated by results on a TIMIT phoneme recognition task. The SG filters involved in dynamic parameter computation can also be viewed as modulation filters, as proposed by Hermansky.
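A sketch of the equivalence (using the standard LS delta formula; the window length is an illustrative choice): the usual least-squares delta coefficients are exactly the impulse response of a first-order SG derivative filter, so the deltas reduce to a single convolution per feature trajectory.

```python
import numpy as np
from scipy.signal import savgol_coeffs

N = 2  # half-window; a typical delta window of 2N + 1 = 5 frames

# Standard LS slope (delta) kernel:
#   d_t = sum_{n=1..N} n (c_{t+n} - c_{t-n}) / (2 sum_{n=1..N} n^2)
n = np.arange(-N, N + 1)
delta_kernel = n / np.sum(n ** 2)                    # correlation (time) order

# SG first-derivative filter on the same window, polynomial order 1.
sg = savgol_coeffs(2 * N + 1, polyorder=1, deriv=1)  # convolution order
print(np.allclose(delta_kernel, sg[::-1]))           # True: same coefficients

# Apply to one feature trajectory c[t] (e.g., a single MFCC component):
c = np.random.default_rng(0).normal(size=100)
delta = np.convolve(c, sg, mode='same')              # fixed impulse response
```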

Relevance: 10.00%

Publisher:

Abstract:

Recently, Ebrahimi and Fragouli proposed an algorithm to construct scalar network codes using small fields (and vector network codes of small lengths) satisfying multicast constraints in a given single-source, acyclic network. The contribution of this paper is twofold. Primarily, we extend the scalar network coding algorithm of Ebrahimi and Fragouli (henceforth referred to as the EF algorithm) to block network-error correction. Existing construction algorithms for block network-error correcting codes require a rather large field size, which grows with the size of the network and the number of sinks, and can thereby be prohibitive in large networks. We give an algorithm which, starting from a given network-error correcting code, obtains another network code using a small field, with the same error correcting capability as the original code. Our secondary contribution is to improve the EF algorithm itself. The major step in the EF algorithm is to find a least-degree irreducible polynomial which is coprime to another large-degree polynomial. We suggest an alternate method to compute this coprime polynomial, which is faster than the brute-force method in the work of Ebrahimi and Fragouli.
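A self-contained sketch of the brute-force step mentioned above (illustrative only; GF(2) polynomials are encoded as Python integer bitmasks): enumerate polynomials in order of degree and return the first one that is irreducible and coprime to the given polynomial.

```python
def deg(p):
    """Degree of a GF(2) polynomial encoded as an int bitmask."""
    return p.bit_length() - 1

def gf2_mod(a, b):
    """Remainder of a modulo b over GF(2) (XOR long division)."""
    while deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_irreducible(p):
    """Trial division by all polynomials of degree 1 .. deg(p) // 2."""
    return all(gf2_mod(p, q) != 0 for q in range(2, 1 << (deg(p) // 2 + 1)))

def least_coprime_irreducible(f):
    """Smallest-degree irreducible polynomial over GF(2) coprime to f."""
    d = 1
    while True:
        for p in range(1 << d, 1 << (d + 1)):   # all degree-d polynomials
            if is_irreducible(p) and deg(gf2_gcd(p, f)) == 0:
                return p
        d += 1

# Example: f = x^3 + x = x (x+1)^2 (0b1010); both degree-1 irreducibles
# divide f, so the answer is x^2 + x + 1 (0b111).
print(bin(least_coprime_irreducible(0b1010)))  # 0b111
```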

Relevance: 10.00%

Publisher:

Abstract:

Two-dimensional (2D) sheets are currently in the spotlight of nanotechnology owing to the possibilities they open for high-performance device fabrication. Building a free-standing quantum sheet with controlled morphology is challenging when large planar geometry and ultranarrow thickness are required simultaneously. Coalescence of nanowires into large single-crystalline sheets is a promising approach, leading to large, molecularly thick 2D sheets with controlled planar morphology. Here we report a bottom-up approach to fabricating high-quality ultrathin 2D single-crystalline sheets with well-defined rectangular morphology via collective coalescence of PbS nanowires. The ultrathin sheets are strictly rectangular, with 1.8 nm thickness, 200-250 nm width, and 3-20 μm length. Upon device fabrication, the sheets show high electrical conductivity at room and cryogenic temperatures. Density functional theory (DFT) calculations reveal that a single row of delocalized orbitals of a nanowire is gradually converted into several parallel conduction channels upon sheet formation, which enables superior in-plane carrier conduction.

Relevance: 10.00%

Publisher:

Abstract:

In this paper, we present a fast learning neural network classifier for human action recognition. The proposed classifier is a fully complex-valued neural network with a single hidden layer. The neurons in the hidden layer employ the fully complex-valued hyperbolic secant as the activation function. The parameters of the hidden layer are chosen randomly, and the output weights are estimated analytically as the minimum-norm least-squares solution to a set of linear equations. The fast learning, fully complex-valued neural classifier is used for recognizing human actions accurately. Optical flow-based features extracted from the video sequences are used to recognize 10 different human actions. The feature vectors are computationally simple first-order statistics of the optical flow vectors, obtained from coarse-to-fine rectangular patches centered around the object. The results indicate the superior performance of the complex-valued neural classifier for action recognition. This superior performance stems from the fact that motion, by nature, consists of two components, one along each of the axes.
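A minimal sketch of this type of classifier (random hidden layer, analytic least-squares output weights; the dimensions, class count, and toy data below are placeholders, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def csech(z):
    """Fully complex-valued hyperbolic secant activation."""
    return 1.0 / np.cosh(z)

class ComplexELMClassifier:
    """Single-hidden-layer, fully complex-valued network: random hidden
    parameters, output weights as the minimum-norm LS solution (pinv)."""

    def __init__(self, n_in, n_hidden, n_classes):
        self.W = rng.normal(size=(n_in, n_hidden)) \
                 + 1j * rng.normal(size=(n_in, n_hidden))
        self.b = rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden)
        self.n_classes = n_classes
        self.V = None

    def fit(self, X, y):
        H = csech(X @ self.W + self.b)         # hidden-layer responses
        T = np.eye(self.n_classes)[y]          # one-hot targets
        self.V = np.linalg.pinv(H) @ T         # analytic min-norm LS solution
        return self

    def predict(self, X):
        scores = csech(X @ self.W + self.b) @ self.V
        return np.argmax(scores.real, axis=1)  # decide on the real parts

# Toy usage with complex-valued features (e.g., encoded optical flow):
X = rng.normal(size=(200, 8)) + 1j * rng.normal(size=(200, 8))
y = rng.integers(0, 10, size=200)              # 10 action classes
clf = ComplexELMClassifier(8, 64, 10).fit(X, y)
print((clf.predict(X) == y).mean())            # training accuracy
```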

Relevance: 10.00%

Publisher:

Abstract:

We report a comprehensive analysis of the renormalization of noncommutative $\phi^4$ scalar field theories on the Groenewold-Moyal plane. These scalar field theories are twisted Poincaré invariant. Our main results are that these theories are renormalizable, free of UV/IR mixing, and possess the same fixed points and beta functions for the couplings as their commutative counterparts. We also argue that similar results hold for any generic noncommutative field theory with polynomial interactions involving only pure matter fields. A secondary aim of this work is to provide a comprehensive review of different approaches to the computation of the noncommutative S-matrix: the noncommutative interaction picture and the noncommutative Lehmann-Symanzik-Zimmermann formalism. DOI: 10.1103/PhysRevD.87.064014
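For reference, field theories on the Groenewold-Moyal plane replace the pointwise product by the star product (standard definition; $\theta^{\mu\nu}$ is the constant antisymmetric noncommutativity matrix):

$$ (f \star g)(x) \;=\; f(x)\, \exp\!\left( \frac{i}{2}\, \overleftarrow{\partial}_\mu\, \theta^{\mu\nu}\, \overrightarrow{\partial}_\nu \right) g(x), \qquad [x^\mu \stackrel{\star}{,} x^\nu] = i\,\theta^{\mu\nu}, $$

so a $\phi^4$ interaction becomes $\phi \star \phi \star \phi \star \phi$, which is the source of the UV/IR mixing issue addressed above.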