972 results for Strictly hyperbolic polynomial


Relevance:

10.00%

Publisher:

Abstract:

A unit cube in k dimensions (k-cube) is defined as the Cartesian product $R_1 \times R_2 \times \cdots \times R_k$, where each $R_i$ (for $1 \le i \le k$) is a closed interval of the form $[a_i, a_i + 1]$ on the real line. A graph G on n nodes is said to be representable as the intersection of k-cubes (a cube representation in k dimensions) if each vertex of G can be mapped to a k-cube such that two vertices are adjacent in G if and only if their corresponding k-cubes have a non-empty intersection. The cubicity of G, denoted cub(G), is the minimum k for which G can be represented as the intersection of k-cubes. An interesting aspect of cubicity is that many problems known to be NP-complete for general graphs have polynomial-time deterministic algorithms or good approximation ratios on graphs of low cubicity, and in most of these algorithms computing a low-dimensional cube representation of the given graph is the first step. We give an $O(bw \cdot n)$ algorithm to compute a cube representation of a general graph G in bw + 1 dimensions, given a bandwidth ordering of the vertices of G, where bw is the bandwidth of G. As a consequence, we get $O(\Delta)$ upper bounds on the cubicity of many well-known graph classes with $O(\Delta)$ bandwidth, such as AT-free graphs, circular-arc graphs and cocomparability graphs. Thus we have: (1) $cub(G) \le 3\Delta - 1$ if G is an AT-free graph; (2) $cub(G) \le 2\Delta + 1$ if G is a circular-arc graph; (3) $cub(G) \le 2\Delta$ if G is a cocomparability graph. For these graph classes there are also constant-factor approximation algorithms for bandwidth computation that generate orderings of the vertices with $O(\Delta)$ width, so we can generate a cube representation of such graphs in $O(\Delta)$ dimensions in polynomial time.
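
The following minimal Python sketch only illustrates the cube-representation definition used above (checking that adjacency coincides with cube intersection); it is not the paper's $O(bw \cdot n)$ construction, and the example graph and cubes are hypothetical.

# A minimal sketch of the cube-representation definition from the abstract
# (illustrative only; not the paper's bandwidth-based construction).
# A k-cube is given by the left endpoints of k unit intervals [a_i, a_i + 1].

def cubes_intersect(c1, c2):
    # Two axis-parallel unit cubes intersect iff their intervals overlap in every dimension.
    return all(a1 <= a2 + 1 and a2 <= a1 + 1 for a1, a2 in zip(c1, c2))

def is_cube_representation(n, edges, cubes):
    # cubes[v] lists the left endpoints a_1, ..., a_k of vertex v's k-cube.
    edge_set = {frozenset(e) for e in edges}
    for u in range(n):
        for v in range(u + 1, n):
            adjacent = frozenset((u, v)) in edge_set
            if adjacent != cubes_intersect(cubes[u], cubes[v]):
                return False
    return True

# Example: a path on 3 vertices represented with 1-cubes (unit intervals).
print(is_cube_representation(3, [(0, 1), (1, 2)], [[0.0], [0.9], [1.8]]))  # True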

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we propose a new security metric for measuring the resilience of a symmetric key distribution scheme in wireless sensor networks. A polynomial-based scheme and a novel complete-connectivity scheme are proposed, and an analytical comparison between the two schemes, in terms of security and connectivity, is presented. Motivated by the schemes, we derive general expressions for security and connectivity, and a number of conclusions are drawn from these general expressions.
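
As a purely illustrative example of what a polynomial-based key distribution scheme can look like, the sketch below implements the classical symmetric bivariate-polynomial key predistribution idea; the abstract does not describe its own scheme in detail, so the field size, threshold and node identifiers here are assumptions.

# Illustrative sketch of symmetric-bivariate-polynomial key predistribution
# (generic example of a "polynomial-based" approach, not the paper's scheme).
import random

P = (1 << 61) - 1   # prime modulus (assumed field size, illustrative)
T = 3               # polynomial degree / collusion threshold (assumed)

def setup(t=T):
    # Symmetric coefficient matrix c[i][j] = c[j][i] defines f(x, y).
    c = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            c[i][j] = c[j][i] = random.randrange(P)
    return c

def share(c, node_id):
    # A node's share is the univariate polynomial g(y) = f(node_id, y).
    t = len(c) - 1
    return [sum(c[i][j] * pow(node_id, i, P) for i in range(t + 1)) % P
            for j in range(t + 1)]

def pairwise_key(my_share, peer_id):
    # Evaluate g(peer_id); by symmetry of f both nodes derive the same key.
    return sum(coef * pow(peer_id, j, P) for j, coef in enumerate(my_share)) % P

c = setup()
assert pairwise_key(share(c, 17), 42) == pairwise_key(share(c, 42), 17)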

Relevance:

10.00%

Publisher:

Abstract:

A common trick for designing faster quantum adiabatic algorithms is to apply the adiabaticity condition locally at every instant. However, it is often difficult to determine the instantaneous gap between the lowest two eigenvalues, which is an essential ingredient in the adiabaticity condition. In this paper we present a simple linear-algebraic technique for obtaining a lower bound on the instantaneous gap even in such a situation. As an illustration, we investigate the adiabatic unordered search of van Dam et al. [17] and Roland and Cerf [15] when the non-zero entries of the diagonal final Hamiltonian are perturbed by an amount polynomial in log N, where N is the length of the unordered list. We use our technique to derive a bound on the running time of a local adiabatic schedule in terms of the minimum gap between the lowest two eigenvalues.
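
For readers who want to see the quantities involved, the sketch below numerically computes the instantaneous gap for the standard (unperturbed) adiabatic-search interpolation between the uniform-superposition projector Hamiltonian and the marked-item projector Hamiltonian; this is an assumed textbook setup, not the paper's perturbed final Hamiltonian or its analytic lower-bound technique.

# Minimal numerical sketch: instantaneous gap of H(s) = (1 - s) H_B + s H_F,
# with H_B = I - |psi><psi| (uniform superposition) and H_F = I - |m><m|.
import numpy as np

def instantaneous_gap(N=16, marked=3, num_points=101):
    psi = np.full(N, 1.0 / np.sqrt(N))
    H_B = np.eye(N) - np.outer(psi, psi)
    m = np.zeros(N); m[marked] = 1.0
    H_F = np.eye(N) - np.outer(m, m)
    gaps = []
    for s in np.linspace(0.0, 1.0, num_points):
        evals = np.linalg.eigvalsh((1 - s) * H_B + s * H_F)  # sorted ascending
        gaps.append(evals[1] - evals[0])
    return min(gaps)

# For this standard schedule the minimum gap scales like 1/sqrt(N).
print(instantaneous_gap())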

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we exploit the idea of decomposition to match buyers and sellers in an electronic exchange for trading large volumes of homogeneous goods, where the buyers and sellers specify marginal-decreasing piecewise-constant price curves to capture volume discounts. Such exchanges are relevant for automated trading in many e-business applications. The problem of determining winners and Vickrey prices in such exchanges is known to have a worst-case complexity equal to that of as many as (1 + m + n) NP-hard problems, where m is the number of buyers and n is the number of sellers. Our method decomposes the overall exchange problem into two separate and simpler problems, a forward auction and a reverse auction, which turn out to be generalized knapsack problems. In the proposed approach, we first determine the quantity of units to be traded between the sellers and the buyers using fast heuristics developed by us. Next, we solve the forward auction and the reverse auction using fully polynomial time approximation schemes available in the literature. The proposed approach has worst-case polynomial time complexity, and our experimentation shows that it produces good-quality solutions to the problem. Note to Practitioners: In recent times, electronic marketplaces have provided an efficient way for businesses and consumers to trade goods and services. The use of innovative mechanisms and algorithms has made it possible to improve the efficiency of electronic marketplaces by enabling optimization of revenues for the marketplace and of utilities for the buyers and sellers. In this paper, we look at single-item, multiunit electronic exchanges. These are electronic marketplaces where buyers submit bids and sellers submit asks for multiple units of a single item. We allow buyers and sellers to specify volume discounts using suitable functions. Such exchanges are relevant for high-volume business-to-business trading of standard products, such as silicon wafers, very large-scale integrated chips, desktops, telecommunications equipment and commoditized goods. The problem of determining winners and prices in such exchanges is known to involve solving many NP-hard problems. Our paper exploits the familiar idea of decomposition, uses certain algorithms from the literature, and develops two fast heuristics to solve the problem in a near-optimal way in worst-case polynomial time.
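
The sketch below is only a toy illustration of marginal-decreasing piecewise-constant price curves and of a naive quantity-matching step; it is not the paper's heuristic, and the curve format and greedy rule are assumptions made for the example.

# Illustrative only: a piecewise-constant marginal price curve and a naive
# greedy quantity-matching step (the paper's heuristics and the knapsack
# FPTAS steps are not reproduced here).

def marginal_price(curve, q):
    # curve: list of (quantity_limit, unit_price) steps with decreasing prices,
    # e.g. [(10, 5.0), (30, 4.0), (100, 3.0)]. Returns the price of the q-th unit.
    for limit, price in curve:
        if q <= limit:
            return price
    return None  # beyond the offered/demanded quantity

def greedy_match(buyer_curve, seller_curve, max_units):
    # Trade one more unit as long as the buyer's marginal value for the next
    # unit is at least the seller's marginal cost for it.
    traded = 0
    while traded < max_units:
        bid = marginal_price(buyer_curve, traded + 1)
        ask = marginal_price(seller_curve, traded + 1)
        if bid is None or ask is None or bid < ask:
            break
        traded += 1
    return traded

buyer = [(10, 5.0), (30, 4.0), (100, 3.0)]
seller = [(20, 2.5), (60, 3.5), (100, 4.5)]
print(greedy_match(buyer, seller, 100))  # 30 units traded under this toy rule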

Relevance:

10.00%

Publisher:

Abstract:

Carotid atherosclerotic disease is a major cause of stroke, but it may remain clinically asymptomatic. The factors that turn an asymptomatic plaque into a symptomatic one are not fully understood, nor are the subtle effects that a high-grade carotid stenosis may have on the brain. The purpose of this study was to evaluate brain microcirculation, diffusion, and cognitive performance in patients with a high-grade stenosis of the carotid artery, clinically either symptomatic or asymptomatic, undergoing carotid endarterectomy (CEA). We wanted to find out whether the stenoses are associated with diffusion or perfusion abnormalities of the brain or with variation in the cognitive functioning of the patients, to what extent the potential findings are affected by CEA, and how the clinically symptomatic and asymptomatic subjects compare with each other and with strictly healthy controls. Coagulation and fibrinolytic parameters were compared with the rate of microembolic signals (MES) in transcranial Doppler (TCD) and with the macroscopic appearance of the stenosing plaques at surgery. Patients (n=92) underwent CEA within the study. Blood samples pertaining to coagulation and fibrinolysis were collected before CEA, and the subjects underwent repeated TCD monitoring for MES. A subpopulation (n=46) underwent MR imaging and repeated neuropsychological examination (preoperatively, as well as 4 and 100 days after CEA). In MRI, the average apparent diffusion coefficients were higher in the ipsilateral white matter (WM), and although the interhemispheric difference was abolished by CEA, the levels remained higher than in controls. Symptomatic stenoses were associated with more sluggish perfusion, especially in WM, and lower pulsatility of flow in TCD. All patients had poorer cognitive performance than healthy controls. Cognitive functions improved as expected by learning effect, despite transient postoperative worsening in a few subjects. Improvement was greater in patients with the deepest hypoperfusion, primarily in executive functions. Symptomatic stenoses were associated with higher hematocrit and tissue plasminogen activator antigen levels, a higher rate of MES and ulcerated plaques, and better postoperative improvement of vasoreactivity and pulsatility. In light of the findings, carotid stenosis is associated with differences in brain diffusion, perfusion, and cognition. The effect on diffusion in the ipsilateral WM, partially reversible by CEA, may be associated with WM degeneration. Asymptomatic and symptomatic subpopulations differ from each other in terms of hemodynamic adaptation and in their vascular physiological response to removal of the stenosis. Although CEA may be associated with a transient cognitive decline, a true improvement of cognitive performance by CEA is possible in patients with the most pronounced perfusion deficits. Mediators of fibrinolysis and unfavourable hemorheology may contribute to the development of symptomatic disease in patients with a high-grade stenosis.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we propose a novel family of kernels for multivariate time-series classification problems. Each time-series is approximated by a linear combination of piecewise polynomial functions in a Reproducing Kernel Hilbert Space using a novel kernel interpolation technique. Using the associated kernel function, a large-margin classification formulation is proposed that can discriminate between two classes. The formulation leads to kernels between two multivariate time-series that can be computed efficiently. The kernels have been successfully applied to writer-independent handwritten character recognition.
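
As a rough, hypothetical illustration of the general idea (not the paper's RKHS kernel-interpolation construction), one can approximate each channel of a time-series by piecewise polynomials and compare two series through an inner product of the fitted coefficients:

# Purely illustrative: piecewise-polynomial features for a multivariate
# time-series and a simple linear kernel on the coefficient vectors.
import numpy as np

def piecewise_poly_features(series, num_segments=4, degree=2):
    # series: array of shape (length, channels)
    series = np.asarray(series, dtype=float)
    feats = []
    for channel in series.T:
        for seg in np.array_split(channel, num_segments):
            t = np.linspace(0.0, 1.0, len(seg))
            feats.extend(np.polyfit(t, seg, degree))  # per-segment polynomial fit
    return np.array(feats)

def ts_kernel(x, y, **kw):
    # Inner product of the piecewise-polynomial coefficient vectors.
    return float(piecewise_poly_features(x, **kw) @ piecewise_poly_features(y, **kw))

a = np.cumsum(np.random.randn(64, 3), axis=0)
b = np.cumsum(np.random.randn(64, 3), axis=0)
print(ts_kernel(a, b), ts_kernel(a, a))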

Relevance:

10.00%

Publisher:

Abstract:

The projection construction has been used to construct semifields of odd characteristic from a field and a twisted semifield [Commutative semifields from projection mappings, Designs, Codes and Cryptography, 61 (2011), 187–196]. We generalize this idea to a projection construction that uses two twisted semifields to construct semifields of odd characteristic. Planar functions and semifields have a strong connection, so this also yields new planar functions.
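
For context, the standard definition of a planar function on a finite field of odd order, which underlies the connection mentioned above, can be recalled as follows (general background, not taken from the abstract):

\[
f : \mathbb{F}_q \to \mathbb{F}_q \ (q \text{ odd}) \text{ is planar}
\iff
x \mapsto f(x+a) - f(x) \text{ is a bijection of } \mathbb{F}_q \text{ for every } a \in \mathbb{F}_q^{\ast}.
\]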

Relevance:

10.00%

Publisher:

Abstract:

Let G = (V,E) be a simple, finite, undirected graph. For S ⊆ V, let $\delta(S,G) = \{ (u,v) \in E : u \in S \mbox{ and } v \in V-S \}$ and $\phi(S,G) = \{ v \in V-S : \exists u \in S \mbox{ such that } (u,v) \in E \}$ be the edge and vertex boundary of S, respectively. Given an integer i, 1 ≤ i ≤ |V|, the edge and vertex isoperimetric values at i are defined as $b_e(i,G) = \min_{S \subseteq V,\, |S| = i} |\delta(S,G)|$ and $b_v(i,G) = \min_{S \subseteq V,\, |S| = i} |\phi(S,G)|$, respectively. The edge (vertex) isoperimetric problem is to determine the value of $b_e(i,G)$ ($b_v(i,G)$) for each i, 1 ≤ i ≤ |V|. If we add the further restriction that the set S should induce a connected subgraph of G, then the corresponding variation of the isoperimetric problem is known as the connected isoperimetric problem, and the connected edge (vertex) isoperimetric values are defined correspondingly. It turns out that the connected edge and connected vertex isoperimetric values are equal at each i, 1 ≤ i ≤ |V|, if G is a tree; we therefore use the notation $b_c(i,T)$ to denote the connected edge (vertex) isoperimetric value of a tree T at i. Hofstadter introduced the interesting concept of meta-Fibonacci sequences in his famous book “Gödel, Escher, Bach: An Eternal Golden Braid”. The sequence he introduced is known as the Hofstadter sequence, and most of the problems he raised regarding it are still open. Since then, mathematicians have studied many other closely related meta-Fibonacci sequences, such as the Tanny, Conway and Conolly sequences. Let $T_2$ be the infinite complete binary tree. In this paper we relate the connected isoperimetric problem on $T_2$ to the Tanny sequence, which is defined by the recurrence relation a(i) = a(i − 1 − a(i − 1)) + a(i − 2 − a(i − 2)), with a(0) = a(1) = a(2) = 1. In particular, we show that $b_c(i, T_2) = i + 2 − 2a(i)$ for each i ≥ 1. We also propose efficient polynomial-time algorithms to find the vertex isoperimetric values at i of bounded-pathwidth and bounded-treewidth graphs.
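
The stated formula is easy to evaluate directly; the short Python sketch below computes the Tanny sequence from its recurrence and then $b_c(i, T_2) = i + 2 - 2a(i)$ exactly as given in the abstract.

# Direct illustration of the formula stated in the abstract:
# b_c(i, T_2) = i + 2 - 2 a(i), where a is the Tanny sequence
# a(i) = a(i-1-a(i-1)) + a(i-2-a(i-2)), a(0) = a(1) = a(2) = 1.

def tanny(n):
    a = [1, 1, 1]
    for i in range(3, n + 1):
        a.append(a[i - 1 - a[i - 1]] + a[i - 2 - a[i - 2]])
    return a

def b_c(i):
    # Connected isoperimetric value of the infinite complete binary tree at i,
    # per the abstract's formula.
    return i + 2 - 2 * tanny(i)[i]

print([b_c(i) for i in range(1, 16)])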

Relevance:

10.00%

Publisher:

Abstract:

In this paper we first describe a framework to model the sponsored search auction on the web as a mechanism design problem. Using this framework, we design a novel auction which we call the OPT (optimal) auction. The OPT mechanism maximizes the search engine's expected revenue while achieving Bayesian incentive compatibility and individual rationality of the advertisers. We show that the OPT mechanism is superior to two of the most commonly used mechanisms for sponsored search, namely (1) GSP (Generalized Second Price) and (2) VCG (Vickrey-Clarke-Groves). We then show an important revenue-equivalence result: the expected revenue earned by the search engine is the same for all three mechanisms, provided the advertisers are symmetric and the number of sponsored slots is strictly less than the number of advertisers.
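
For concreteness, the sketch below computes payments under the two baseline mechanisms named above (GSP and VCG) for a slot auction with given click-through rates, using their standard textbook payment rules; the paper's OPT mechanism is not reproduced, and the bid and click-through-rate values are made up for the example.

# Hedged illustration of the GSP and VCG payment rules for a slot auction.

def gsp_payments(bids, ctrs):
    # bids: per-click bids sorted in decreasing order; ctrs: slot click rates.
    # Under GSP the bidder in slot i pays the next-highest bid per click.
    k = len(ctrs)
    return [bids[i + 1] if i + 1 < len(bids) else 0.0 for i in range(k)]

def vcg_payments(bids, ctrs):
    # Standard VCG payments for slot auctions, expressed per click: the total
    # payment of slot i is sum over j >= i of b_{j+1} * (ctr_j - ctr_{j+1}).
    k = len(ctrs)
    ctrs_ext = list(ctrs) + [0.0]
    per_click = []
    for i in range(k):
        total = sum((bids[j + 1] if j + 1 < len(bids) else 0.0)
                    * (ctrs_ext[j] - ctrs_ext[j + 1]) for j in range(i, k))
        per_click.append(total / ctrs[i])
    return per_click

bids = [10.0, 8.0, 5.0, 2.0]   # sorted per-click bids (illustrative)
ctrs = [0.30, 0.20, 0.10]      # slot click-through rates (illustrative)
print(gsp_payments(bids, ctrs))  # [8.0, 5.0, 2.0]
print(vcg_payments(bids, ctrs))  # [5.0, 3.5, 2.0], never above GSP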

Relevance:

10.00%

Publisher:

Abstract:

The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder's location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm, and the average-case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
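
The second approach lends itself to a very small illustration: each sensor applies a two-level (threshold) quantizer to its reading and the fusion center applies a counting rule. The threshold, vote count and readings below are illustrative assumptions, not values from the paper.

# Hedged sketch of one-bit threshold quantization followed by a counting
# (k-out-of-n) fusion rule at a local fusion center.

def sensor_bit(reading, threshold=1.0):
    # Two-level quantization of a single sensor reading.
    return 1 if reading >= threshold else 0

def fusion_center(bits, min_votes=2):
    # Declare an intruder if at least min_votes sensors reported a 1.
    return "intruder" if sum(bits) >= min_votes else "clutter"

readings = [0.2, 1.4, 1.1, 0.3, 0.9]       # noisy sensor readings (made up)
bits = [sensor_bit(r) for r in readings]    # one bit broadcast per sensor
print(bits, "->", fusion_center(bits))      # [0, 1, 1, 0, 0] -> intruder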

Relevance:

10.00%

Publisher:

Abstract:

In this paper we consider the problems of computing a minimum co-cycle basis and a minimum weakly fundamental co-cycle basis of a directed graph G. A co-cycle in G corresponds to a vertex partition (S, V ∖ S), and a {−1, 0, 1} edge incidence vector is associated with each co-cycle. The vector space over ℚ generated by these vectors is the co-cycle space of G; equivalently, the co-cycle space is the orthogonal complement of the cycle space of G. The minimum co-cycle basis problem asks for a set of co-cycles that span the co-cycle space of G and whose sum of weights is minimum. Weakly fundamental co-cycle bases are a special class of co-cycle bases; they form a natural superclass of strictly fundamental co-cycle bases, and it is known that computing a minimum-weight strictly fundamental co-cycle basis is NP-hard. We show that the co-cycle basis corresponding to the cuts of a Gomory-Hu tree of the underlying undirected graph of G is a minimum co-cycle basis of G, and that it is also weakly fundamental.
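
The {−1, 0, 1} incidence vector of a co-cycle can be written down directly from the vertex partition; the sketch below does exactly that, with the usual sign convention (+1 for edges leaving S, −1 for edges entering S) assumed.

# Direct sketch of the co-cycle incidence vector described in the abstract.

def cocycle_vector(edges, S):
    # edges: list of directed edges (u, v); S: set of vertices on one side.
    vec = []
    for u, v in edges:
        if u in S and v not in S:
            vec.append(1)    # edge leaves S
        elif u not in S and v in S:
            vec.append(-1)   # edge enters S
        else:
            vec.append(0)    # edge does not cross the cut
    return vec

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(cocycle_vector(edges, {0, 1}))  # [0, 1, -1, 0]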

Relevance:

10.00%

Publisher:

Abstract:

A Linear Processing Complex Orthogonal Design (LPCOD) is a $p \times n$ matrix $\varepsilon$ ($p \ge n$) in k complex indeterminates $x_1, x_2, \ldots, x_k$ such that (i) the entries of $\varepsilon$ are complex linear combinations of $0, \pm x_i$, $i = 1, \ldots, k$, and their conjugates, and (ii) $\varepsilon^{H}\varepsilon = D$, where $\varepsilon^{H}$ is the Hermitian (conjugate transpose) of $\varepsilon$ and D is a diagonal matrix whose (i, i)-th diagonal element is of the form $l_1^{(i)}|x_1|^2 + l_2^{(i)}|x_2|^2 + \cdots + l_k^{(i)}|x_k|^2$, where the $l_j^{(i)}$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, k$, are strictly positive real numbers. The condition $l_1^{(i)} = l_2^{(i)} = \cdots = l_k^{(i)}$ for all values of i is called the equal-weights condition. For square designs it is known that whenever an LPCOD exists without the equal-weights condition being satisfied, there exists another LPCOD with identical parameters with $l_1^{(i)} = l_2^{(i)} = \cdots = l_k^{(i)} = 1$. This implies that the maximum possible rate for square LPCODs without the equal-weights condition is the same as that of square LPCODs with the equal-weights condition. In this paper, this result is extended to a subclass of non-square LPCODs: a set of sufficient conditions is identified such that whenever a non-square ($p > n$) LPCOD satisfies these sufficient conditions and does not satisfy the equal-weights condition, there exists another LPCOD with the same parameters n, k and p in the same complex indeterminates with $l_1^{(i)} = l_2^{(i)} = \cdots = l_k^{(i)} = 1$.
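
A familiar concrete example (standard background, not taken from the abstract) is the 2 × 2 Alamouti design, a square complex orthogonal design that satisfies the equal-weights condition with $l_1^{(i)} = l_2^{(i)} = 1$:

\[
\varepsilon =
\begin{pmatrix}
x_1 & x_2 \\
-x_2^{*} & x_1^{*}
\end{pmatrix},
\qquad
\varepsilon^{H}\varepsilon = \bigl(|x_1|^{2} + |x_2|^{2}\bigr) I_2 .
\]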

Relevance:

10.00%

Publisher:

Abstract:

The problem of constructing space-time (ST) block codes over a fixed, desired signal constellation is considered. In this situation, there is a tradeoff between the transmission rate as measured in constellation symbols per channel use and the transmit diversity gain achieved by the code. The transmit diversity is a measure of the rate of polynomial decay of the pairwise error probability of the code with increasing signal-to-noise ratio (SNR). In the setting of a quasi-static channel model, let $n_t$ denote the number of transmit antennas and T the block interval. For any $n_t \le T$, a unified construction of $(n_t \times T)$ ST codes is provided here, for a class of signal constellations that includes the familiar pulse-amplitude (PAM), quadrature-amplitude (QAM), and $2^K$-ary phase-shift-keying (PSK) modulations as special cases. The construction is optimal as measured by the rate-diversity tradeoff and can achieve any given integer point on the rate-diversity tradeoff curve. An estimate of the coding gain realized is given. Other results presented here include i) an extension of the optimal unified construction to the multiple-fading-block case, ii) a version of the optimal unified construction in which the underlying binary block codes are replaced by trellis codes, iii) a linear-dispersion form for the underlying binary block codes, iv) a Gray-mapped version of the unified construction, and v) a generalization of the construction to the S-ary case, corresponding to constellations of size $S^K$. Items ii) and iii) are aimed at simplifying the decoding of this class of ST codes.
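
The notion of transmit diversity used above can be made explicit with the standard high-SNR formalization (a conventional definition, not a statement from the paper): if $P_e(\mathrm{SNR})$ denotes the pairwise error probability, then

\[
d \;=\; -\lim_{\mathrm{SNR}\to\infty} \frac{\log P_e(\mathrm{SNR})}{\log \mathrm{SNR}},
\qquad \text{so that} \qquad
P_e(\mathrm{SNR}) \;\approx\; c \cdot \mathrm{SNR}^{-d}.
\]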

Relevance:

10.00%

Publisher:

Abstract:

Hard Custom, Hard Dance: Social Organisation, (Un)Differentiation and Notions of Power in a Tabiteuean Community, Southern Kiribati is an ethnographic study of a village community. This work analyses social organisation on the island of Tabiteuea in the Micronesian state of Kiribati, examining the intertwining of hierarchical and egalitarian traits, while bringing a new perspective to scholarly discussions of social differentiation by introducing the concept of undifferentiation to describe non-hierarchical social forms and practices. Particular attention is paid to local ideas concerning symbolic power, abstractly understood as the potency for social reproduction, but also examined in one of its forms: authority, understood as the right to speak. The workings of social differentiation and undifferentiation in the village are specifically studied in two contexts connected by local notions of power: the meetinghouse institution (te maneaba) and traditional dancing (te mwaie). This dissertation is based on 11 months of anthropological fieldwork in 1999–2000 in Kiribati and Fiji, with an emphasis on participant observation and the collection of oral tradition (narratives and songs). The questions are approached through three distinct but interrelated topics: (i) a key narrative of the community, the story of an ancestor without descendants, is presented and discussed, along with other narratives; (ii) the Kiribati meetinghouse institution, te maneaba, is considered in terms of oral tradition as well as present-day practices and customs; (iii) Kiribati dancing (te mwaie) is examined through a discussion of competing dance groups, followed by an extended case study of four dance events. In the course of this work the community of close to four hundred inhabitants is depicted as constructed primarily of clans and households, but also of churches, work co-operatives and dance groups, and as a significant and valued social unit in itself, part of the wider island district. In these partly cross-cutting and overlapping social matrices, people are alternately organised by the distinct values and logic of differentiation and undifferentiation. At different levels of social integration and in different modes of social and discursive practice, there are heightened moments of differentiation, followed by active undifferentiation. The central notions concerning power and authority to emerge are, firstly, that in order to be valued and utilised, power needs to be controlled. Secondly, power is not allowed to centralise in the hands of one person or group for any long period of time. Thirdly, out of the permanent reach of people, power and authority are always, on the one hand, left outside the factual community and, on the other, vested in the community, the social whole. Several forms of differentiation and undifferentiation emerge, but these appear to be systematically related. Social differentiation building on typically Austronesian complementary differences (such as male:female, elder:younger, autochthonous:allochthonous) is valued, even if eventually restricted, whereas differentiation based on non-complementary differences (such as monetary wealth or level of education) is generally resisted and/or subsumed by the complementary distinctions. The concomitant forms of undifferentiation are likewise hierarchically organised. On the level of the society as a whole, undifferentiation means circumscribing and ultimately withholding social hierarchy.
Potential hierarchy is based on a combination of valued complementary differences between social groups and individuals, but it is also limited by the undoing of these differences, for example in the dissolution of seniority (elder-younger) and gender (male-female) into sameness. Like the suspension of hierarchy, undifferentiation as transformation requires the recognition of pre-existing difference and does not mean devaluing the difference. This form of undifferentiation is ultimately encompassed by the first one, as the processes of differentiation, whether transformed or not, are always halted. Finally, undifferentiation can mean the prevention of non-complementary differences between social groups or individuals. This form of undifferentiation, like the differentiation it works on, takes place on a lower level of societal ideology, as both the differences and their prevention are always encompassed by the complementary differences and their undoing. It is concluded that Southern Kiribati society should be seen as a combination of a severely limited and decentralised hierarchy (differentiation) and a tightly conditional and contextual (intra-category) equality (undifferentiation), and that it is distinctly characterised by an enduring tension between these contradicting social forms and cultural notions. With reference to the local notion of hardness used to characterise custom on this particular island as well as dance in general, it is argued in this work that in this Tabiteuean community some forms of differentiation are valued though strictly delimited or even undone, whereas other forms of differentiation are perceived as a threat to community, necessitating the pre-emptive imposition of undifferentiation. Power, though sought after and displayed, particularly in dancing, must always remain controlled.

Relevance:

10.00%

Publisher:

Abstract:

This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models; it has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods to study random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, for a plausible relation between SLEs and conformal field theory. The first article studies multiple SLEs, that is, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of multiple SLE may form different topological configurations, or "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The best known of these, SLE(κ, ρ), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas. The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality; proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
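
For readers unfamiliar with SLE, the basic object behind the abstract can be recalled (a standard definition, not part of the thesis abstract): chordal SLE(κ) is obtained from the Loewner equation driven by a scaled Brownian motion,

\[
\partial_t g_t(z) = \frac{2}{g_t(z) - W_t},
\qquad g_0(z) = z,
\qquad W_t = \sqrt{\kappa}\, B_t .
\]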