167 results for HVH theorem
Abstract:
We give a simple linear algebraic proof of the following conjecture of Frankl and Füredi [7, 9, 13]. (Frankl-Füredi Conjecture) If F is a hypergraph on X = {1, 2, 3, ..., n} such that 1 ≤ |E ∩ F| ≤ k for all E, F ∈ F, E ≠ F, then |F| ≤ Σ_{i=0}^{k} C(n-1, i). We generalise a method of Palisse, and our proof technique can be viewed as a variant of the technique used by Tverberg to prove a result of Graham and Pollak [10, 11, 14]. Our proof technique is easily described. First, we derive an identity satisfied by a hypergraph F using its intersection properties. From this identity, we obtain a set of homogeneous linear equations. We then show that this set defines the zero subspace of R^|F|. Finally, the desired bound on |F| is obtained from the bound on the number of linearly independent equations. This proof technique can also be used to prove a more general theorem (Theorem 2). We conclude by indicating how this technique can be generalised to uniform hypergraphs by proving the uniform Ray-Chaudhuri-Wilson theorem. (C) 1997 Academic Press.
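The stated bound is tight: the family of all subsets of X of size at most k+1 that contain a fixed element satisfies the intersection condition and has exactly Σ_{i=0}^{k} C(n-1, i) members. A minimal Python sketch checking this for small parameters (the construction is a standard illustration, not taken from the paper):

```python
from itertools import combinations
from math import comb

def extremal_family(n, k):
    # All subsets of {1..n} that contain element 1 and have size <= k+1.
    family = []
    rest = range(2, n + 1)
    for i in range(k + 1):
        for extra in combinations(rest, i):
            family.append(frozenset((1,) + extra))
    return family

n, k = 7, 2
F = extremal_family(n, k)
# Every pair of distinct members shares element 1, and two distinct sets of
# size <= k+1 can overlap in at most k elements: 1 <= |E ∩ G| <= k.
assert all(1 <= len(E & G) <= k for E, G in combinations(F, 2))
# The family meets the bound sum_{i=0}^{k} C(n-1, i) with equality.
assert len(F) == sum(comb(n - 1, i) for i in range(k + 1))
print(len(F))  # 1 + 6 + 15 = 22
```

For n = 7, k = 2 the family has 22 sets, matching the bound term by term.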
Abstract:
There are p heterogeneous objects to be assigned to n competing agents (n > p), each with unit demand. The problem is to design a Groves mechanism for this assignment problem that satisfies weak budget balance and individual rationality while minimizing the budget imbalance. This calls for designing an appropriate rebate function. When the objects are identical, this problem has been solved by what we refer to as the WCO mechanism. We measure the performance of such mechanisms by the redistribution index. We first prove an impossibility theorem which rules out linear rebate functions with non-zero redistribution index in heterogeneous object assignment. Motivated by this theorem, we explore two approaches to get around this impossibility. In the first approach, we show that linear rebate functions with non-zero redistribution index are possible when the valuations for the objects have a certain type of relationship, and we design a mechanism with a linear rebate function that is worst-case optimal. In the second approach, we show that rebate functions with non-zero redistribution index are possible if linearity is relaxed. We extend the rebate functions of the WCO mechanism to heterogeneous object assignment and conjecture them to be worst-case optimal.
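For intuition about rebate functions in the identical-objects setting, here is a hedged Python sketch of a Bailey-Cavallo-style linear rebate on top of a single-object Vickrey auction. It illustrates weak budget balance only; it is not the WCO mechanism, and the bid values are illustrative:

```python
def vickrey_with_rebates(bids):
    # Single object, second-price auction with a linear rebate: each agent
    # receives 1/n of the second-highest bid among the OTHER agents.
    # This keeps incentives intact (the rebate is independent of own bid)
    # and never pays out more than the revenue (weak budget balance).
    n = len(bids)
    order = sorted(bids, reverse=True)
    winner = bids.index(order[0])
    payment = order[1]                     # Vickrey (second-price) payment
    rebates = []
    for i in range(n):
        others = sorted(bids[:i] + bids[i + 1:], reverse=True)
        rebates.append(others[1] / n)      # second-highest among the others
    return winner, payment, rebates

w, p, r = vickrey_with_rebates([10, 8, 5, 3])
assert sum(r) <= p                         # weak budget balance
print(w, p, sum(r))                        # winner 0 pays 8; total rebate 6.5
```

The redistribution index here is sum(r)/p; mechanisms such as WCO optimize the worst case of this ratio.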
Abstract:
We study the transient response of a colloidal bead which is released from different heights and allowed to relax in the potential well of an optical trap. Depending on the initial potential energy, the system's time evolution shows dramatically different behaviors. Starting from the transition between short-time reversibility and long-time irreversibility, a stationary reversible state with zero net dissipation can be achieved as the release-point energy is decreased. If the system starts with even lower energy, it progressively extracts useful work from thermal noise and exhibits an anomalous irreversibility. In addition, we have verified the Transient Fluctuation Theorem and the Integrated Transient Fluctuation Theorem even for the non-ergodic descriptions of our system. Copyright (C) EPLA, 2011
Abstract:
We address the optimal control problem of a very general stochastic hybrid system with both autonomous and impulsive jumps. The planning horizon is infinite and we use the discounted-cost criterion for performance evaluation. Under certain assumptions, we show the existence of an optimal control. We then derive the quasivariational inequalities satisfied by the value function and establish well-posedness. Finally, we prove the usual verification theorem of dynamic programming.
Abstract:
In this paper we propose that the compressive tidal field in the centers of flat-core early-type galaxies and ultraluminous galaxies compresses molecular clouds, producing the dense gas observed in the centers of these galaxies. The effect of galactic tidal fields is usually considered disruptive in the literature. However, for some galaxies the mass profile flattens toward the center and the resulting galactic tidal field is not disruptive but instead compressive within the flat-core region. We have used the virial theorem to determine the minimum density a molecular cloud must have to be stable and gravitationally bound within the tidally compressive region of a galaxy. We have applied the mechanism to determine the mean molecular cloud densities in the centers of a sample of flat-core early-type galaxies and ultraluminous galaxies. For early-type galaxies with a core-type luminosity profile, the tidal field of the galaxy is compressive within half the core radius. We have calculated the mean gas densities for molecular gas in a sample of early-type galaxies which have already been detected in CO emission, and we obtain mean densities of <n> ~ 10^3-10^6 cm^-3 within the central 100 pc radius. We also use our model to calculate the molecular cloud densities in the inner few hundred parsecs of a sample of ultraluminous galaxies. From the observed rotation curves of these galaxies we show that they have a compressive core within their nuclear region. Our model predicts minimum molecular gas densities in the range 10^2-10^4 cm^-3 in the nuclear gas disks; the smaller values apply typically to galaxies with larger core radii. The resulting density values agree well with the observed range. Also, for large core radii, even fairly low-density gas (~10^2 cm^-3) can remain bound and stable close to the galactic center.
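The scale of such density thresholds can be illustrated with a simple Roche-type estimate: a cloud resists tidal disruption roughly when its density exceeds the mean galactic density enclosed by its orbit. This is a hedged order-of-magnitude sketch; the criterion, enclosed mass, and radius below are illustrative assumptions, not the paper's virial calculation:

```python
import math

M_SUN = 1.989e33          # g
PC = 3.086e18             # cm
M_H = 1.673e-24           # g (hydrogen atom mass)
MU = 2.3                  # mean molecular weight of molecular gas (assumed)

def n_min(m_enclosed_msun, r_pc):
    # Roche-type threshold: cloud density must exceed the mean density of
    # the galaxy interior to radius r, converted to a number density.
    r = r_pc * PC
    rho = m_enclosed_msun * M_SUN / ((4.0 / 3.0) * math.pi * r**3)
    return rho / (MU * M_H)   # cm^-3

# e.g. 1e9 M_sun inside 100 pc (illustrative numbers, not from the paper)
print(f"{n_min(1e9, 100):.1e} cm^-3")   # about 4.2e+03 cm^-3
```

The result falls inside the 10^2-10^6 cm^-3 window quoted in the abstract, which is why even modest central mass concentrations imply dense bound gas.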
Abstract:
Consider a sequence of closed, orientable surfaces of fixed genus g in a Riemannian manifold M with uniform upper bounds on the norm of mean curvature and area. We show that on passing to a subsequence, we can choose parametrisations of the surfaces by inclusion maps from a fixed surface of the same genus so that the distance functions corresponding to the pullback metrics converge to a pseudo-metric and the inclusion maps converge to a Lipschitz map. We show further that the limiting pseudo-metric has fractal dimension two. As a corollary, we obtain a purely geometric result. Namely, we show that bounds on the mean curvature, area and genus of a surface F subset of M, together with bounds on the geometry of M, give an upper bound on the diameter of F. Our proof is modelled on Gromov's compactness theorem for J-holomorphic curves.
Abstract:
Given an unweighted undirected or directed graph with n vertices, m edges and edge connectivity c, we present a new deterministic algorithm for edge splitting. Our algorithm splits off any specified subset S of vertices satisfying standard conditions (even degree for the undirected case and in-degree ≥ out-degree for the directed case) while maintaining connectivity c for vertices outside S, in Õ(m + nc²) time for an undirected graph and Õ(mc) time for a directed graph. This improves the current best deterministic time bounds due to Gabow [8], who splits off a single vertex in Õ(nc² + m) time for an undirected graph and Õ(mc) time for a directed graph. Further, for appropriate ranges of n, c, |S| it improves the current best randomized bounds due to Benczúr and Karger [2], who split off a single vertex in an undirected graph in Õ(n²) Monte Carlo time. We give two applications of our edge splitting algorithms. Our first application is a sub-quadratic (in n) algorithm to construct Edmonds' arborescences. A classical result of Edmonds [5] shows that an unweighted directed graph with c edge-disjoint paths from a particular vertex r to every other vertex has exactly c edge-disjoint arborescences rooted at r. For a c-edge-connected unweighted undirected graph, the same theorem holds on the digraph obtained by replacing each undirected edge by two directed edges, one in each direction. The current fastest construction of these arborescences, by Gabow [7], takes Õ(n²c²) time. Our algorithm takes Õ(nc³ + m) time for the undirected case and Õ(nc⁴ + mc) time for the directed case. The second application of our splitting algorithm is a new Steiner edge connectivity algorithm for undirected graphs which matches the best known bound of Õ(nc² + m) time due to Bhalgat et al. [3]. Finally, our algorithm can also be viewed as an alternative proof of existential edge splitting theorems due to Lovász [9] and Mader [11].
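For context, the edge connectivity c appearing in all of these bounds can be computed (far more slowly than the cited algorithms) as the minimum over sinks of a unit-capacity max-flow from a fixed source, by Menger's theorem. A minimal Python sketch for undirected graphs, not the paper's splitting-off algorithm:

```python
from collections import deque

def edge_connectivity(n, edges):
    # Global edge connectivity via Menger: c = min over t != 0 of the number
    # of edge-disjoint 0-t paths, found by unit-capacity augmenting paths.
    def maxflow(s, t):
        # Each undirected edge {u,v} becomes residual arcs u->v and v->u.
        cap = {}
        for u, v in edges:
            cap[(u, v)] = cap.get((u, v), 0) + 1
            cap[(v, u)] = cap.get((v, u), 0) + 1
        adj = [[] for _ in range(n)]
        for u, v in cap:
            adj[u].append(v)
        flow = 0
        while True:
            # BFS for an augmenting path in the residual graph.
            parent = [-1] * n
            parent[s] = s
            q = deque([s])
            while q and parent[t] == -1:
                u = q.popleft()
                for v in adj[u]:
                    if parent[v] == -1 and cap[(u, v)] > 0:
                        parent[v] = u
                        q.append(v)
            if parent[t] == -1:
                return flow
            # Augment by one unit along the path found.
            v = t
            while v != s:
                u = parent[v]
                cap[(u, v)] -= 1
                cap[(v, u)] += 1
                v = u
            flow += 1

    return min(maxflow(0, t) for t in range(1, n))

# K4 is 3-edge-connected.
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(edge_connectivity(4, k4))  # 3
```

Each max-flow costs O(mc) here, so the whole routine is O(nmc); the point of the paper's algorithms is to beat exactly this kind of naive bound.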
Abstract:
In this paper, we develop and analyze C⁰ penalty methods for the fully nonlinear Monge-Ampère equation det(D²u) = f in two dimensions. The key idea in designing our methods is to build discretizations such that the resulting discrete linearizations are symmetric, stable, and consistent with the continuous linearization. We are then able to show the well-posedness of the penalty method as well as quasi-optimal error estimates using the Banach fixed-point theorem as our main tool. Numerical experiments are presented which support the theoretical results.
Abstract:
Motivated by the need to statically balance the inherent elastic forces in linkages, this paper presents three techniques to statically balance a four-bar linkage loaded by a zero-free-length spring attached between its coupler point and an anchor point on the ground. The number of auxiliary links and balancing springs required for the three techniques is less than or equal to that of the only technique currently in the literature. One of the three techniques does not require auxiliary links. In these techniques, the set of values for the spring constants and the ground-anchor point of the balancing springs can vary over a one-parameter family. Thrice as many balancing choices are available when the cognates are considered. The ensuing numerous options enable a user to choose the most practical solution. To facilitate the evaluation of the balancing choices for all the cognates, the Roberts-Chebyshev cognate theorem is extended to statically balanced four-bar linkages. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Sparking potentials have been measured in nitrogen and dry air between coaxial cylindrical electrodes for values of n = R2/R1 ≈ 1 to 30 (R1 = inner electrode radius, R2 = outer electrode radius) in the presence of crossed uniform magnetic fields. The magnetic flux density was varied from 0 to 3000 gauss. It has been shown that the minimum sparking potentials in the presence of the crossed magnetic field can be evaluated on the basis of the equivalent-pressure concept when the secondary ionization coefficient does not vary appreciably with B/p (B = magnetic flux density, p = gas pressure). The values of the secondary ionization coefficient γ_B in nitrogen in crossed fields, calculated from measured values of sparking potentials and Townsend ionization coefficients taken from the literature, have been reported. The collision frequencies in nitrogen calculated from minimum sparking potentials in crossed fields are found to increase with increasing B/p at constant E/p_e (p_e = equivalent pressure). Studies of the similarity relationship in crossed fields have shown that the similarity theorem is obeyed in dry air for both polarities of the central electrode.
Abstract:
We consider the vector and scalar form factors of the charm-changing current responsible for the semileptonic decay D → πℓν. Using as input dispersion relations and unitarity for the moments of suitable heavy-light correlators evaluated with Operator Product Expansions, including O(α_s²) terms in perturbative QCD, we constrain the shape parameters of the form factors and find exclusion regions for zeros on the real axis and in the complex plane. For the scalar form factor, a low-energy theorem and phase information on the unitarity cut are also implemented to further constrain the shape parameters. We finally propose new analytic expressions for the Dπ form factors, derive constraints on the relevant coefficients from unitarity and analyticity, and briefly discuss the usefulness of the new parametrizations for describing semileptonic data.
Abstract:
In this paper, we treat some eigenvalue problems in periodically perforated domains and study the asymptotic behaviour of the eigenvalues and the eigenvectors as the number of holes in the domain increases to infinity. Using the method of asymptotic expansion, we give explicit formulas for the homogenized coefficients and expansions for the eigenvalues and eigenvectors. If ε denotes the size of each hole in the domain, then we obtain the following asymptotic expansions for the eigenvalues: Dirichlet: λ_ε = ε⁻²λ + λ_0 + O(ε); Stekloff: λ_ε = ελ_1 + O(ε²); Neumann: λ_ε = λ_0 + ελ_1 + O(ε²). Using the method of energy, we prove a theorem of convergence in each case considered here. We briefly study correctors in the case of the Neumann eigenvalue problem.
Abstract:
The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information that the N sources need to transmit while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each of the sources to compress all the information it possesses and transmit this to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is only interested in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then uses linearity to directly recover the needed linear combination. The article is part review and presents, in part, new results. The portion of the paper that deals with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that will enable function computation forces us to consider the linear compression of a source. While in the finite field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
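The linearity the scheme exploits is easy to demonstrate: if every source applies the same matrix A over GF(2), the destination can add the compressed messages and obtain exactly the compression of the sum. A toy numpy sketch (the matrix and sizes are arbitrary assumptions; a real scheme chooses A so the decoder can invert it on the relevant set of sums):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 6              # source length and compressed length (assumed)
A = rng.integers(0, 2, size=(m, n))   # common linear map over GF(2)

x1 = rng.integers(0, 2, size=n)       # source 1's data
x2 = rng.integers(0, 2, size=n)       # source 2's data

y1 = A @ x1 % 2           # transmitted by source 1 (m bits instead of n)
y2 = A @ x2 % 2           # transmitted by source 2

# Destination combines the compressed messages without ever seeing x1 or x2:
combined = (y1 + y2) % 2
direct = A @ ((x1 + x2) % 2) % 2
assert np.array_equal(combined, direct)   # A(x1) + A(x2) = A(x1 + x2) mod 2
```

This is the finite-field half of the story; the abstract's point is that over general rings the analogous compression down to entropy can fail.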
Abstract:
Finding vertex-minimal triangulations of closed manifolds is a very difficult problem. Except for spheres and two series of manifolds, vertex-minimal triangulations are known for only a few manifolds of dimension more than 2 (see the table given at the end of Section 5). In this article, we present a brief survey of the work done in the last 30 years on the following: (i) finding the minimal number of vertices required to triangulate a given PL manifold; (ii) given positive integers n and d, constructing n-vertex triangulations of different d-dimensional PL manifolds; (iii) classifying all the triangulations of a given PL manifold with the same number of vertices. In Section 1, we give all the definitions required for the remaining part of this article. A reader can start from Section 2 and come back to Section 1 as and when required. In Section 2, we present a very brief history of triangulations of manifolds. In Section 3, we present examples of several vertex-minimal triangulations. In Section 4, we present some interesting results on triangulations of manifolds; in particular, we state the Lower Bound Theorem and the Upper Bound Theorem. In Section 5, we state several results on minimal triangulations without proofs; proofs are available in the references mentioned there. We also present some open problems/conjectures in Sections 3 and 5.
Abstract:
We provide some conditions for the graph of a Hölder-continuous function on D̄, where D̄ is a closed disk in C, to be polynomially convex. Almost all sufficient conditions known to date - provided the function (say F) is smooth - arise from versions of the Weierstrass Approximation Theorem on D̄. These conditions often fail to yield any conclusion if the real rank of DF is not maximal on a sufficiently large subset of D̄. We bypass this difficulty by introducing a technique that relies on the interplay of certain plurisubharmonic functions. This technique also allows us to make some observations on the polynomial hull of a graph in C² at an isolated complex tangency.