142 results for proof


Relevance:

10.00%

Publisher:

Abstract:

We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type, and they report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as possible to the agents as rebates. Two performance criteria are of interest within the class of linear rebate functions: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus; the goal is to minimize each. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems in which the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints; the relaxed problem is a linear program (LP). We identify the number of samples needed for "near-feasibility" of the relaxed constraint set and, under some conditions on the valuation function, show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations where the valuation functions are not known to the central planner are also discussed.

Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering of information privately held by strategic users, where the utilities are any concave function of the allocations and where the resource planner is interested not in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism and then returning as much of the collected money as possible as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling, and, keeping practitioners in mind, we identify the number of samples that assures a desired level of near-feasibility with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system; we demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
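As a rough illustration of the constraint-sampling step described above, the sketch below assembles a finite LP from sampled type profiles. The surplus, efficient-surplus and rebate-feature functions are hypothetical placeholders rather than the paper's formulation; only the overall structure matters here: one half-plane constraint per sampled type vector, then an ordinary LP solve.

```python
# Minimal sketch of constraint sampling, not the paper's exact formulation:
# each sampled type profile contributes one half-plane constraint that is linear
# in the decision variables (worst-case ratio t and linear-rebate coefficients c),
# and the sampled problem is an ordinary LP. vcg_payment, efficient_surplus and
# rebate_features are hypothetical placeholders.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_agents, n_samples = 5, 2000

def vcg_payment(theta):           # placeholder for the total payment collected at theta
    return theta.sum()

def efficient_surplus(theta):     # placeholder for the efficient surplus at theta
    return 1.0 + theta.sum()

def rebate_features(theta):       # linear rebates r_i = c . phi(theta); one shared feature map here
    return np.sort(theta)

A_ub, b_ub = [], []
for _ in range(n_samples):
    theta = rng.uniform(0.0, 1.0, size=n_agents)      # sampled type profile
    # require: vcg_payment(theta) - sum_i c.phi(theta) <= t * efficient_surplus(theta),
    # rewritten for variables x = (t, c) as  -e(theta)*t - n*phi(theta).c <= -p(theta)
    phi = rebate_features(theta)
    A_ub.append(np.concatenate(([-efficient_surplus(theta)], -n_agents * phi)))
    b_ub.append(-vcg_payment(theta))

cost = np.zeros(n_agents + 1)
cost[0] = 1.0                                         # minimize the worst-case ratio t
res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub))
print(res.status, res.x)
```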

Relevance:

10.00%

Publisher:

Abstract:

We know, from the classical work of Tarski on real closed fields, that elimination is, in principle, a fundamental engine for mechanized deduction. In practice, however, the high complexity of elimination algorithms has limited their use in mechanical theorem proving. We advocate qualitative theorem proving, where elimination is attractive because most processes of reasoning take place through the elimination of middle terms, and because the computational complexity of the proof is not an issue: what we need is the existence of the proof, not its mechanization. In this paper, we treat the linear case and illustrate the power of this paradigm by giving extremely simple proofs of two central theorems in the complexity and geometry of linear programming.
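In the linear case, the elimination the abstract appeals to is classically Fourier-Motzkin elimination of a variable from a system of linear inequalities. The sketch below is only that textbook procedure, included for orientation, and is not taken from the paper.

```python
# Fourier-Motzkin elimination of variable x_k from a system A x <= b.
# This is the classical quantifier-elimination step for the linear case; it
# illustrates "elimination of middle terms" but is not the paper's method.
import itertools
import numpy as np

def eliminate(A, b, k):
    """Return (A', b') describing {x without x_k : exists x_k with A x <= b}."""
    pos = [i for i in range(len(b)) if A[i, k] > 0]
    neg = [i for i in range(len(b)) if A[i, k] < 0]
    zer = [i for i in range(len(b)) if A[i, k] == 0]
    rows = [np.delete(A[i], k) for i in zer]
    rhs = [b[i] for i in zer]
    for i, j in itertools.product(pos, neg):
        # combine a_i.x <= b_i (coefficient > 0) with a_j.x <= b_j (coefficient < 0)
        # so that x_k cancels
        row = A[i, k] * A[j] - A[j, k] * A[i]
        rows.append(np.delete(row, k) / A[i, k])       # rescaling is optional
        rhs.append((A[i, k] * b[j] - A[j, k] * b[i]) / A[i, k])
    return np.array(rows), np.array(rhs)

# Example: x >= 0, y >= 0, x + y <= 1; eliminating x leaves 0 <= y <= 1.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
print(eliminate(A, b, 0))
```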

Relevance:

10.00%

Publisher:

Abstract:

We present two constructions in this paper: (a) a 10-vertex triangulation CP^2_10 of the complex projective plane CP^2, as a subcomplex of the join of the standard sphere S^2_4 and the standard real projective plane RP^2_6 (the decahedron); its automorphism group is A_4; (b) a 12-vertex triangulation (S^2 x S^2)_12 of S^2 x S^2 with automorphism group 2.S_5, the Schur double cover of the symmetric group S_5, obtained by generalized bistellar moves from a simplicial subdivision of the standard cell structure of S^2 x S^2. Both constructions have surprising and intimate relationships with the icosahedron. It is well known that CP^2 has S^2 x S^2 as a two-fold branched cover; we construct the triangulation CP^2_10 of CP^2 by presenting a simplicial realization of this covering map S^2 x S^2 -> CP^2. The domain of this simplicial map is a simplicial subdivision of the standard cell structure of S^2 x S^2, different from the triangulation alluded to in (b). This gives a new proof that Kühnel's CP^2_9 triangulates CP^2. It is also shown that CP^2_10 and (S^2 x S^2)_12 induce the standard piecewise linear structure on CP^2 and S^2 x S^2, respectively.

Relevance:

10.00%

Publisher:

Abstract:

Given an unweighted undirected or directed graph with n vertices, m edges and edge connectivity c, we present a new deterministic algorithm for edge splitting. Our algorithm splits off any specified subset S of vertices satisfying the standard conditions (even degree for the undirected case and in-degree ≥ out-degree for the directed case) while maintaining connectivity c for vertices outside S, in Õ(m + nc^2) time for an undirected graph and Õ(mc) time for a directed graph. This improves the current best deterministic time bounds due to Gabow [8], who splits off a single vertex in Õ(nc^2 + m) time for an undirected graph and Õ(mc) time for a directed graph. Further, for appropriate ranges of n, c, |S|, it improves the current best randomized bounds due to Benczúr and Karger [2], who split off a single vertex in an undirected graph in Õ(n^2) Monte Carlo time. We give two applications of our edge splitting algorithms. Our first application is a sub-quadratic (in n) algorithm to construct Edmonds' arborescences. A classical result of Edmonds [5] shows that an unweighted directed graph with c edge-disjoint paths from any particular vertex r to every other vertex has exactly c edge-disjoint arborescences rooted at r. For a c edge-connected unweighted undirected graph, the same theorem holds on the digraph obtained by replacing each undirected edge by two directed edges, one in each direction. The current fastest construction of these arborescences, by Gabow [7], takes Õ(n^2 c^2) time. Our algorithm takes Õ(nc^3 + m) time for the undirected case and Õ(nc^4 + mc) time for the directed case. The second application of our splitting algorithm is a new Steiner edge connectivity algorithm for undirected graphs which matches the best known bound of Õ(nc^2 + m) time due to Bhalgat et al. [3]. Finally, our algorithm can also be viewed as an alternative proof of the existential edge splitting theorems due to Lovász [9] and Mader [11].
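For intuition about the splitting-off operation itself, the brute-force sketch below performs one splitting-off step at a vertex x of a small multigraph and checks, via max-flow, that pairwise edge connectivities among the other vertices are preserved. It illustrates the operation behind the Lovász and Mader theorems and has none of the efficiency of the algorithms in the abstract.

```python
# Naive illustration of splitting off an edge pair (x,u), (x,v): remove both edges
# and add (u,v), then verify that edge connectivities outside x are unchanged.
# Connectivity is computed by a simple Edmonds-Karp max-flow where capacities
# count parallel edges. Everything here is a brute-force sketch.
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap[u][v] = number of parallel edges between u and v."""
    res = {u: dict(nbrs) for u, nbrs in cap.items()}   # residual capacities
    flow = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t                                          # augment one unit along the path
        while parent[v] is not None:
            u = parent[v]
            res[u][v] -= 1
            res[v][u] = res[v].get(u, 0) + 1
            v = u
        flow += 1

def add_edge(cap, u, v, k=1):
    cap[u][v] = cap[u].get(v, 0) + k
    cap[v][u] = cap[v].get(u, 0) + k

# Example: a 4-cycle on {1,2,3,4} plus a vertex x joined to the opposite vertices 1 and 3.
cap = defaultdict(dict)
for u, v in [(1, 2), (2, 3), (3, 4), (4, 1)]:
    add_edge(cap, u, v)
add_edge(cap, 'x', 1)
add_edge(cap, 'x', 3)                                  # x has even degree

pairs = [(u, v) for u in (1, 2, 3, 4) for v in (1, 2, 3, 4) if u < v]
before = {p: max_flow(cap, *p) for p in pairs}

# Split off the pair (x,1), (x,3): delete both edges and add the edge (1,3).
cap['x'][1] -= 1; cap[1]['x'] -= 1
cap['x'][3] -= 1; cap[3]['x'] -= 1
add_edge(cap, 1, 3)

after = {p: max_flow(cap, *p) for p in pairs}
print("connectivities preserved outside x:", before == after)
```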

Relevance:

10.00%

Publisher:

Abstract:

Novel designs for two-axis, high-resolution, monolithic inertial sensors are presented in this paper. Monolithic, i.e., joint-less single-piece, compliant designs are already common in micromachined inertial sensors such as accelerometers and gyroscopes. Here, compliant mechanisms are used not only to achieve decoupling between motions along two orthogonal axes but also to amplify the displacements of the proof mass. Sensitivity and resolution are enhanced because the amplified motion is used for sensing the measurand. A particular symmetric arrangement of displacement-amplifying compliant mechanisms (DaCMs) leads to decoupled and amplified motion. An existing DaCM and a new topology-optimized DaCM are presented as building blocks in the new arrangement. A spring-mass-lever model is presented as a lumped abstraction of the new arrangement; this model is useful for arriving at the optimal parameters of the DaCM and for performing system-level simulation. The new designs improve the performance by a factor of two or more.
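To make the lumped abstraction concrete, the toy calculation below uses a spring-mass-lever model with an ideal, lossless lever of ratio n; every numerical value is hypothetical, and the change in effective stiffness and mass caused by a real DaCM is ignored.

```python
# Toy spring-mass-lever abstraction of a DaCM-based accelerometer: the proof mass m
# on suspension stiffness k deflects under acceleration a, and an ideal lever of
# ratio n amplifies that deflection at the sensing point. All numbers are
# hypothetical and only illustrate how such a lumped model is used.
import math

m = 2e-6        # proof mass [kg]
k = 50.0        # suspension stiffness [N/m]
n = 20.0        # DaCM geometric amplification (ideal, lossless lever)
a = 9.81        # 1 g input acceleration [m/s^2]

x_mass = m * a / k                        # quasi-static proof-mass deflection
x_sense = n * x_mass                      # amplified displacement at the sensing point
f0 = math.sqrt(k / m) / (2 * math.pi)     # first-mode estimate of the bare suspension

print(f"deflection {x_mass*1e9:.1f} nm, sensed {x_sense*1e9:.1f} nm, f0 {f0:.0f} Hz")
```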

Relevance:

10.00%

Publisher:

Abstract:

In Universal Mobile Telecommunication Systems (UMTS), the Downlink Shared Channel (DSCH) can be used for providing streaming services. The traffic model for streaming services is different from the commonly used continuously-backlogged model: each connection specifies a required service rate over an interval of time k, called the "control horizon". In this paper, our objective is to determine how k DSCH frames should be shared among a set of I connections. We need a scheduler that is both efficient and fair, and we introduce the notion of discrepancy to balance the conflicting requirements of aggregate throughput and fairness. Our aim is to schedule the mobiles in such a way that the schedule minimizes the discrepancy over the k frames. We propose an optimal and computationally efficient algorithm, called STEM+. The proof of the optimality of STEM+, when applied to the UMTS rate sets, is the major contribution of this paper. We also show that STEM+ performs better in terms of both fairness and aggregate throughput than other scheduling algorithms. Thus, STEM+ achieves both fairness and efficiency and is therefore an appealing algorithm for scheduling streaming connections.
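To illustrate the kind of objective involved, the sketch below runs a simple largest-deficit-first rule over a made-up set of connections and reports how far cumulative service strays from the per-connection targets. This greedy rule is only a stand-in; it is not the STEM+ algorithm.

```python
# A discrepancy-style scheduling toy: in each frame, serve the connection whose
# cumulative service lags most behind its target. Required rates and per-frame
# rates are invented; this is not STEM+, only an illustration of the objective.
required = [3.0, 2.0, 1.0]      # required service per frame for connections 0..2
rate     = [6.0, 4.0, 3.0]      # service delivered when a connection owns a frame
k = 12                          # control horizon in frames

served = [0.0] * len(required)
for frame in range(1, k + 1):
    target = [frame * r for r in required]             # ideal cumulative service
    deficit = [t - s for t, s in zip(target, served)]  # lag behind the ideal
    i = max(range(len(required)), key=lambda j: deficit[j])
    served[i] += rate[i]
    print(f"frame {frame:2d}: schedule connection {i}, served={served}")

final_discrepancy = max(abs(k * r - s) for r, s in zip(required, served))
print("max |target - served| over connections:", final_discrepancy)
```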

Relevance:

10.00%

Publisher:

Abstract:

In this paper we analyze a novel Micro-Opto-Electro-Mechanical Systems (MOEMS) vibration sensor based on a race track resonator. In this sensor, the straight portion of the race track resonator is located at the foot of a cantilever beam carrying a proof mass. As the beam deflects due to vibration, the stress-induced refractive-index change in the waveguide located over the beam leads to a wavelength shift that provides the measure of vibration. A wavelength shift of 3.19 pm/g over a range of 280 g has been obtained for a cantilever beam of 1750 μm × 450 μm × 20 μm. The maximum (breakdown) acceleration for these dimensions is 2900 g when a safety factor of 2 is taken into account. Since the wavelength of operation is around 1.55 μm, hybrid integration of the source and detector is possible on the same substrate. The sensor is also less susceptible to noise, as the wavelength shift provides the sensor signal. With suitable design, this type of sensor can be used for aerospace applications and other harsh environments.
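Using only the sensitivity figure quoted above, a quick back-of-envelope check (the 100 g input is simply an example within the stated range):

```python
# Back-of-envelope use of the reported sensitivity: at 3.19 pm/g, a 100 g input
# shifts the resonance by about 0.32 nm around the 1.55 um operating wavelength.
# Only the sensitivity and wavelength come from the abstract; the input is illustrative.
sensitivity_pm_per_g = 3.19
accel_g = 100.0                                  # example input within the 280 g range
shift_pm = sensitivity_pm_per_g * accel_g
print(f"shift = {shift_pm:.1f} pm = {shift_pm/1000:.3f} nm at {accel_g:.0f} g")
```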

Relevance:

10.00%

Publisher:

Abstract:

We present two efficient discrete parameter simulation optimization (DPSO) algorithms for the long-run average cost objective. One of these algorithms uses the smoothed functional approximation (SFA) procedure, while the other is based on simultaneous perturbation stochastic approximation (SPSA). The use of SFA for DPSO had not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present here for the first time. We give a proof of convergence of our algorithms. Next, we present detailed numerical experiments on a problem of admission control with dependent service times. We consider two different settings involving parameter sets of moderate and large size, respectively. On the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm and with the equal allocation algorithm.

Note to Practitioners: Even though SPSA and SFA have been devised in the literature for continuous optimization problems, our results indicate that they can be powerful techniques even when they are adapted to discrete optimization settings. OCBA is widely recognized as one of the most powerful methods for discrete optimization when the parameter sets are of small or moderate size. On a setting involving a parameter set of size 100, we observe that when the computing budget is small, SPSA and OCBA show similar performance and are better than SFA; however, as the computing budget is increased, SPSA and SFA show better performance than OCBA. Both our algorithms also show good performance when the parameter set has a size of 10^8. SFA is seen to show the best overall performance. Unlike most other DPSO algorithms in the literature, an advantage of our algorithms is that they are easily implementable regardless of the size of the parameter sets and show good performance in both scenarios.
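For orientation only, the sketch below runs a generic SPSA iteration on a noisy surrogate cost and keeps the iterate on a discrete grid by plain rounding. The paper's random-projection scheme, the SFA variant and the long-run-average-cost setting are not reproduced.

```python
# Generic SPSA iteration with a crude projection onto a discrete grid, included
# only to show the flavour of simulation optimization over discrete parameters.
# The cost is a stand-in for a simulation estimate of a long-run average cost.
import numpy as np

rng = np.random.default_rng(1)
grid = np.arange(0, 101)                      # admissible discrete values per coordinate

def simulated_cost(theta):
    """Noisy surrogate for a long-run average cost estimated by simulation."""
    return np.sum((theta - 37.0) ** 2) + rng.normal(scale=5.0)

def project(theta):
    """Round each coordinate to the nearest admissible grid point."""
    return grid[np.clip(np.rint(theta), grid[0], grid[-1]).astype(int)]

theta = np.array([80.0, 10.0])
for k in range(1, 201):
    a_k, c_k = 0.5 / k, 2.0 / k ** 0.25                   # standard SPSA step sizes
    delta = rng.choice([-1.0, 1.0], size=theta.shape)     # Bernoulli perturbation
    g_hat = (simulated_cost(theta + c_k * delta)
             - simulated_cost(theta - c_k * delta)) / (2 * c_k * delta)
    theta = project(theta - a_k * g_hat)

print("final discrete parameter:", theta)
```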

Relevance:

10.00%

Publisher:

Abstract:

We study the empirical measure L_{A_n} of the eigenvalues of non-normal square matrices of the form A_n = U_n T_n V_n, with U_n, V_n independent and Haar distributed on the unitary group and T_n diagonal. We show that when the empirical measure of the eigenvalues of T_n converges, and T_n satisfies some technical conditions, L_{A_n} converges towards a rotationally invariant measure μ on the complex plane whose support is a single ring. In particular, we provide a complete proof of the Feinberg-Zee single ring theorem [6]. We also consider the case where U_n, V_n are independently Haar distributed on the orthogonal group.
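A quick numerical illustration of the single ring phenomenon, in no way a substitute for the proof: sample one matrix of the stated form and inspect the moduli of its eigenvalues. The matrix size and the law of the diagonal entries below are arbitrary choices.

```python
# Sample A = U T V with U, V Haar unitary and T diagonal with prescribed entries,
# then look at the moduli of the eigenvalues of A. Purely illustrative.
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(2)
n = 400
s = rng.uniform(0.5, 1.5, size=n)              # prescribed diagonal (singular values) of T
U = unitary_group.rvs(n, random_state=3)
V = unitary_group.rvs(n, random_state=4)
A = U @ np.diag(s) @ V

radii = np.abs(np.linalg.eigvals(A))
print(f"eigenvalue moduli lie in [{radii.min():.3f}, {radii.max():.3f}]")
print("(a histogram of `radii` shows the mass concentrating on a single annulus)")
```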

Relevance:

10.00%

Publisher:

Abstract:

We present a sound and complete decision procedure for the bounded-process cryptographic protocol insecurity problem, based on the notion of normal proofs [2] and classical unification. We also show a result about the existence of attacks with “high” normal cuts. Our proof of correctness provides an alternative proof of, and new insights into, the fundamental result of Rusinowitch and Turuani [9] for the same setting.

Relevance:

10.00%

Publisher:

Abstract:

We derive and study a C^0 interior penalty method for a sixth-order elliptic equation on polygonal domains. The method uses the cubic Lagrange finite-element space, which is simple to implement and readily available in commercial software. After introducing some notation and preliminary results, we provide a detailed derivation of the method. We then prove the well-posedness of the method and derive quasi-optimal error estimates in the energy norm. The proof is based on replacing Galerkin orthogonality with a posteriori analysis techniques; using this approach, we are able to obtain a Céa-like lemma with minimal regularity assumptions on the solution. Numerical experiments are presented that support the theoretical findings.
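Schematically, and only to fix ideas (the concrete element, consistency and penalty terms used in the paper are not reproduced here), a C^0 interior penalty discretization of a sixth-order model problem takes the following form, where sigma is a penalty parameter, h_e the length of edge e and [[.]] the jump across interior edges; for a sixth-order operator the penalized jumps generally involve higher normal derivatives as well.

```latex
% Schematic C^0 interior penalty formulation; the concrete terms, penalty
% exponents and boundary conditions are those derived in the paper, not shown here.
\[
  \text{find } u_h \in V_h \ (\text{cubic Lagrange}) \ \text{such that} \quad
  a_h(u_h, v_h) = \int_\Omega f\, v_h \, dx \qquad \text{for all } v_h \in V_h,
\]
\[
  a_h(w, v) = \sum_{T \in \mathcal{T}_h} \int_T (\text{element terms in } w, v)
            + \sum_{e \in \mathcal{E}_h} \int_e (\text{consistency terms})
            + \sum_{e \in \mathcal{E}_h} \frac{\sigma}{h_e^{\beta}}
              \int_e [\![\partial_n w]\!]\,[\![\partial_n v]\!] \, ds .
\]
```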

Relevance:

10.00%

Publisher:

Abstract:

Multi-domain proteins have many advantages with respect to stability and folding inside cells. Here we attempt to understand the intricate relationship between domain-domain interactions and the stability of domains in isolation. We provide quantitative treatment of, and proof for, prevailing intuitive ideas on the strategies employed by nature to stabilize otherwise unstable domains. We find that domains incapable of independent stability are stabilized by favourable interactions with tethered domains in the multi-domain context, and that the stability of such folds to exist independently is optimized by evolution. Specific residue mutations at the sites equivalent to the inter-domain interface enhance the overall solvation, thereby stabilizing these domain folds independently. A few naturally occurring variants at these sites alter communication between domains and affect stability, leading to disease manifestation. Our analysis provides safe guidelines for mutagenesis, which have attractive applications in obtaining stable fragments and domain constructs essential for structural studies by crystallography and NMR.

Relevance:

10.00%

Publisher:

Abstract:

In the present investigation, a strongly bonded strip of the aluminium-magnesium-based alloy AA5086 is successfully produced through accumulative roll bonding (ARB), using up to eight passes. Microstructural characterization using the electron backscatter diffraction (EBSD) technique indicates the formation of submicron-sized (~200-300 nm) subgrains inside the layered microstructure. The material is strongly textured, with individual layers possessing typical FCC rolling texture components. A more than threefold enhancement in 0.2% proof stress (PS) is obtained after eight passes due to grain refinement and strain hardening.

Relevance:

10.00%

Publisher:

Abstract:

Swarm intelligence algorithms are applied to the optimal control of flexible smart structures bonded with piezoelectric actuators and sensors. The optimal locations of the actuators/sensors and the feedback gain are obtained by maximizing the energy dissipated by the feedback control system. We provide a mathematical proof that the system is uncontrollable if the actuators and sensors are placed at the nodal points of the mode shapes. Finding the optimal locations of actuators/sensors and the feedback gain is a constrained non-linear optimization problem, which is converted to an unconstrained problem by using penalty functions. Two swarm intelligence algorithms, namely the artificial bee colony (ABC) and glowworm swarm optimization (GSO) algorithms, are used to obtain the optimal solution. In earlier published research, a cantilever beam with one or two collocated actuator(s)/sensor(s) was considered and the numerical results were obtained using genetic algorithms and gradient-based optimization methods. We consider the same problem and present the results obtained using the swarm intelligence algorithms ABC and GSO. An extension of this cantilever beam problem with five collocated actuators/sensors is also considered, and the numerical results obtained with the ABC and GSO algorithms are presented. The effect of increasing the number of design variables (locations of actuators and sensors and the gain) on the optimization process is investigated. It is shown that the ABC and GSO algorithms are robust and are good choices for the optimization of smart structures.
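The penalty-function conversion mentioned above can be sketched as follows. The objective, the constraint and the search loop are toy placeholders, with plain random search standing in for ABC/GSO; only the constrained-to-unconstrained conversion is the point.

```python
# Penalty-function conversion of a constrained placement/gain problem into an
# unconstrained one, minimized here by a bare-bones random search standing in for
# ABC/GSO. The objective and constraint are toy placeholders, not the beam model
# or the dissipated-energy objective of the paper.
import numpy as np

rng = np.random.default_rng(5)

def dissipated_energy(x):
    """Placeholder objective to MAXIMIZE (stands in for energy dissipated by control)."""
    return -np.sum((x - 0.3) ** 2)

def constraint_violation(x):
    """Placeholder constraint g(x) <= 0, e.g. a bound on the total feedback gain."""
    return max(0.0, np.sum(x) - 1.0)

def penalized_cost(x, rho=100.0):
    # maximize energy == minimize its negative, plus a quadratic penalty term
    return -dissipated_energy(x) + rho * constraint_violation(x) ** 2

best_x, best_f = None, np.inf
for _ in range(5000):                       # stand-in for the ABC/GSO population search
    x = rng.uniform(0.0, 1.0, size=3)       # e.g. (actuator location, sensor location, gain)
    f = penalized_cost(x)
    if f < best_f:
        best_x, best_f = x, f

print("best point:", np.round(best_x, 3), "penalized cost:", round(best_f, 4))
```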

Relevance:

10.00%

Publisher:

Abstract:

We show that the Wiener Tauberian property holds for the Heisenberg motion group TnB. The proofs are relatively simple.