Abstract:
We revisit a problem studied by Padakandla and Sundaresan [SIAM J. Optim., August 2009] on the minimization of a separable convex function subject to linear ascending constraints. The problem arises as the core optimization in several resource allocation problems in wireless communication settings. It is also a special case of an optimization of a separable convex function over the bases of a specially structured polymatroid. We give an alternative proof of the correctness of the algorithm of Padakandla and Sundaresan. In the process, we relax some of the restrictions they placed on the objective function.
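As a hypothetical illustration of the problem class (not the Padakandla-Sundaresan algorithm itself), the sketch below solves one tiny instance with a generic solver; the quadratic objective, the values of alpha, and the use of SciPy's SLSQP method are all illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative instance: minimize a separable convex objective sum_i x_i^2
# subject to linear ascending constraints
#   x_1 + ... + x_k >= alpha_k  (k = 1..n-1),   x_1 + ... + x_n = alpha_n,
# with x >= 0. A generic NLP solver stands in for the specialized algorithm.
alpha = np.array([1.0, 2.0, 3.0])
n = len(alpha)

objective = lambda x: np.sum(x ** 2)
constraints = (
    [{"type": "ineq", "fun": lambda x, k=k: np.sum(x[: k + 1]) - alpha[k]}
     for k in range(n - 1)]
    + [{"type": "eq", "fun": lambda x: np.sum(x) - alpha[-1]}]
)
# Start from a feasible point and solve.
result = minimize(objective, x0=np.array([3.0, 0.0, 0.0]),
                  bounds=[(0, None)] * n, constraints=constraints,
                  method="SLSQP")
# For this instance the equal split x = [1, 1, 1] is feasible and optimal,
# with objective value 3.
```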
Abstract:
We consider the equation Δ²u = g(x, u) ≥ 0 in the sense of distributions in Ω' = Ω \ {0}, where u, −Δu ≥ 0. It is then known that u solves Δ²u = g(x, u) + αδ₀ − βΔδ₀ for some nonnegative constants α and β. In this paper, we study the existence of singular solutions to Δ²u = a(x)f(u) + αδ₀ − βΔδ₀ in a domain Ω ⊂ R⁴, where a is a nonnegative measurable function in some Lebesgue space. If Δ²u = a(x)f(u) in Ω', then we find the growth of the nonlinearity f that determines α and β to be 0. In the case α = β = 0, we establish regularity results when f(t) ≤ Ce^{γt} for some C, γ > 0. This paper extends the work of Soranzo (1997), where the author finds the barrier function in higher dimensions (N ≥ 5) with the specific weight function a(x) = |x|^σ. Finally, we discuss an analogous generalization for the polyharmonic operator.
Abstract:
In this article, we survey several kinds of trace formulas that one encounters in the theory of single and multi-variable operators. We give some sketches of the proofs, often based on the principle of finite-dimensional approximations to the objects at hand in the formulas.
Abstract:
Let Γ ⊂ SL2(Z) be a principal congruence subgroup. For each σ ∈ SL2(Z), we introduce the collection A_σ(Γ) of modular Hecke operators twisted by σ. Then A_σ(Γ) is a right A(Γ)-module, where A(Γ) is the modular Hecke algebra introduced by Connes and Moscovici. Using the action of a Hopf algebra h_0 on A_σ(Γ), we define reduced Rankin-Cohen brackets on A_σ(Γ). Moreover, A_σ(Γ) carries an action of H_1, the Hopf algebra of foliations of codimension 1. Finally, we consider operators between the levels A_σ(Γ), σ ∈ SL2(Z). We show that the action of these operators can be expressed in terms of a Hopf algebra h_Z.
Abstract:
The performance of Reynolds-averaged Navier-Stokes models in the stagnation and wake regions is explored for turbulent flows with relatively large Lagrangian length scales (generally larger than the scale of geometrical features) approaching small cylinders (both square and circular). The effective cylinder (or wire) diameter based Reynolds number is ReW ≤ 2.5 × 10³. The following turbulence models are considered: a mixing-length model; standard Spalart and Allmaras (SA) and streamline curvature (and rotation) corrected SA (SARC); Secundov's νt-92; Secundov et al.'s two equation νt-L; Wolfshtein's k-l model; the Explicit Algebraic Stress Model (EASM) of Abid et al.; the cubic model of Craft et al.; various linear k-ε models including those with wall distance based damping functions; Menter SST, k-ω and Spalding's LVEL model. The use of differential equation distance functions (Poisson and Hamilton-Jacobi equation based) for palliative turbulence modeling purposes is explored. The performance of SA with these distance functions is also considered in the sharp convex geometry region of an airfoil trailing edge. For the cylinder, with ReW ≈ 2.5 × 10³, the mixing length and k-l models give strong turbulence production in the wake region. However, in agreement with eddy viscosity estimates, the LVEL and Secundov νt-92 models show relatively little cylinder influence on turbulence. On the other hand, two equation models (as does the one equation SA) suggest the cylinder gives a strong turbulence deficit in the wake region. Also, for SA, an order of magnitude cylinder diameter decrease from ReW = 2500 to 250 surprisingly strengthens the cylinder's disruptive influence. Importantly, results for ReW ≪ 250 are virtually identical to those for ReW = 250, i.e. no matter how small the cylinder/wire, its influence does not, as it should, vanish.
Similar tests for the Launder-Sharma k-ε, Menter SST and k-ω show, in accordance with physical reality, the cylinder's influence diminishing albeit slowly with size. Results suggest distance functions palliate the SA model's erroneous trait and improve its predictive performance in wire wake regions. Also, results suggest that, along the stagnation line, such functions improve the SA, mixing length, k-l and LVEL results. For the airfoil, with SA, the larger Poisson distance function increases the wake region turbulence levels by just under 5%. © 2007 Elsevier Inc. All rights reserved.
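The Poisson-equation distance function referred to above can be sketched in one dimension. Assuming the standard construction (solve ∇²φ = −1 with φ = 0 on walls, then recover d = −|∇φ| + sqrt(|∇φ|² + 2φ)), the grid size and channel width below are illustrative choices:

```python
import numpy as np

# Sketch of a Poisson-equation-based distance function for a 1D channel
# of width L: solve phi'' = -1 with phi(0) = phi(L) = 0 by central finite
# differences, then recover the wall distance from
#   d = -|grad phi| + sqrt(|grad phi|^2 + 2 phi).
L, N = 1.0, 401
x = np.linspace(0.0, L, N)
h = x[1] - x[0]

# Tridiagonal second-difference operator on the interior points.
A = (np.diag(-2.0 * np.ones(N - 2)) + np.diag(np.ones(N - 3), 1)
     + np.diag(np.ones(N - 3), -1)) / h ** 2
phi = np.zeros(N)
phi[1:-1] = np.linalg.solve(A, -np.ones(N - 2))

grad = np.gradient(phi, h)
d = -np.abs(grad) + np.sqrt(grad ** 2 + 2.0 * phi)
# In this 1D case d recovers the exact wall distance min(x, L - x).
```

In 1D the recovery is exact (φ = x(L − x)/2, so the square root collapses to L/2); in multi-dimensional flows the same formula gives an approximate, smoothly varying wall distance, which is the palliative property exploited above.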
Abstract:
In this paper the authors prove that generalized positive p-selfadjoint (GPpS) operators in Banach space satisfy the generalized Schwarz inequality, solve the maximal dissipative extension representation of p-dissipative operators in Banach space by using this inequality and introducing the generalized indefinite inner product (GIIP) space, and apply the result to a certain type of Schrödinger operator.
Abstract:
In 1972, Maschler, Peleg and Shapley proved that in the class of convex games the nucleolus and the kernel coincide. The only aim of this note is to provide a shorter, alternative proof of this result.
Abstract:
We prove that the SD-prenucleolus satisfies monotonicity in the class of convex games. The SD-prenucleolus is thus the only known continuous core concept that satisfies monotonicity for convex games. We also prove that for convex games the SD-prenucleolus and the SD-prekernel coincide.
Correction of probe pressure artifacts in freehand 3D ultrasound - further results and convex probes
Abstract:
Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood and least squares methods, which are the preferred choices in today's experiments. This high efficiency is achieved by greatly reducing the dimensionality of the problem, employing a particular representation of permutationally invariant states known from spin coupling combined with convex optimization, which has clear advantages regarding speed, control and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.
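A minimal sketch of the convex-optimization ingredient in a least-squares setting, not the paper's full scheme: projecting a noisy Hermitian estimate onto the set of density operators (the nearest density matrix in Frobenius norm, obtained by projecting the eigenvalues onto the probability simplex). The function name and the random input data are illustrative:

```python
import numpy as np

def project_to_density_matrix(H):
    """Frobenius-norm projection of a Hermitian matrix onto the set of
    density operators (positive semidefinite, unit trace): project the
    eigenvalue vector onto the probability simplex, keep the eigenvectors."""
    w, V = np.linalg.eigh(H)
    # Euclidean projection of w onto {p : p >= 0, sum(p) = 1}.
    u = np.sort(w)[::-1]                       # eigenvalues, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    rho_idx = np.nonzero(u - (css - 1.0) / ks > 0)[0][-1]
    theta = (css[rho_idx] - 1.0) / (rho_idx + 1)
    p = np.maximum(w - theta, 0.0)
    return (V * p) @ V.conj().T                # V diag(p) V^dagger

# Noisy Hermitian "linear inversion" estimate (illustrative random data).
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2
rho = project_to_density_matrix(H)
# rho is Hermitian, positive semidefinite, and has unit trace.
```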
Abstract:
Demixing is the task of identifying multiple signals given only their sum and prior information about their structures. Examples of demixing problems include (i) separating a signal that is sparse with respect to one basis from a signal that is sparse with respect to a second basis; (ii) decomposing an observed matrix into low-rank and sparse components; and (iii) identifying a binary codeword with impulsive corruptions. This thesis describes and analyzes a convex optimization framework for solving an array of demixing problems.
Our framework includes a random orientation model for the constituent signals that ensures the structures are incoherent. This work introduces a summary parameter, the statistical dimension, that reflects the intrinsic complexity of a signal. The main result indicates that the difficulty of demixing under this random model depends only on the total complexity of the constituent signals involved: demixing succeeds with high probability when the sum of the complexities is less than the ambient dimension; otherwise, it fails with high probability.
The fact that a phase transition between success and failure occurs in demixing is a consequence of a new inequality in conic integral geometry. Roughly speaking, this inequality asserts that a convex cone behaves like a subspace whose dimension is equal to the statistical dimension of the cone. When combined with a geometric optimality condition for demixing, this inequality provides precise quantitative information about the phase transition, including the location and width of the transition region.
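The subspace heuristic can be checked numerically. For a d-dimensional subspace (viewed as a convex cone) the statistical dimension, defined as the expected squared norm of the projection of a standard Gaussian vector onto the cone, equals d; for the nonnegative orthant in R^n it equals n/2. A small Monte Carlo sketch, where the dimensions and sample count are arbitrary choices:

```python
import numpy as np

# Monte Carlo estimate of the statistical dimension
#   delta(C) = E ||Pi_C(g)||^2,  g ~ N(0, I_n).
rng = np.random.default_rng(1)
n, d, trials = 10, 4, 20000

# A random d-dimensional subspace of R^n and its orthogonal projector.
Q, _ = np.linalg.qr(rng.normal(size=(n, d)))
P = Q @ Q.T

g = rng.normal(size=(trials, n))
delta_subspace = np.mean(np.sum((g @ P) ** 2, axis=1))
# Projection onto the nonnegative orthant is the componentwise positive part.
delta_orthant = np.mean(np.sum(np.maximum(g, 0.0) ** 2, axis=1))
# delta_subspace concentrates around d = 4; delta_orthant around n/2 = 5.
```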
Abstract:
The connections between convexity and submodularity are explored for the purposes of minimizing and learning submodular set functions.
First, we develop a novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm uses an accelerated first order method applied to a smoothed version of its convex extension. The smoothing algorithm is particularly novel as it allows us to treat general concave potentials without needing to construct a piecewise linear approximation as with graph-based techniques.
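The convex extension in question is the Lovász extension; a minimal sketch of its evaluation by sorting, with an illustrative concave-of-cardinality set function F(S) = sqrt(|S|) (the function names are ours):

```python
import numpy as np

def lovasz_extension(F, x):
    """Evaluate the Lovász extension of a set function F (with F of the
    empty set equal to 0) at a point x: sort coordinates in decreasing
    order and telescope F over the resulting chain of sets."""
    order = np.argsort(-x)          # coordinate indices, decreasing x
    S, prev, vals = [], 0.0, []
    for i in order:
        S.append(i)
        fS = F(S)
        vals.append(fS - prev)      # marginal gain of adding element i
        prev = fS
    return float(np.dot(x[order], vals))

# Illustrative submodular function: concave function of cardinality.
F = lambda S: np.sqrt(len(S))

x = np.array([0.0, 1.0, 0.0, 1.0])  # indicator vector of the set {1, 3}
# On indicator vectors the extension agrees with F: here it equals sqrt(2).
```

On indicator vectors the extension agrees with F exactly, which is what makes minimizing the (convex) extension equivalent to minimizing the set function.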
Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.
Lastly, we approach the problem of learning set functions from an unorthodox perspective---sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine different function classes under which uniform reconstruction is possible.
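One common choice of Fourier basis for set functions is the Walsh-Hadamard basis; assuming that choice, the transform can be sketched as a fast in-place butterfly on the 2^n-dimensional vector of function values (the example set function is an arbitrary choice):

```python
import numpy as np

def hadamard_transform(f):
    """Fast Walsh-Hadamard transform of a length-2^n vector, viewing a set
    function as a vector indexed by subsets (bitmasks). Unnormalized, so
    applying it twice multiplies by 2^n."""
    f = np.asarray(f, dtype=float).copy()
    h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            a = f[i:i + h].copy()
            b = f[i + h:i + 2 * h].copy()
            f[i:i + h] = a + b          # butterfly: sums ...
            f[i + h:i + 2 * h] = a - b  # ... and differences
        h *= 2
    return f

n = 3
f = np.zeros(2 ** n)
f[0b101] = 1.0                 # F nonzero only on the subset {0, 2}
coeffs = hadamard_transform(f) / 2 ** n
# The transform is self-inverse up to the 2^n scaling:
recovered = hadamard_transform(coeffs)
```

A set function that is a short sum of such basis elements has few nonzero coefficients, which is the sparsity that reconstruction-from-random-evaluations exploits.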