869 results for Almost Sure Convergence
Abstract:
Given $n \in \mathbb{Z}^{+}$ and $\epsilon > 0$, we prove that there exists $\delta = \delta(\epsilon, n) > 0$ such that the following holds: if $(M^{n}, g)$ is a compact Kähler $n$-manifold whose sectional curvatures $K$ satisfy $-1 - \delta \leq K \leq -1/4$, and $c_{I}(M)$, $c_{J}(M)$ are any two Chern numbers of $M$, then $|c_{I}(M)/c_{J}(M) - c_{I}^{0}/c_{J}^{0}| < \epsilon$, where $c_{I}^{0}$, $c_{J}^{0}$ are the corresponding characteristic numbers of a complex hyperbolic space form. It follows that the Mostow-Siu surfaces and the threefolds of Deraux do not admit Kähler metrics with pinching close to $1/4$.
Abstract:
We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type, and they report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as possible to the agents as rebates. Two performance criteria are of interest within the class of linear rebate functions: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus; the goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems in which the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We identify the number of samples needed for "near-feasibility" of the relaxed constraint set and, under some conditions on the valuation function, show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations where the valuation functions are not known to the central planner are also discussed.

Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is interested not in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as possible as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require subsidy from outside the system; we demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
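The constraint-sampling step described above can be illustrated with a small, hedged sketch: a semi-infinite LP whose half-plane constraints are indexed by reported-type profiles is relaxed by keeping only a finite random sample of those constraints. The map constraint_row, the type distribution, and the variable bounds below are made-up placeholders, not the paper's formulation; in the paper's setting each row would be derived from the Groves payments and the linear rebate coefficients.

```python
# A minimal sketch (not the paper's exact formulation) of constraint sampling:
# a semi-infinite LP  min_x  c^T x  s.t.  a(theta)^T x <= b(theta) for all theta
# is relaxed by sampling K type profiles theta and keeping only those rows.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def constraint_row(theta, dim):
    """Hypothetical map from a reported-type profile to one half-plane
    constraint a^T x <= b; a real implementation would derive (a, b)
    from the Groves payments and the linear rebate structure."""
    a = np.cos(np.outer(theta, np.arange(1, dim + 1))).sum(axis=0)
    b = 1.0 + theta.sum()
    return a, b

def sampled_lp(objective, dim, n_agents, n_samples):
    A, b = [], []
    for _ in range(n_samples):
        theta = rng.uniform(0.0, 1.0, size=n_agents)   # sampled type profile
        ai, bi = constraint_row(theta, dim)
        A.append(ai)
        b.append(bi)
    return linprog(objective, A_ub=np.array(A), b_ub=np.array(b),
                   bounds=[(-10, 10)] * dim, method="highs")

res = sampled_lp(objective=np.ones(3), dim=3, n_agents=5, n_samples=200)
print(res.x, res.fun)
```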
Abstract:
For more than two hundred years, the world has debated whether to continue the practice of patenting or to do away with it. Developed countries remain polarized for various reasons, but the pro-patent regime has nevertheless continued, resulting in a huge volume of patents. The present article explains the implications of an excessive volume of patents and the conditions under which prior-art search fails. It highlights the importance and necessity of standardization efforts to bring about convergence of views on patenting.
Abstract:
Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid-1990s, DE has found many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying concepts from Lyapunov stability theory, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis applies to the class of continuous, real-valued objective functions that possess a unique global optimum (but may have multiple local optima). The theoretical results are substantiated with relevant computer simulations.
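For readers unfamiliar with the algorithm under analysis, the following is a minimal sketch of the canonical DE/rand/1/bin loop (mutation, binomial crossover, greedy selection) on the sphere function; the population size, scale factor F, and crossover rate Cr are illustrative defaults, not values prescribed by the paper.

```python
# A minimal sketch of the canonical DE/rand/1/bin loop on the sphere function.
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x * x))          # unique global optimum at the origin

def de_rand_1_bin(f, dim=10, pop_size=40, F=0.5, Cr=0.9, max_gen=500):
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(max_gen):
        for i in range(pop_size):
            # DE/rand/1 mutation: three mutually distinct indices, all != i
            r1, r2, r3 = rng.choice([k for k in range(pop_size) if k != i],
                                    size=3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])
            # binomial crossover with one forced component
            j_rand = rng.integers(dim)
            mask = rng.random(dim) < Cr
            mask[j_rand] = True
            trial = np.where(mask, mutant, pop[i])
            # greedy selection
            f_trial = f(trial)
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

x_best, f_best = de_rand_1_bin(sphere)
print(f_best)   # approaches 0 as the number of generations grows
```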
Abstract:
A class of model reference adaptive control systems that make use of an augmented error signal was introduced by Monopoli. Convergence problems in this attractive class of systems are investigated in this paper using concepts from hyperstability theory. It is shown that the condition on the linear part of the system has to be stronger than the one given earlier. A boundedness condition on the input to the linear part of the system is taken into account in the analysis; this condition appears to have been missed in previous applications of hyperstability theory. Sufficient conditions for the convergence of the adaptive gain to the desired value are also given.
Abstract:
Vicsek et al. proposed a biologically inspired model of self-propelled particles, now commonly referred to as the Vicsek model. Recently, attention has been directed at modifying the Vicsek model so as to improve its convergence properties. In this paper, we propose two modifications of the Vicsek model which lead to significant improvements in convergence times. The modifications involve an additional term in the heading update rule which depends only on the current or the past states of the particle's neighbors. The variation in convergence properties as the parameters of these modified versions are changed is closely investigated. It is found that in both cases there exists an optimal value of the parameter which reduces convergence times significantly, and that the system undergoes a phase transition as the value of the parameter is increased beyond this optimal value. (C) 2012 Elsevier B.V. All rights reserved.
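As a hedged illustration of the kind of modification discussed (the abstract does not spell out the exact extra term), the sketch below implements a standard Vicsek heading update plus a hypothetical memory term, weighted by a parameter w, that pulls each particle toward its neighborhood's average heading from the previous step; the order parameter gives a simple convergence measure.

```python
# A minimal sketch of a Vicsek-type heading update with an extra memory term.
# The exact modification studied in the paper is not specified in the abstract;
# the `w`-weighted term here is purely an illustration of the idea.
import numpy as np

rng = np.random.default_rng(2)

def step(pos, theta, theta_prev_avg, L=10.0, r=1.0, v=0.03, eta=0.1, w=0.2):
    n = len(theta)
    new_theta = np.empty(n)
    for i in range(n):
        # neighbors within radius r (periodic box of side L, minimum image)
        d = pos - pos[i]
        d -= L * np.round(d / L)
        nbr = np.hypot(d[:, 0], d[:, 1]) < r
        # circular mean of current neighbor headings
        avg = np.arctan2(np.sin(theta[nbr]).mean(), np.cos(theta[nbr]).mean())
        noise = eta * (rng.random() - 0.5)
        # hypothetical extra term: pull toward last step's neighborhood average
        new_theta[i] = avg + w * np.sin(theta_prev_avg[i] - avg) + noise
        theta_prev_avg[i] = avg
    new_pos = (pos + v * np.column_stack((np.cos(new_theta),
                                          np.sin(new_theta)))) % L
    return new_pos, new_theta, theta_prev_avg

def order_parameter(theta):
    # magnitude of the mean heading vector; 1 means fully aligned
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

pos = rng.uniform(0, 10.0, size=(100, 2))
theta = rng.uniform(-np.pi, np.pi, size=100)
prev = theta.copy()
for _ in range(200):
    pos, theta, prev = step(pos, theta, prev)
print(order_parameter(theta))
```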
Abstract:
We present a heterogeneous finite element method for the solution of a high-dimensional population balance equation, which depends on both the physical and the internal property coordinates. The proposed scheme tackles the two main difficulties in the finite element solution of the population balance equation: (i) spatial discretization with standard finite elements when the dimension of the equation is more than three, and (ii) spurious oscillations in the solution induced by the standard Galerkin approximation due to pure advection in the internal property coordinates. The key idea is to split the high-dimensional population balance equation into two low-dimensional equations and to discretize the low-dimensional equations separately. In the proposed splitting scheme, the shape of the physical domain can be arbitrary, and different discretizations can be applied to the low-dimensional equations. In particular, we discretize the physical and internal spaces with standard Galerkin and Streamline Upwind Petrov-Galerkin (SUPG) finite elements, respectively. Stability and error estimates for the Galerkin/SUPG finite element discretization of the population balance equation are derived. It is shown that slightly more regularity, i.e. boundedness of the mixed partial derivatives of the solution, is necessary for the optimal order of convergence. Numerical results are presented to support the analysis.
Abstract:
Edge-preserving smoothing is widely used in image processing, and bilateral filtering is one way to achieve it. The bilateral filter is a nonlinear combination of domain and range filters. Implementing the classical bilateral filter is computationally intensive, owing to the nonlinearity of the range filter. In the standard form, the domain and range filters are Gaussian functions, and the performance depends on the choice of the filter parameters. Recently, a constant-time implementation of the bilateral filter based on a raised-cosine approximation to the Gaussian has been proposed to facilitate fast implementation. We address the problem of determining the optimal parameters for this raised-cosine-based constant-time implementation of the bilateral filter. To determine the optimal parameters, we propose the use of Stein's unbiased risk estimator (SURE). The fast bilateral filter accelerates the search for optimal parameters by speeding up the optimization of the SURE cost. Experimental results show that the SURE-optimal raised-cosine-based bilateral filter has nearly the same performance as the SURE-optimal standard Gaussian bilateral filter and the oracle mean squared error (MSE)-based optimal bilateral filter.
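The raised-cosine idea can be sketched compactly: the Gaussian range kernel is replaced by cos^N(gamma*t), which expands into a fixed number of shiftable cosines, so the bilateral filter reduces to a handful of ordinary Gaussian convolutions (hence constant time per pixel regardless of the spatial width). The parameter choices and test image below are illustrative, and the SURE-based parameter selection of the paper is not implemented.

```python
# A minimal sketch of an O(1) bilateral filter via the raised-cosine range kernel:
# cos^N(gamma*t) = 2^-N * sum_k C(N,k) cos((N-2k)*gamma*t), so numerator and
# denominator of the filter become sums of Gaussian convolutions.
import numpy as np
from math import comb
from scipy.ndimage import gaussian_filter

def fast_bilateral(img, sigma_s=3.0, sigma_r=30.0, dyn_range=255.0):
    # choose N so that |gamma * t| <= pi/2 over the full dynamic range
    N = int(np.ceil((2.0 * dyn_range / (np.pi * sigma_r)) ** 2))
    gamma = 1.0 / (sigma_r * np.sqrt(N))
    img = img.astype(np.float64)
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for k in range(N + 1):
        omega = (N - 2 * k) * gamma
        c_k = comb(N, k) / 2.0 ** N
        cos_f, sin_f = np.cos(omega * img), np.sin(omega * img)
        # each term needs only spatial Gaussian convolutions -> O(1) per pixel
        num += c_k * (cos_f * gaussian_filter(img * cos_f, sigma_s) +
                      sin_f * gaussian_filter(img * sin_f, sigma_s))
        den += c_k * (cos_f * gaussian_filter(cos_f, sigma_s) +
                      sin_f * gaussian_filter(sin_f, sigma_s))
    return num / np.maximum(den, 1e-12)

# usage: smooth a noisy step image while preserving the edge
noisy = np.clip(np.tile(np.repeat([50.0, 200.0], 32), (64, 1)) +
                10.0 * np.random.default_rng(3).standard_normal((64, 64)), 0, 255)
out = fast_bilateral(noisy)
print(out.shape, float(out.min()), float(out.max()))
```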
Abstract:
Domain swapping is an interesting feature of some oligomeric proteins in which each protomer of the oligomer provides an identical surface for exclusive interaction with a segment or domain belonging to another protomer. Here we report results of mutagenesis experiments on the structure of the C-terminal helix-swapped dimer of a stationary phase survival protein from Salmonella typhimurium (StSurE). Wild-type StSurE is a dimer in which a large helical segment at the C-terminus and a tetramerization loop comprising two beta strands are swapped between the protomers. Key residues in StSurE that might promote C-terminal helix swapping were identified by sequence and structural comparisons. Three mutants in which helix swapping was expected to be avoided were constructed and expressed in E. coli. Three-dimensional X-ray crystal structures of the mutants H234A and D230A/H234A could be determined at 2.1 Å and 2.35 Å resolution, respectively. Contrary to expectations, helix swapping was mostly retained in both mutants. The loss of the crucial D230 OD2-H234 NE2 hydrogen bond (2.89 Å in the wild-type structure) in the hinge region was compensated by new inter- and intra-chain interactions. However, the two-fold molecular symmetry was lost, and there were large conformational changes throughout the polypeptide. In spite of these changes, the dimeric structure and an approximate tetrameric organization were retained, probably due to the interactions involving the tetramerization loop. The mutants were mostly functionally inactive, highlighting the importance of precise inter-subunit interactions for the symmetry and function of StSurE.
Abstract:
Using a Girsanov change of measure, we propose novel variations within a particle-filtering algorithm, applied to the inverse problem of state and parameter estimation for nonlinear dynamical systems of engineering interest, that weakly correct for the linearization or integration errors which almost invariably occur while numerically propagating the process dynamics, typically governed by nonlinear stochastic differential equations (SDEs). Specifically, the correction for linearization, provided by the likelihood or the Radon-Nikodym derivative, is incorporated within the evolving flow in two steps. The likelihood, an exponential martingale, is split into a product of two factors; the correction owing to the first factor is implemented via rejection sampling in the first step. The second factor, which is directly computable, is accounted for via two different schemes: one employing resampling, and the other using a gain-weighted innovation term added to the drift field of the process dynamics, thereby overcoming the problem of sample dispersion posed by resampling. The proposed strategies, employed as add-ons to existing particle filters (the bootstrap and auxiliary SIR filters in this work), are found to non-trivially improve the convergence and accuracy of the estimates and to yield reduced mean square errors vis-a-vis those obtained through the parent filtering schemes.
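For context, the sketch below is a plain bootstrap (SIR) particle filter of the kind the proposed corrections are added onto, applied to a scalar SDE propagated by Euler-Maruyama; the Girsanov-based splitting, rejection sampling, and gain-weighted innovation term of the paper are not implemented, and the drift, noise levels, and step sizes are illustrative.

```python
# A minimal bootstrap (SIR) particle filter for a scalar SDE
#   dX = a(X) dt + s dW,   observed as  Y_k = X_k + measurement noise,
# with Euler-Maruyama propagation of the particles.
import numpy as np

rng = np.random.default_rng(4)

def drift(x):
    return -0.5 * x + np.sin(x)        # illustrative nonlinear drift

def bootstrap_filter(ys, n_particles=500, dt=0.1, sig_proc=0.3, sig_obs=0.2):
    particles = rng.standard_normal(n_particles)
    estimates = []
    for y in ys:
        # propagate each particle through the process SDE (Euler-Maruyama)
        particles = (particles + drift(particles) * dt +
                     sig_proc * np.sqrt(dt) * rng.standard_normal(n_particles))
        # weight by the Gaussian observation likelihood
        log_w = -0.5 * ((y - particles) / sig_obs) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # multinomial resampling (the step the gain-weighted variant avoids)
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)

# synthetic data: simulate a true path and noisy observations, then filter
x, xs, ys = 1.0, [], []
for _ in range(100):
    x += drift(x) * 0.1 + 0.3 * np.sqrt(0.1) * rng.standard_normal()
    xs.append(x)
    ys.append(x + 0.2 * rng.standard_normal())
est = bootstrap_filter(np.array(ys))
print(float(np.sqrt(np.mean((est - np.array(xs)) ** 2))))   # RMS estimation error
```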
Abstract:
We suggest a method of studying coherence in finite-level systems coupled to the environment and use it for the Hamiltonian that has been used to describe the light-harvesting pigment-protein complex. The method works with the adiabatic states and transforms the Hamiltonian to a form in which the terms responsible for decoherence and population relaxation are separated out. Decoherence is then accounted for nonperturbatively, and population relaxation using a Markovian master equation. Almost analytical results can be obtained for the seven-level system, and the calculations are very simple for systems with more levels. We apply the treatment to the seven-level system, and the results are in excellent agreement with the exact numerical results of Nalbach et al. [Nalbach, Braun, and Thorwart, Phys. Rev. E 84, 041926 (2011)]. Our approach is able to account for decoherence and population relaxation separately. It is found that decoherence causes only damping of oscillations and does not lead to transfer to the reaction center. Population relaxation is necessary for efficient transfer to the reaction center, in agreement with earlier findings. Our results show that the transformation to the adiabatic basis followed by a Redfield-type approach leads to results in good agreement with exact simulation.
Abstract:
We use the Bouguer coherence (Morlet isostatic response function) technique to compute the spatial variation of the effective elastic thickness ($T_e$) of the Andaman subduction zone. The recovered $T_e$ map resolves regional-scale features that correlate well with known surface structures of the subducting Indian plate and the overriding Burma plate. The major structure on the Indian plate, the Ninetyeast Ridge (NER), exhibits a weak mechanical strength, which is consistent with the expected signature of an oceanic ridge of hotspot origin. However, a markedly low strength ($0 < T_e < 3$ km) in the region where the NER is close to the Andaman trench (north of 10° N) receives our main attention in this study. The subduction geometry derived from Bouguer gravity forward modeling suggests that the NER has indented beneath the Andaman arc. We infer that the bending stresses of the viscous plate, which were reinforced within the subducting oceanic plate as a result of the partial subduction of the NER buoyant load, have reduced the lithospheric strength. The correlation $T_e < T_s$ (seismogenic thickness) reveals that the upper crust is actively deforming beneath the frontal-arc Andaman region. The occurrence of normal-fault earthquakes in the frontal-arc, low-$T_e$ zone is indicative of structural heterogeneities within the subducting plate. The fact that the NER, along with its buoyant root, is subducting under the Andaman region inhibits the subduction process, as suggested by changes in the trench line, interrupted back-arc volcanism, variation in seismicity mechanism, slow subduction, etc. The low $T_e$ and thinned crustal structure of the Andaman back-arc basin are attributed to a thermomechanically weakened lithosphere. The present study reveals that the ongoing back-arc spreading and strike-slip motion along the West Andaman Fault, coupled with the ridge subduction, exert an important control on the frequency and magnitude of seismicity in the Andaman region. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
This article addresses the problem of determining the shortest path that connects a given initial configuration (position, heading angle, and flight path angle) to a given rectilinear or circular path in three-dimensional space for an aerial vehicle with constant speed and a constrained turn rate. The final path is assumed to be located relatively far from the starting point. Owing to its simplicity and low computational requirements, the algorithm can be implemented in real time on a fixed-wing unmanned air vehicle in missions where the final path may change dynamically. As wind has a very significant effect on the flight of small aerial vehicles, the optimal path planning method is extended to meet the same objective in the presence of wind whose speed is comparable to that of the aerial vehicle. If, however, the path to be followed is closer to the initial point, an off-line method based on multiple shooting, in combination with a direct transcription technique, is used to obtain the optimal solution. Optimal paths are generated for a variety of cases to show the efficiency of the algorithm. Simulations are presented to demonstrate tracking results using a 6-degrees-of-freedom model of an unmanned air vehicle.