945 results for CONVEX
Abstract:
For some time there has been considerable interest in variable step-size methods for adaptive filtering. Recently, a few stochastic gradient algorithms have been proposed that are based on cost functions with an exponential dependence on the error. However, we have found that the cost function based on the exponential of the squared error does not always converge satisfactorily. In this paper we modify this cost function to improve its convergence, obtaining the novel ECVSS (exponentiated convex variable step-size) stochastic gradient algorithm. The proposed technique has attractive properties in both stationary and abrupt-change situations. (C) 2010 Elsevier B.V. All rights reserved.
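The abstract does not give the update equations; the following is a minimal sketch of the general idea only (an LMS-style filter whose effective step size is driven by an exponential of the squared error), not the authors' ECVSS algorithm. The cost shape exp(lam * e^2), the clipping constant, and all parameter names are assumptions.

    import numpy as np

    def exp_cost_lms(x, d, num_taps=8, mu=0.01, lam=0.5):
        """LMS-style filter minimising J(e) = exp(lam * e**2) by stochastic gradient.

        x, d: NumPy arrays (input and desired signals) of equal length.
        The gradient of J gives an update whose effective step size grows
        with the instantaneous error (constant factor 2 absorbed into mu).
        """
        w = np.zeros(num_taps)
        errors = []
        for n in range(num_taps, len(x)):
            u = x[n - num_taps:n][::-1]                       # most recent input vector
            e = d[n] - w @ u                                  # a-priori error
            step = mu * lam * np.exp(min(lam * e * e, 50.0))  # clip exponent to avoid overflow
            w = w + step * e * u                              # gradient descent on J
            errors.append(e)
        return w, np.array(errors)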
Abstract:
A model based on postreceptor channels followed by a Minkowski norm (the Minkowski model) is widely used to fit experimental data on colour discrimination. This model predicts that contours of equal discrimination in colour space are convex and balanced (symmetrical). We have tested these predictions in an experiment. Two new statistical tests were developed to analyse the convexity and balancedness of experimental curves. Using these tests we found that, while our experimental contours are in line with the convexity prediction, they strongly testify against balancedness. It follows that the Minkowski model is, in general, inappropriate for modelling colour discrimination data. © 2002 Elsevier Science (USA).
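The abstract does not spell the model out; in its usual form (the channel weights w_i and the exponent p below are generic placeholders, not the paper's fitted values), the Minkowski model predicts a discrimination measure

    \Delta E = \Big( \sum_{i} \big| w_i \, \Delta c_i \big|^{p} \Big)^{1/p},

where \Delta c_i is the stimulus difference in the i-th postreceptor channel. For p \ge 1 the contours \Delta E = \mathrm{const} are convex, and the absolute values make them symmetric about the reference point, which is the balancedness property the paper tests.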
Abstract:
A simple, seedless, high-yield synthesis of convex gold octahedra with a size of ca. 50 nm in aqueous solution is described. The octahedral nanoparticles were systematically prepared by reduction of HAuCl4 using ascorbic acid (AA) in the presence of cetyltrimethylammonium bromide (CTAB) as the stabilizing surfactant, while the Au3+ concentration was fixed. The synthesis differs from other wet syntheses of metallic nanoparticles in that it is mediated by H2O2; the mechanism of this H2O2-mediated process is described in detail. The gold octahedra were shown to be single crystals with all eight faces belonging to the {111} family. Moreover, the single-crystalline particles showed attractive localized surface plasmon resonance (LSPR) properties that should find use as labels for microscopic imaging, as materials for colorimetric biosensing, or in nanosensor development.
Abstract:
Electric vehicles are a key prospect for future transportation. A large penetration of electric vehicles has the potential to reduce global fossil fuel consumption and hence greenhouse gas emissions and air pollution. However, the additional stochastic loads imposed by plug-in electric vehicles will possibly introduce significant changes to existing load profiles. In this paper, electric vehicle loads are integrated into a 5-unit system using a non-convex dynamic dispatch model. Actual infrastructure characteristics, including valve-point effects, load balance constraints and transmission loss, are included in the model. Multiple load profiles are compared in terms of economic and environmental impacts in order to identify proper charging patterns. As expected, the study shows that off-peak charging is the best scenario with respect to using less fuel and producing fewer emissions.
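The valve-point effect is what makes the dispatch model non-convex. In the common formulation (the coefficients below are unit-specific constants; the paper's exact cost model is not given in the abstract), a rectified sinusoid is added to the usual quadratic fuel cost of unit i at output P_i:

    F_i(P_i) = a_i P_i^2 + b_i P_i + c_i + \big| e_i \sin\!\big( f_i (P_i^{\min} - P_i) \big) \big|.

The absolute-value sine term puts ripples into each cost curve, so the aggregate dispatch problem is neither smooth nor convex.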
Abstract:
The problem of determining a maximum matching, or whether there exists a perfect matching, is very common in a large variety of applications and has been extensively studied in graph theory. In this paper we introduce a characterisation of a family of graphs whose stability number is determined by convex quadratic programming. The main results connected with the recognition of this family of graphs are also introduced. A necessary and sufficient condition characterising a graph with a perfect matching follows, together with an algorithmic strategy, based on determining the stability number of line graphs by convex quadratic programming, for finding a perfect matching. A numerical example of the recognition of graphs with a perfect matching is described. Finally, the algorithmic strategy is extended to the determination of a maximum matching of an arbitrary graph, and some related results are presented.
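The abstract does not reproduce the quadratic programme; a classical convex-QP upper bound of this kind, due to Luz, majorises the stability number \alpha(G) of a graph G with adjacency matrix A_G and least eigenvalue \lambda_{\min}(A_G) < 0:

    \alpha(G) \le \upsilon(G) = \max_{x \ge 0} \; 2 e^{\top} x - x^{\top}\!\Big( \tfrac{1}{-\lambda_{\min}(A_G)} A_G + I \Big) x,

where e is the all-ones vector. Whether this is the exact programme used here is my assumption, but the connection to matchings is standard: the matchings of G are exactly the stable sets of its line graph L(G), so computing \alpha(L(G)) by convex quadratic programming yields a maximum matching when the bound is tight.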
Abstract:
We consider a convex Semi-Infinite Programming (SIP) problem with a multidimensional index set. In the study of this problem we apply the approach suggested in [20] for convex SIP problems with one-dimensional index sets, which is based on the notions of immobile indices and their immobility orders. For the problem under consideration we formulate optimality conditions that are explicit and have the form of a criterion. We compare this criterion with other known optimality conditions for SIP and show its efficiency in the convex case.
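For orientation (this generic statement is mine, not the paper's notation), a convex SIP problem with a multidimensional index set T \subset \mathbb{R}^m has the form

    \min_{x \in \mathbb{R}^n} \; c(x) \quad \text{s.t.} \quad f(x, t) \le 0 \;\; \forall\, t \in T,

with c(\cdot) and f(\cdot, t) convex in x for every fixed t; since t ranges over an infinite set, there are infinitely many constraints. An index t \in T is called immobile, roughly, when f(x, t) = 0 at every feasible x, and such indices are what prevent standard constraint qualifications from holding.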
Abstract:
In order to accelerate computing the convex hull of a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which has the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of the data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be pipelined directly into an O(n)-time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points with a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found in experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n in the dataset, the greater the speedup factor achieved.
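The preconditioning step itself is not described in the abstract; the sketch below is a simple reduction in the same spirit (mine, not the authors' algorithm). Assuming integer x-coordinates in [0, p), with p the smaller box dimension after a possible axis swap, it keeps only the lowest and highest point of each x-column: any other point in a column is a convex combination of these two, so discarding it preserves the hull. It runs in O(n + p) time, i.e. O(n) when p ≤ n, and needs no sorting.

    def precondition(points, p):
        """Reduce integer points (x, y), 0 <= x < p, to a chain containing the hull."""
        lo = [None] * p   # min-y point seen in each x-column
        hi = [None] * p   # max-y point seen in each x-column
        for x, y in points:
            if lo[x] is None or y < lo[x][1]:
                lo[x] = (x, y)
            if hi[x] is None or y > hi[x][1]:
                hi[x] = (x, y)
        # column minima left-to-right, then column maxima right-to-left,
        # skipping columns whose single point is already in the lower part
        lower = [pt for pt in lo if pt is not None]
        upper = [pt for pt in reversed(hi) if pt is not None and pt != lo[pt[0]]]
        return lower + upper

The returned points trace a closed polygonal chain around the hull, so they can be pipelined into a linear-time hull algorithm for polygonal chains such as Melkman's.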
Abstract:
It is generally challenging to determine the end-to-end delays of applications while maximizing the aggregate system utility subject to timing constraints. Many practical approaches suggest the use of intermediate deadlines for tasks in order to control and upper-bound their end-to-end delays. This paper proposes a unified framework for different time-sensitive, global optimization problems, and solves them in a distributed manner using Lagrangian duality. The framework uses global viewpoints to assign intermediate deadlines, taking resource contention among tasks into consideration. For soft real-time tasks, the proposed framework effectively addresses the deadline assignment problem while maximizing the aggregate quality of service. For hard real-time tasks, we show that existing heuristic solutions to the deadline assignment problem can be incorporated into the proposed framework, enriching their mathematical interpretation.
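The abstract names the machinery without formulas; the following is a generic toy illustration of dual decomposition only (the log utilities, weights, and all names are my assumptions, not the paper's framework). Each stage computes its best-response deadline for a given Lagrange-multiplier "price" in closed form, and a coordinator searches for the price at which the end-to-end budget binds:

    def assign_deadlines(weights, budget, tol=1e-9):
        """Split an end-to-end deadline `budget` among stages by Lagrangian duality.

        Toy problem: maximise sum_i w_i * log(d_i) s.t. sum_i d_i <= budget.
        For a given dual price, stage i's best response is d_i = w_i / price;
        the coordinator bisects on the price until total demand meets the budget.
        """
        lo, hi = 1e-9, sum(weights) * 1e9
        while hi - lo > tol * hi:
            price = 0.5 * (lo + hi)
            demand = sum(w / price for w in weights)
            if demand > budget:
                lo = price   # price too low: stages ask for too much time
            else:
                hi = price
        price = 0.5 * (lo + hi)
        return [w / price for w in weights]

For example, assign_deadlines([1, 2, 1], budget=30) returns approximately [7.5, 15, 7.5], matching the analytic optimum d_i = w_i * budget / sum(w).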
Abstract:
The present study concerns some infinite convex invariants. The origin of convexity can be traced back to the period of Archimedes and Euclid; at the turn of the nineteenth century, convexity became an independent branch of mathematics with its own problems, methods and theories. Convexity theory can be sorted into two kinds: the first deals with generalizations of particular problems, such as separation of convex sets [EL], extremality [FA], [DAV] or continuous selection (Michael [M1]); the second involves a multi-purpose system of axioms. The theory of convex invariants has grown out of the classical results of Helly, Radon and Caratheodory in Euclidean spaces. Levi gave the first general definition of the invariants Helly number and Radon number. The notion of a convex structure was introduced by Jamison [JA4], and that of generating degree by van de Vel [VAD8]. We prove that, for a non-coarse convex structure, the rank is less than or equal to the generating degree, and we generalize Tverberg's theorem using infinite partition numbers. We also compare the transfinite topological and transfinite convex dimensions.
Abstract:
The concept of convex extendability is introduced to answer the problem of finding the smallest distance convex simple graph containing a given tree. A problem of similar type with respect to minimal path convexity is also discussed.
Abstract:
Aitchison and Bacon-Shone (1999) considered convex linear combinations of compositions. In other words, they investigated compositions of compositions, where the mixing composition follows a logistic Normal distribution (or a perturbation process) and the compositions being mixed follow logistic Normal distributions. In this paper, I investigate the extension to situations where the mixing composition varies over a number of dimensions, for example where the mixing proportions vary with time or distance or a combination of the two. Practical situations include a river where the mixing proportions vary along the river, or across a lake, possibly with a time trend. This is illustrated with a dataset similar to that used by Aitchison and Bacon-Shone, which looked at how pollution in a loch depended on the pollution in the three rivers that feed the loch. Here, I explicitly model the variation in the linear combination across the loch, assuming that the mean of the logistic Normal distribution depends on the river flows and the relative distance from the sources.
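In symbols (my paraphrase of the setup, not the paper's notation): an observed composition x is a convex mixture of K source compositions x^{(k)},

    x = \sum_{k=1}^{K} \pi_k \, x^{(k)}, \qquad \pi = (\pi_1, \ldots, \pi_K) \in \mathcal{S}^{K-1},

where the mixing composition \pi is logistic Normal, i.e. its additive log-ratio transform \operatorname{alr}(\pi) = \big( \log \tfrac{\pi_1}{\pi_K}, \ldots, \log \tfrac{\pi_{K-1}}{\pi_K} \big) \sim N(\mu, \Sigma). The extension considered here lets the mean depend on covariates, \mu = \mu(s), with s encoding quantities such as location in the loch, river flows, or time.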
Abstract:
In several computer graphics areas, a refinement criterion is often needed to decide whether to continue or stop sampling a signal. When the sampled values are homogeneous enough, we assume that they represent the signal fairly well and need no further refinement; otherwise more samples are required, possibly with adaptive subdivision of the domain. For this purpose, a criterion that is very sensitive to variability is necessary. In this paper, we present a family of discrimination measures, the f-divergences, meeting this requirement. These convex functions have been well studied and successfully applied to image processing and several areas of engineering. Two applications to global illumination are shown: oracles for hierarchical radiosity and criteria for adaptive refinement in ray-tracing. We obtain significantly better results than with classic criteria, showing that f-divergences are worth further investigation in computer graphics. We also introduce a discrimination measure based on the entropy of the samples for refinement in ray-tracing; the recursive decomposition of entropy provides a natural method to deal with the adaptive subdivision of the sampling region.
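For reference (this is the standard definition, not a formula specific to this paper): given a convex function f with f(1) = 0, the f-divergence between discrete distributions p and q is

    D_f(p \,\|\, q) = \sum_i q_i \, f\!\Big( \frac{p_i}{q_i} \Big),

which recovers the Kullback-Leibler divergence for f(t) = t \log t, the squared Hellinger distance for f(t) = (\sqrt{t} - 1)^2, and the \chi^2 divergence for f(t) = (t - 1)^2. The more the samples in a region diverge from a homogeneous reference distribution, the larger D_f becomes, which is what makes it usable as a refinement oracle.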