929 results for Optimization problems


Relevance:

70.00%

Publisher:

Abstract:

This paper presents a hybrid heuristic, triangle evolution (TE), for global optimization. It is a real-coded evolutionary algorithm. As in differential evolution (DE), TE targets each individual in the current population and attempts to replace it with a new, better individual. However, the way new individuals are generated is different: TE generates them in a Nelder-Mead fashion, although the simplices used in TE are only one- or two-dimensional. The proposed algorithm is very easy to use and efficient for global optimization problems with continuous variables. Moreover, it requires only one (explicit) control parameter. Numerical results show that the new algorithm is comparable with DE on low-dimensional problems but outperforms DE on high-dimensional problems.
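
A minimal sketch of the DE-like replacement loop described above, assuming a hypothetical `sphere` objective and using a one-dimensional (two-point) simplex reflection as the Nelder-Mead-style move; the paper's actual TE operators and parameter settings are not reproduced here.

```python
import numpy as np

def sphere(x):
    # Hypothetical test objective (not from the paper).
    return float(np.sum(x ** 2))

def triangle_evolution_sketch(f, dim=10, pop_size=30, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Pick a random partner to form a 1-D simplex with the target.
            j = rng.integers(pop_size)
            while j == i:
                j = rng.integers(pop_size)
            worst, best = (i, j) if fit[i] > fit[j] else (j, i)
            # Nelder-Mead-style reflection of the worse vertex through the better one.
            trial = pop[best] + 1.0 * (pop[best] - pop[worst])
            f_trial = f(trial)
            # As in DE, replace the target only if the trial is better.
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()

best_x, best_f = triangle_evolution_sketch(sphere)
print(best_f)
```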

Relevance:

70.00%

Publisher:

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
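
As a toy illustration of the idea that bounding an expected value over partially known distributions can be cast as a convex program, the sketch below computes a worst-case upper bound on E[f(X)] over all distributions on a discretized support with a known mean, as a linear program. The support, quantity of interest, and moment constraint are hypothetical and much simpler than the thesis's formulations.

```python
import numpy as np
from scipy.optimize import linprog

# Worst-case (upper) bound on E[f(X)] over all distributions supported on a
# discretized set, subject only to a known mean -- an LP illustration of the
# moment-constrained idea behind OUQ (not the thesis's actual formulations).
x = np.linspace(0.0, 1.0, 201)          # discretized support
f = (x - 0.3) ** 2                       # hypothetical quantity of interest
known_mean = 0.5

# Variables: probability weights p_i >= 0.
# maximize sum_i p_i f(x_i)  <=>  minimize -f @ p
A_eq = np.vstack([np.ones_like(x), x])   # sum p_i = 1,  sum p_i x_i = known_mean
b_eq = np.array([1.0, known_mean])
res = linprog(c=-f, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print("upper bound on E[f(X)]:", -res.fun)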

Relevance:

70.00%

Publisher:

Abstract:

We propose an algorithm for solving optimization problems defined on a subset of the cone of symmetric positive semidefinite matrices. This algorithm relies on the factorization X = YY^T, where the number of columns of Y fixes an upper bound on the rank of the positive semidefinite matrix X. It is thus very effective for solving problems that have a low-rank solution. The factorization X = YY^T leads to a reformulation of the original problem as an optimization on a particular quotient manifold. The present paper discusses the geometry of that manifold and derives a second-order optimization method with guaranteed quadratic convergence. It furthermore provides some conditions on the rank of the factorization to ensure equivalence with the original problem. In contrast to existing methods, the proposed algorithm converges monotonically to the sought solution. Its numerical efficiency is evaluated on two applications: the maximal cut of a graph and the problem of sparse principal component analysis. © 2010 Society for Industrial and Applied Mathematics.
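
A first-order sketch of the low-rank factorization idea on the max-cut SDP relaxation: rows of Y are kept on the unit sphere so that diag(YY^T) = 1. This is only a projected-gradient illustration on hypothetical data; the paper's method is a second-order algorithm on a quotient manifold, which is not reproduced here.

```python
import numpy as np

def maxcut_lowrank_sketch(W, rank=3, iters=500, step=0.01, seed=0):
    """Low-rank factorization X = Y Y^T for the max-cut SDP relaxation
       max 1/4 * sum_ij W_ij (1 - X_ij)  s.t.  diag(X) = 1, X positive semidefinite."""
    n = W.shape[0]
    rng = np.random.default_rng(seed)
    Y = rng.normal(size=(n, rank))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    for _ in range(iters):
        grad = -0.5 * (W @ Y)                            # gradient of the relaxed cut value w.r.t. Y
        Y = Y + step * grad                              # gradient ascent step
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)    # project rows back onto the unit sphere
    X = Y @ Y.T
    cut_value = 0.25 * np.sum(W * (1.0 - X))
    return Y, cut_value

# Example on a small random graph (hypothetical data).
rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(8, 8))
W = np.triu(A, 1); W = W + W.T                           # symmetric 0/1 weights, zero diagonal
Y, val = maxcut_lowrank_sketch(W.astype(float))
print("relaxed cut value:", val)
```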

Relevance:

70.00%

Publisher:

Abstract:

This paper addresses the design of algorithms for the collective optimization of a cost function defined over average quantities in the presence of limited communication. We argue that several meaningful collective optimization problems can be formulated in this way. As an application of the proposed approach, we propose a novel algorithm that achieves synchronization or balancing in phase models of coupled oscillators under mild connectedness assumptions on the (possibly time-varying and unidirectional) communication graphs. © 2006 IEEE.
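
A small simulation of the kind of phase dynamics alluded to above: gradient-like coupled-oscillator dynamics on a communication graph that drive the phases toward synchronization. The ring graph, step sizes, and update law are a standard textbook model used only for illustration, not the paper's algorithm.

```python
import numpy as np

def phase_consensus_sketch(adjacency, steps=2000, dt=0.01, seed=0):
    """Phase dynamics theta_i' = sum_j a_ij sin(theta_j - theta_i), a standard
    synchronization model for coupled oscillators on a communication graph."""
    n = adjacency.shape[0]
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]          # diff[i, j] = theta_j - theta_i
        theta = theta + dt * np.sum(adjacency * np.sin(diff), axis=1)
    return theta

# Ring graph with 6 oscillators (hypothetical example).
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
theta = phase_consensus_sketch(A)
print("order parameter |r| =", abs(np.mean(np.exp(1j * theta))))
```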

Relevance:

70.00%

Publisher:

Abstract:

We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES). At each iteration, PES selects the next evaluation point that maximizes the expected information gained with respect to the global maximum. PES codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution. This reformulation allows PES to obtain approximations that are both more accurate and efficient than other alternatives such as Entropy Search (ES). Furthermore, PES can easily perform a fully Bayesian treatment of the model hyperparameters while ES cannot. We evaluate PES in both synthetic and real-world applications, including optimization problems in machine learning, finance, biotechnology, and robotics. We show that the increased accuracy of PES leads to significant gains in optimization performance.
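
For context, the skeleton below shows where an acquisition function sits in a Bayesian-optimization loop with a GP surrogate. The PES acquisition itself relies on approximations (sampling of the global maximizer, expectation propagation) that are beyond a short sketch, so a standard expected-improvement acquisition stands in here; the objective and settings are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Hypothetical 1-D target to be maximized.
    return -(x - 0.6) ** 2 + 0.05 * np.sin(20 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(4, 1))
y = objective(X).ravel()
grid = np.linspace(0, 1, 500).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    acq = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement (PES would go here)
    x_next = grid[np.argmax(acq)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best observed value:", y.max())
```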

Relevance:

70.00%

Publisher:

Abstract:

Solutions for fiber-optical parametric amplifiers (FOPAs) with dispersion fluctuations are derived using matrix operators. On the basis of the propagation matrix product and a hybrid genetic algorithm, we have optimized and compared single- and dual-pump FOPAs with zero-dispersion-wavelength variations. The simulations show that the design of FOPAs involves multimodal function optimization problems. The numerical results show that dual-pump FOPAs are highly sensitive to dispersion fluctuations, whereas dispersion variations have less impact on the gain of single-pump FOPAs. To increase signal gain and reduce ripple, dual-pump FOPAs, unlike single-pump FOPAs, have to be carefully optimized with a suitable multisegment fiber structure rather than a one-segment fiber structure. Different combinations of multisegment fibers can yield very different gain properties, and the increase in gain comes at the cost of increased ripple.
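
A structural sketch of the multisegment matrix-product idea only: the overall propagation operator is the ordered product of per-segment matrices, and the gain is read off the accumulated matrix. The per-segment 2x2 matrix below is a hypothetical placeholder (it only mimics hyperbolic parametric growth); the real FOPA matrix depends on pump power, fiber nonlinearity, and the local phase mismatch, which are not modeled here.

```python
import numpy as np

def segment_matrix(gain_coeff, length):
    # Hypothetical placeholder for one fiber segment's signal/idler matrix.
    g, L = gain_coeff, length
    return np.array([[np.cosh(g * L), np.sinh(g * L)],
                     [np.sinh(g * L), np.cosh(g * L)]], dtype=complex)

def multi_segment_gain(gain_coeffs, lengths):
    # Overall propagation matrix is the ordered product of the segment matrices.
    M = np.eye(2, dtype=complex)
    for g, L in zip(gain_coeffs, lengths):
        M = segment_matrix(g, L) @ M
    # Signal power gain ~ |M_11|^2 for an input with no idler.
    return abs(M[0, 0]) ** 2

# Three hypothetical segments with slightly different local gain coefficients,
# standing in for zero-dispersion-wavelength fluctuations along the fiber.
print(multi_segment_gain([1.0, 0.9, 1.1], [0.2, 0.3, 0.2]))
```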

Relevance:

70.00%

Publisher:

Abstract:

A new search-space-updating technique for genetic algorithms is proposed for continuous optimisation problems. Rather than gradually reducing the search space during the evolution process at a fixed reduction rate set a priori, the upper and lower boundaries for each variable in the objective function are dynamically adjusted based on its distribution statistics. To test its effectiveness, the technique is applied to a number of benchmark optimisation problems and compared with three other techniques, namely the genetic algorithm with parameter space size adjustment (GAPSSA) technique [A.B. Djurišic, Elite genetic algorithms with adaptive mutations for solving continuous optimization problems – application to modeling of the optical constants of solids, Optics Communications 151 (1998) 147–159], the successive zooming genetic algorithm (SZGA) [Y. Kwon, S. Kwon, S. Jin, J. Kim, Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems, Computers and Structures 81 (2003) 1715–1725] and a simple GA. The tests show that for well-posed problems, existing search-space-updating techniques perform well in terms of convergence speed and solution precision; however, for some ill-posed problems these techniques are statistically inferior to a simple GA. All the tests show that the proposed new search-space-updating technique is statistically superior to its counterparts.
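
A minimal sketch of a distribution-statistics boundary update of the kind described above, assuming a mean-plus-k-standard-deviations rule clipped to the original search space; the paper's exact update rule and the choice of k are not reproduced.

```python
import numpy as np

def update_search_space(population, lower0, upper0, k=3.0):
    """Adjust per-variable bounds from the population's distribution statistics
    (mean +/- k standard deviations), clipped to the original search space."""
    mean = population.mean(axis=0)
    std = population.std(axis=0)
    lower = np.clip(mean - k * std, lower0, upper0)
    upper = np.clip(mean + k * std, lower0, upper0)
    return lower, upper

# Hypothetical usage inside a GA generation loop:
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(50, 3))
lo, hi = update_search_space(pop, np.full(3, -5.0), np.full(3, 5.0))
print(lo, hi)
```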

Relevance:

70.00%

Publisher:

Abstract:

Surrogate-based optimization methods provide a means to achieve high-fidelity design optimization at reduced computational cost by using a high-fidelity model in combination with lower-fidelity models that are less expensive to evaluate. This paper presents a provably convergent trust-region model-management methodology for variable-parameterization design models: that is, models for which the design parameters are defined over different spaces. Corrected space mapping is introduced as a method to map between the variable-parameterization design spaces. It is then used with a sequential-quadratic-programming-like trust-region method for two aerospace-related design optimization problems. Results for a wing design problem and a flapping-flight problem show that the method outperforms direct optimization in the high-fidelity space. On the wing design problem, the new method achieves 76% savings in high-fidelity function calls. On a bat-flight design problem, it achieves approximately 45% time savings, although it converges to a different local minimum than did the benchmark.
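
A bare-bones trust-region model-management loop, to show the mechanics only: a low-fidelity model, additively corrected to match the high-fidelity function at the current iterate, is minimized inside the trust region, and the radius grows or shrinks with the ratio of actual to predicted reduction. The paper's corrected space mapping between different parameterizations is not reproduced; both models below share one design space and are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

f_hi = lambda x: np.sum((x - 1.2) ** 2) + 0.1 * np.sin(5 * x).sum()   # "expensive" model (toy)
f_lo = lambda x: np.sum((x - 1.0) ** 2)                               # "cheap" model (toy)

x = np.zeros(2)
radius = 1.0
for _ in range(20):
    c = f_hi(x) - f_lo(x)                       # zero-order additive correction at the current iterate
    corrected = lambda s, c=c: f_lo(s) + c
    bounds = [(xi - radius, xi + radius) for xi in x]
    cand = minimize(corrected, x, bounds=bounds).x
    predicted = corrected(x) - corrected(cand)
    actual = f_hi(x) - f_hi(cand)
    rho = actual / predicted if predicted > 1e-12 else 0.0
    if rho > 0.1:                               # accept the step
        x = cand
    radius = radius * 2.0 if rho > 0.75 else (radius * 0.5 if rho < 0.25 else radius)

print("final design:", x, "f_hi:", f_hi(x))
```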

Relevance:

70.00%

Publisher:

Abstract:

Quantum annealing is a promising tool for solving optimization problems, similar in some ways to the traditional (classical) simulated annealing of Kirkpatrick et al. Simulated annealing takes advantage of thermal fluctuations in order to explore the optimization landscape of the problem at hand, whereas quantum annealing employs quantum fluctuations. Intriguingly, quantum annealing has been proved to be more effective than its classical counterpart in many applications. We illustrate the theory and the practical implementation of both classical and quantum annealing - highlighting the crucial differences between these two methods - by means of results recently obtained in experiments, in simple toy models, and in more challenging combinatorial optimization problems (namely, the random Ising model and the travelling salesman problem). The techniques used to implement quantum and classical annealing are either deterministic evolutions, for the simplest models, or Monte Carlo approaches, for harder optimization tasks. We discuss the pros and cons of these approaches and their possible connections to the landscape of the problem addressed.
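
A compact Monte Carlo sketch of the classical side of this comparison: simulated annealing on a random Ising instance with Metropolis spin flips and geometric cooling. The instance, schedule, and step counts are hypothetical; quantum annealing would instead add a transverse-field term (e.g. via path-integral Monte Carlo) and is not sketched here.

```python
import numpy as np

def simulated_annealing_ising(J, steps=20000, T0=2.0, Tmin=0.01, seed=0):
    """Classical simulated annealing for H(s) = -sum_{i<j} J_ij s_i s_j, s_i in {-1,+1}."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    T = T0
    cooling = (Tmin / T0) ** (1.0 / steps)       # geometric cooling schedule
    for _ in range(steps):
        i = rng.integers(n)
        # Energy change of flipping spin i: dE = 2 * s_i * sum_j J_ij s_j
        dE = 2.0 * s[i] * np.dot(J[i], s)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]                          # Metropolis acceptance
        T *= cooling
    energy = -0.5 * s @ J @ s
    return s, energy

# Random symmetric couplings with zero diagonal (hypothetical instance).
rng = np.random.default_rng(1)
J = rng.normal(size=(20, 20)); J = np.triu(J, 1); J = J + J.T
print(simulated_annealing_ising(J)[1])
```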

Relevance:

70.00%

Publisher:

Abstract:

This paper is concerned with the application of an automated hybrid approach to the university timetabling problem. The approach described is based on the nature-inspired artificial bee colony (ABC) algorithm. The ABC algorithm is a biologically inspired optimization approach that has been widely applied in recent years to a range of optimization problems, such as job shop scheduling and machine timetabling problems. Although the approach has proven to be robust across a range of problems, it is acknowledged in the literature that there currently exist a number of inefficiencies regarding its exploration and exploitation abilities, which can often lead to slow convergence of the search process. Hence, this paper introduces a variant of the algorithm which utilizes a global best model inspired by particle swarm optimization to enhance the global exploration ability, while hybridizing with the great deluge (GD) algorithm to improve the local exploitation ability. Using this approach, an effective balance between exploration and exploitation is attained. In addition, a traditional local search approach is incorporated within the GD algorithm with the aim of further enhancing the performance of the overall hybrid method. To evaluate the performance of the proposed approach, two diverse university timetabling datasets are investigated, i.e., Carter's examination timetabling and Socha's course timetabling datasets. It should be noted that these problems have differing complexity and different solution landscapes. Experimental results demonstrate that the proposed method is capable of producing high-quality solutions across both benchmarks, showing a good degree of generality. Moreover, the proposed method produces the best results on some instances when compared with other approaches presented in the literature.
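
For readers unfamiliar with the great deluge component of this hybrid, the sketch below shows the standard GD acceptance rule for a minimization problem: a candidate is accepted if it improves the current solution or falls below a linearly decaying "water level". The neighbourhood and cost functions are hypothetical stand-ins; in the hybrid described above, this rule filters the solutions proposed by the bee-colony moves.

```python
import random

def great_deluge_sketch(initial, neighbour, cost, iterations=10000, decay_to=0.0):
    """Great deluge acceptance with a linearly decaying water level (minimization)."""
    current = initial
    level = cost(initial)
    decay = (level - decay_to) / iterations
    for _ in range(iterations):
        cand = neighbour(current)
        if cost(cand) <= cost(current) or cost(cand) <= level:
            current = cand
        level -= decay
    return current

# Toy usage on a 1-D quadratic (hypothetical stand-in for a timetabling move).
best = great_deluge_sketch(
    initial=10.0,
    neighbour=lambda x: x + random.uniform(-0.5, 0.5),
    cost=lambda x: x * x,
)
print(best)
```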

Relevance:

70.00%

Publisher:

Abstract:

Generating timetables for an institution is a challenging and time-consuming task due to the different demands on the overall structure of the timetable. In this paper, a new hybrid method combining the great deluge and artificial bee colony algorithms (INMGD-ABC) is proposed to address the university timetabling problem. The artificial bee colony algorithm (ABC) is a population-based method introduced in recent years that has proven successful in solving various optimization problems effectively. However, as with many search-based approaches, there exist weaknesses in its exploration and exploitation abilities which tend to induce slow convergence of the overall search process. Therefore, hybridization is proposed to compensate for the identified weaknesses of the ABC. Also, inspired by imperialist competitive algorithms, an assimilation policy is implemented in order to improve the global exploration ability of the ABC algorithm. In addition, the Nelder–Mead simplex search method is incorporated within the great deluge algorithm (NMGD) with the aim of enhancing the exploitation ability of the hybrid method in fine-tuning the problem search region. The proposed method is tested on two differing benchmark datasets, i.e., examination and course timetabling datasets. A statistical t-test shows that the performance of the proposed approach is significantly better than that of the basic ABC algorithm. Finally, the experimental results are compared against state-of-the-art methods in the literature; the results obtained are competitive and, in certain cases, achieve some of the current best results in the literature.
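
An illustration of the exploitation step only: refining a candidate with a Nelder-Mead simplex search before it is offered to the great-deluge acceptance test. Timetabling solutions are combinatorial, so the paper embeds a simplex-style move over timetables; the continuous toy penalty below is hypothetical and merely shows where the refinement sits in the hybrid loop.

```python
import numpy as np
from scipy.optimize import minimize

def penalty(x):
    # Hypothetical smooth stand-in for a timetable penalty function.
    return np.sum((x - 0.5) ** 2) + 0.1 * np.abs(np.sin(10 * x)).sum()

candidate = np.array([0.1, 0.9, 0.3])
refined = minimize(penalty, candidate, method="Nelder-Mead").x
print(penalty(candidate), "->", penalty(refined))
```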

Relevance:

70.00%

Publisher:

Abstract:

Distributed Energy Resources (DER) scheduling in smart grids presents a new challenge to system operators. The increase of new resources, such as storage systems and demand response programs, results in additional computational effort for optimization problems. On the other hand, since natural resources such as wind and sun can only be forecasted accurately over short horizons, short-term scheduling is especially relevant, requiring very good performance on large-dimension problems. Traditional techniques such as Mixed-Integer Non-Linear Programming (MINLP) do not cope well with large-scale problems. This type of problem can be appropriately addressed by metaheuristic approaches. This paper proposes a new methodology called Signaled Particle Swarm Optimization (SiPSO) to address the energy resources management problem in the scope of smart grids, with intensive use of DER. The proposed methodology's performance is illustrated by a case study with 99 distributed generators, 208 loads, and 27 storage units. The results are compared with those obtained with other methodologies, namely MINLP, a Genetic Algorithm, original Particle Swarm Optimization (PSO), Evolutionary PSO, and New PSO. SiPSO's performance is superior to that of the other tested PSO variants, demonstrating its adequacy for solving large-dimension problems which require a decision in a short period of time.
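
As background for the PSO variants compared above, the sketch below is a plain PSO loop (inertia-weight velocity update with clamping). SiPSO's signalling mechanism and the DER-specific constraint handling are not reproduced; the objective is a hypothetical stand-in for a scheduling cost.

```python
import numpy as np

def pso_sketch(f, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
               v_max=0.5, seed=0):
    """Standard PSO with personal/global bests and velocity clamping (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -v_max, v_max)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

print(pso_sketch(lambda p: float(np.sum(p ** 2)))[1])
```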

Relevance:

70.00%

Publisher:

Abstract:

It is generally challenging to determine the end-to-end delays of applications when maximizing the aggregate system utility subject to timing constraints. Many practical approaches suggest the use of intermediate deadlines of tasks in order to control and upper-bound their end-to-end delays. This paper proposes a unified framework for different time-sensitive, global optimization problems, and solves them in a distributed manner using Lagrangian duality. The framework uses global viewpoints to assign intermediate deadlines, taking resource contention among tasks into consideration. For soft real-time tasks, the proposed framework effectively addresses the deadline assignment problem while maximizing the aggregate quality of service. For hard real-time tasks, we show that existing heuristic solutions to the deadline assignment problem can be incorporated into the proposed framework, enriching their mathematical interpretation.
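
A toy dual-decomposition sketch of a deadline-assignment-style problem: maximize sum_i w_i*log(d_i) subject to a shared end-to-end budget sum_i d_i <= D. Given the multiplier lambda, each task solves its own subproblem (d_i = w_i / lambda) independently, and lambda is updated by a subgradient step on the shared constraint. The utilities, weights, and budget are hypothetical, and the resource-contention modeling of the paper's framework is omitted.

```python
import numpy as np

w = np.array([1.0, 2.0, 0.5])        # per-task utility weights (hypothetical)
D = 3.0                               # shared end-to-end delay budget
lam, step = 1.0, 0.1
for _ in range(200):
    d = w / lam                                   # each task's local (distributed) solution
    lam = max(1e-6, lam + step * (d.sum() - D))   # subgradient price update on the shared budget
print("deadlines:", d, "sum:", d.sum())
```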

Relevance:

70.00%

Publisher:

Abstract:

Constrained nonlinear optimization problems are usually solved using penalty or barrier methods combined with unconstrained optimization methods. Another alternative for solving constrained nonlinear optimization problems is the filter method. Filter methods, introduced by Fletcher and Leyffer in 2002, have been widely used in several areas of constrained nonlinear optimization. These methods treat the optimization problem as a bi-objective one, attempting to minimize both the objective function and a continuous function that aggregates the constraint violation functions. Audet and Dennis presented the first filter method for derivative-free nonlinear programming, based on pattern search methods. Motivated by this work, we have developed a new direct search method, based on simplex methods, for general constrained optimization that combines the features of the simplex method and the filter method. This work presents a new variant of these methods, which combines the filter method with other direct search methods, and proposes some alternatives for aggregating the constraint violation functions.
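
A sketch of the core filter acceptance test referred to above: a pair (f, h), objective value and aggregated constraint violation, is acceptable if it is not dominated by any point already in the filter (here with a small margin gamma). The paper's simplex-based search and its alternative aggregations of the constraint violations are not reproduced.

```python
def filter_acceptable(f_new, h_new, filter_points, gamma=1e-5):
    """Return True if (f_new, h_new) is not dominated by any filter entry."""
    return all(f_new < f - gamma * h or h_new < h - gamma * h
               for f, h in filter_points)

# Example: a filter with two entries (f, h); candidate pairs are tested.
flt = [(5.0, 0.2), (3.0, 1.0)]
print(filter_acceptable(4.0, 0.1, flt))   # True: improves h or f against every entry
print(filter_acceptable(6.0, 0.5, flt))   # False: dominated by (5.0, 0.2)
```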

Relevance:

70.00%

Publisher:

Abstract:

The elastic behavior of demand consumption, used jointly with other available resources such as distributed generation (DG), can play a crucial role in the success of smart grids. The intensive use of Distributed Energy Resources (DER), together with technical and contractual constraints, results in large-scale nonlinear optimization problems that require computational intelligence methods to be solved. This paper proposes a Particle Swarm Optimization (PSO) based methodology to support the minimization of the operation costs of a virtual power player that manages the resources in a distribution network and the network itself. Resources include the DER available in the considered time period and the energy that can be bought from external energy suppliers. Network constraints are considered. The proposed approach uses Gaussian mutation of the strategic parameters and contextual self-parameterization of the maximum and minimum particle velocities. The case study considers a real 937-bus distribution network, with 20,310 consumers and 548 distributed generators. The obtained solutions are compared with those of a deterministic approach and with PSO without mutation and Evolutionary PSO, both using self-parameterization.
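
A minimal sketch of the kind of strategic-parameter mutation mentioned above: per-particle Gaussian perturbation of the inertia and acceleration coefficients, clipped to conventional ranges. The ranges, sigma, and clipping are assumptions; the paper's exact mutation rule and the contextual self-parameterization of the velocity limits are not reproduced.

```python
import numpy as np

def mutate_strategic_parameters(w, c1, c2, sigma=0.1, rng=None):
    """Gaussian mutation of a particle's strategic PSO parameters."""
    rng = rng or np.random.default_rng()
    w = float(np.clip(w + rng.normal(0, sigma), 0.1, 1.0))     # inertia weight
    c1 = float(np.clip(c1 + rng.normal(0, sigma), 0.5, 2.5))   # cognitive coefficient
    c2 = float(np.clip(c2 + rng.normal(0, sigma), 0.5, 2.5))   # social coefficient
    return w, c1, c2

print(mutate_strategic_parameters(0.7, 1.5, 1.5, rng=np.random.default_rng(0)))
```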