868 results for Nonlinear constrained optimization problems


Relevance:

100.00%

Publisher:

Abstract:

Metaheuristic techniques are known to solve optimization problems classified as NP-complete and are successful in obtaining good-quality solutions. They use non-deterministic approaches to generate solutions close to the optimum, without any guarantee of finding the global optimum. Motivated by the difficulty of solving these problems, this work proposes parallel hybrid methods combining reinforcement learning with the metaheuristics GRASP and Genetic Algorithms, aiming to improve the efficiency with which good solutions are obtained. Instead of using the Q-learning reinforcement-learning algorithm merely to generate the initial solutions of the metaheuristics, we use it in cooperative and competitive schemes with the Genetic Algorithm and GRASP, in a parallel implementation. The implementations developed in this study showed satisfactory results under both strategies, that is, cooperation and competition between the individual methods and cooperation and competition between groups of methods. In some instances the global optimum was found; in others the implementations came close to it. A performance analysis of the proposed approach was carried out and confirmed the efficiency and speedup (gain in speed with parallel processing) of the implementations.
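As a rough, assumed illustration of the competition strategy (not the thesis implementation, and with Q-learning omitted), the sketch below runs a small GRASP and a small genetic algorithm in parallel processes on a toy 0/1 knapsack instance and keeps the best result; the instance data, operators and parameter values are placeholders chosen for illustration.

import random
from concurrent.futures import ProcessPoolExecutor

rng = random.Random(1)                      # toy 0/1 knapsack instance (assumed data)
N = 40
VALUES = [rng.randint(10, 100) for _ in range(N)]
WEIGHTS = [rng.randint(5, 30) for _ in range(N)]
CAPACITY = sum(WEIGHTS) // 3

def fitness(x):
    """Total value, linearly penalized when the capacity is exceeded."""
    value = sum(v for v, bit in zip(VALUES, x) if bit)
    weight = sum(w for w, bit in zip(WEIGHTS, x) if bit)
    return value if weight <= CAPACITY else value - 10 * (weight - CAPACITY)

def local_search(x):
    """First-improvement bit-flip local search."""
    improved = True
    while improved:
        improved = False
        for i in range(N):
            y = x[:]
            y[i] = 1 - y[i]
            if fitness(y) > fitness(x):
                x, improved = y, True
    return x

def grasp(iterations=200, alpha=0.3, seed=0):
    """Crude stand-in for GRASP: randomized construction followed by local search."""
    r = random.Random(seed)
    best = [0] * N
    for _ in range(iterations):
        x = local_search([1 if r.random() < alpha else 0 for _ in range(N)])
        if fitness(x) > fitness(best):
            best = x
    return fitness(best), best

def genetic_algorithm(generations=200, pop_size=40, seed=0):
    """Plain elitist GA with one-point crossover and bit-flip mutation."""
    r = random.Random(seed)
    pop = [[r.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = r.sample(elite, 2)
            cut = r.randrange(1, N)
            child = a[:cut] + b[cut:]
            if r.random() < 0.1:
                j = r.randrange(N)
                child[j] = 1 - child[j]
            children.append(child)
        pop = elite + children
    best = max(pop, key=fitness)
    return fitness(best), best

if __name__ == "__main__":
    # The "competition": both metaheuristics run in parallel and the best result wins.
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(grasp), pool.submit(genetic_algorithm)]
        best_fitness, best_solution = max(f.result() for f in futures)
        print("best fitness:", best_fitness)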

Relevance:

100.00%

Publisher:

Abstract:

In this work, Markov chains are the tool used for modeling and analyzing the convergence of the genetic algorithm, both in its standard version and in the other versions the algorithm admits. In addition, we compare the performance of the standard version with a fuzzy version, in the belief that the latter gives the genetic algorithm the strong ability to find a global optimum that characterizes global optimization algorithms. This algorithm was chosen because, over the past thirty years, it has become one of the most important tools for solving optimization problems. The choice is due to its effectiveness in finding good-quality solutions, which are acceptable given that there may be no other algorithm able to obtain the optimal solution for many of these problems. The algorithm can be configured in different ways, since its behavior depends not only on how the problem is represented but also on how some of its operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, to achieve good performance with this algorithm it is necessary to have an adequate criterion for choosing its parameters, especially the mutation rate, the crossover rate and the population size. It is important to remember that in implementations in which the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain, whereas when the parameters are allowed to vary during the execution, the modeling Markov chain becomes non-homogeneous. Hence, in an attempt to improve the algorithm's performance, some studies have tried to set the parameters through strategies that capture intrinsic characteristics of the problem. These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns associated with good-quality solutions while discarding low-quality ones. Strategies for feature extraction can use either precise techniques or fuzzy techniques, the latter implemented through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm in both versions. To evaluate the performance of the non-homogeneous algorithm, tests are applied that compare the standard genetic algorithm with the fuzzy genetic algorithm, whose mutation rate is adjusted by a fuzzy controller. For this purpose, we choose optimization problems whose number of solutions grows exponentially with the number of variables.
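The fragment below is a simplified stand-in, under assumed membership functions and an assumed rule base, for the kind of fuzzy controller mentioned above: population diversity is fuzzified and the mutation rate rises as diversity falls, which is precisely the parameter variation that makes the modelling Markov chain non-homogeneous.

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diversity(population):
    """Mean normalized bitwise variance of a population of bit lists, in [0, 1]."""
    n = len(population)
    columns = list(zip(*population))
    return sum(4 * sum(c) * (n - sum(c)) / n ** 2 for c in columns) / len(columns)

def fuzzy_mutation_rate(diversity_value):
    """Map diversity in [0, 1] to a mutation rate in [0.01, 0.30] (assumed range)."""
    low = triangular(diversity_value, -0.01, 0.0, 0.5)
    medium = triangular(diversity_value, 0.0, 0.5, 1.0)
    high = triangular(diversity_value, 0.5, 1.0, 1.01)
    # Rule base: low diversity -> high rate, medium -> medium, high -> low rate,
    # combined by weighted-average (Sugeno-style) defuzzification.
    rules = {0.30: low, 0.10: medium, 0.01: high}
    total = low + medium + high
    return sum(rate * weight for rate, weight in rules.items()) / total if total else 0.10

Inside the genetic algorithm loop, the rate would be recomputed every generation, e.g. rate = fuzzy_mutation_rate(diversity(population)).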

Relevance:

100.00%

Publisher:

Abstract:

This work presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA can be seen as a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed to provide methods that seek acceptable solutions to problems in which the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and a single optimization objective. Today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are implementations that allow many components of the algorithm to change dynamically (self-organizing algorithms). Combinations of GAs with machine learning techniques that improve some of their performance and usability characteristics are also common. In this work, a GA coupled with a machine learning technique was analyzed and applied to antenna design. We used a variant of bicubic interpolation, called 2D Spline, as the machine learning technique to estimate the behavior of a dynamic fitness function, based on knowledge obtained from a set of laboratory experiments. This fitness function, also called the evaluation function, is responsible for determining the fitness of a candidate solution (individual) relative to the others in the same population. The algorithm can be applied in many areas, including telecommunications, for instance in the design of antennas and frequency-selective surfaces. In this particular work, the algorithm was developed to optimize the design of a microstrip antenna, commonly used in wireless communication systems, for Ultra-Wideband (UWB) applications. The algorithm optimized two variables of the antenna geometry, the length (Ls) and width (Ws) of a slit in the ground plane, with respect to three objectives: radiated signal bandwidth, return loss and central frequency deviation. These two dimensions (Ws and Ls) are used as the variables of three different interpolation functions, one Spline for each optimization objective, which are combined into a multiobjective, aggregate fitness function. The final result proposed by the algorithm was compared with the result of a simulation program and with measurements of a physical prototype of the antenna built in the laboratory. The algorithm was analyzed with respect to its degree of success in relation to four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability and accuracy. At the end of the study, an increase in execution time compared to a common GA was observed, due to the time required for the machine learning process. On the plus side, we noticed a clear gain in flexibility and accuracy of the results, and a promising path that indicates how to extend the algorithm to optimization problems with "η" variables.
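A hedged sketch of this surrogate-fitness idea follows (assuming SciPy): one bicubic spline (2D Spline) per objective is fitted over a grid of (Ws, Ls) samples and the three interpolated values are aggregated into a single fitness. The grid values and the aggregation weights are synthetic assumptions standing in for the laboratory measurements and for the actual aggregation used in the work.

import numpy as np
from scipy.interpolate import RectBivariateSpline

ws = np.linspace(2.0, 10.0, 9)       # sampled slit widths Ws (mm), assumed
ls = np.linspace(5.0, 20.0, 16)      # sampled slit lengths Ls (mm), assumed
W, L = np.meshgrid(ws, ls, indexing="ij")

bandwidth   = 7.0 + 0.2 * W - 0.05 * (L - 12.0) ** 2           # GHz, synthetic
return_loss = -15.0 - 0.5 * W + 0.1 * np.abs(L - 12.0)         # dB, synthetic
freq_dev    = 0.1 * np.abs(W - 6.0) + 0.05 * np.abs(L - 12.0)  # GHz, synthetic

# One bicubic spline per optimization objective, fitted on the measurement grid.
splines = [RectBivariateSpline(ws, ls, z, kx=3, ky=3)
           for z in (bandwidth, return_loss, freq_dev)]

def fitness(w_slit, l_slit, weights=(1.0, -1.0, -2.0)):
    """Aggregate the three interpolated objectives into one fitness value."""
    values = [s(w_slit, l_slit)[0, 0] for s in splines]
    return sum(w * v for w, v in zip(weights, values))

print(fitness(6.3, 11.8))            # fitness of a candidate individual (Ws, Ls)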

Relevance:

100.00%

Publisher:

Abstract:

The topology optimization problem characterizes and determines the optimum distribution of material within a domain: after the boundary conditions are defined on a pre-established domain, the problem is how to distribute the material so as to solve a minimization problem. The objective of this work is to propose a competitive formulation for determining optimum structural topologies in 3D problems, able to provide high-resolution layouts. The procedure combines the Galerkin finite element method with an optimization method, looking for the best material distribution over the fixed design domain. The layout topology optimization method is based on the material approach proposed by Bendsoe & Kikuchi (1988) and considers a homogenized constitutive equation that depends only on the relative density of the material. The finite element used is a four-node tetrahedron with a selective integration scheme, which interpolates not only the components of the displacement field but also the relative density field. The proposed procedure consists in solving a sequence of layout optimization problems applied to compliance minimization and to mass minimization under local stress constraints. The microstructure used in this procedure was SIMP (Solid Isotropic Material with Penalty). The approach considerably reduces the computational cost and proved to be efficient and robust. The results provided well-defined structural layouts, with a sharp material distribution and well-defined boundaries. The layout quality was proportional to the average element size, and a considerable reduction in the number of design variables was observed thanks to the tetrahedral element.
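The fragment below illustrates only the SIMP ingredient of the formulation, not the thesis code: the element modulus is interpolated from the relative density with a penalization exponent, and a generic optimality-criteria step updates the densities under a volume target. The constants and the compliance-driven update are standard textbook assumptions; the tetrahedral element and the stress-constrained problems are not reproduced.

import numpy as np

E0, E_MIN, P = 1.0, 1e-9, 3.0        # solid modulus, void modulus, penalty exponent

def simp_modulus(rho):
    """SIMP interpolation: E(rho) = E_MIN + rho**P * (E0 - E_MIN)."""
    return E_MIN + np.asarray(rho) ** P * (E0 - E_MIN)

def oc_update(rho, sensitivity, volume_fraction, move=0.2):
    """One optimality-criteria step: scale the densities by the (negative)
    compliance sensitivity and bisect a multiplier until the volume target holds."""
    lo, hi = 1e-9, 1e9
    while (hi - lo) / (hi + lo) > 1e-4:
        lam = 0.5 * (lo + hi)
        new = np.clip(rho * np.sqrt(-sensitivity / lam),
                      np.maximum(rho - move, 1e-3),
                      np.minimum(rho + move, 1.0))
        if new.mean() > volume_fraction:
            lo = lam
        else:
            hi = lam
    return new

rho = np.full(100, 0.4)              # uniform initial relative densities
dc = -simp_modulus(rho)              # stand-in compliance sensitivity for the demo
print(oc_update(rho, dc, volume_fraction=0.4).mean())   # stays near the 0.4 target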

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a new approach and coding scheme for solving economic dispatch (ED) problems in power systems through an effortless hybrid method (EHM). The novel coding scheme effectively prevents futile searching and the generation of infeasible solutions by stochastic search methods, and consequently improves search efficiency and solution quality dramatically. The dominant constraint of an economic dispatch problem is power balance. Operational constraints such as generation limits, ramp rate limits, prohibited operating zones (POZ) and network losses are considered for practical operation. In the EHM procedure, the generator outputs are first obtained with a lambda-iteration method without considering POZ, and the POZ constraint is then satisfied by a genetic-based algorithm. To demonstrate its efficiency, feasibility and speed, the EHM algorithm was applied to constrained ED problems of power systems with 6 and 15 units. The simulation results obtained with the EHM were compared with those reported in previous literature in terms of solution quality and computational efficiency. The results reveal the superiority of this method in terms of both cost and CPU time. (C) 2011 Elsevier Ltd. All rights reserved.
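The lambda-iteration stage can be sketched as a bisection on the incremental cost, as below; the quadratic cost data are illustrative assumptions for a 3-unit system, and POZ, ramp limits and network losses are ignored, as in the first EHM stage.

def lambda_iteration(units, demand, tol=1e-6):
    """units: list of (a, b, c, pmin, pmax) for cost C(P) = a + b*P + c*P**2."""
    def outputs(lam):
        # For quadratic costs the unconstrained optimum is P = (lam - b) / (2c),
        # clamped to the generator limits.
        return [min(max((lam - b) / (2.0 * c), pmin), pmax)
                for _, b, c, pmin, pmax in units]

    lo, hi = 0.0, 1000.0             # bracket for the incremental cost (lambda)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if sum(outputs(lam)) < demand:   # too little generation -> raise lambda
            lo = lam
        else:
            hi = lam
    return outputs(0.5 * (lo + hi))

units = [(500, 5.3, 0.004, 200, 450),    # (a, b, c, Pmin, Pmax), assumed data
         (400, 5.5, 0.006, 150, 350),
         (200, 5.8, 0.009, 100, 225)]
print(lambda_iteration(units, demand=800.0))   # approximately [400, 250, 150] MW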

Relevance:

100.00%

Publisher:

Abstract:

This thesis proposes the architecture of a new multiagent-system framework for the hybridization of metaheuristics, inspired by the general Particle Swarm Optimization (PSO) framework. The main contribution is an effective approach to solve hard combinatorial optimization problems. PSO was chosen as inspiration because it is inherently multiagent, which allows exploring features of multiagent systems such as learning and cooperation techniques. In the proposed architecture, particles are autonomous agents with memory and with methods for learning and decision making, which use search strategies to move through the solution space. The concepts of position and velocity originally defined in PSO are redefined in this approach. The proposed architecture was applied to the Traveling Salesman Problem and to the Quadratic Assignment Problem, and computational experiments were performed to test its effectiveness. The experimental results were promising, with satisfactory performance, even though the potential of the proposed architecture has not been fully explored. In future research, the proposed approach will also be applied to multiobjective combinatorial optimization problems, which are closer to real-world problems. In the context of applied research, we intend to work with undergraduate and technical-level students on the implementation of the proposed architecture for real-world problems.
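One possible reading of the redefined position and velocity, given here only as an assumed illustration and not as the framework itself, is the classic discrete-PSO view of the TSP: a position is a tour and a velocity is a list of swaps that moves one tour towards another.

import random

def swaps_towards(source, target):
    """Velocity between tours: the swaps that transform `source` into `target`."""
    s, swaps = source[:], []
    for i in range(len(s)):
        if s[i] != target[i]:
            j = s.index(target[i])
            s[i], s[j] = s[j], s[i]
            swaps.append((i, j))
    return swaps

def apply_velocity(tour, swaps, probability):
    """Apply each swap with the given probability (a learning coefficient)."""
    t = tour[:]
    for i, j in swaps:
        if random.random() < probability:
            t[i], t[j] = t[j], t[i]
    return t

def move(particle, personal_best, global_best):
    """One particle move: drift towards its own best and the swarm's best tour."""
    new = apply_velocity(particle, swaps_towards(particle, personal_best), 0.5)
    return apply_velocity(new, swaps_towards(new, global_best), 0.5)

print(move([0, 3, 1, 4, 2], [0, 1, 2, 3, 4], [4, 3, 2, 1, 0]))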

Relevance:

100.00%

Publisher:

Abstract:

Because exact solution of combinatorial optimization problems is very difficult, several heuristic methods have been developed, and for many years the performance of these approaches was not analyzed in a systematic way. The proposal of this work is to carry out a statistical analysis of heuristic approaches to the Traveling Salesman Problem (TSP). The focus of the analysis is to evaluate the performance of each approach with respect to the computational time required to reach the optimal solution of a given TSP instance. Survival Analysis was used, assisted by hypothesis tests for the equality of survival functions. The evaluated approaches were divided into three classes: Lin-Kernighan algorithms, evolutionary algorithms and Particle Swarm Optimization. In addition to those approaches, the analysis included a memetic algorithm (for symmetric and asymmetric TSP instances) that uses the Lin-Kernighan heuristic as its local search procedure.
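A small sketch of this type of comparison, assuming the lifelines package, is shown below; the run times are synthetic placeholders, each being the time (in seconds) an approach needed to reach the optimum of one instance, with observed = 0 marking runs censored at the time limit.

from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

lk_times, lk_observed = [12, 15, 9, 40, 33, 60], [1, 1, 1, 1, 1, 0]    # Lin-Kernighan
ea_times, ea_observed = [25, 48, 60, 31, 60, 44], [1, 1, 0, 1, 0, 1]   # evolutionary

km = KaplanMeierFitter().fit(lk_times, event_observed=lk_observed)
print(km.median_survival_time_)      # median time-to-optimum estimate

result = logrank_test(lk_times, ea_times,
                      event_observed_A=lk_observed, event_observed_B=ea_observed)
print(result.p_value)                # test of equality between survival functions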

Relevance:

100.00%

Publisher:

Abstract:

Multi-objective combinatorial optimization problems have peculiar characteristics that require optimization methods to be adapted to this context. Since many of these problems are NP-hard, the use of metaheuristics has grown over the last years; in particular, many approaches based on Ant Colony Optimization (ACO) have been proposed. In this work, an ACO algorithm is proposed for the Multi-objective Shortest Path Problem and compared with two other optimizers found in the literature. A set of 18 instances from two distinct types of graphs is used, together with a specific multiobjective performance assessment methodology. Initial experiments showed that the proposed algorithm is able to generate better approximation sets than the other optimizers for all instances. In the second part of this work, an experimental analysis is conducted using several multiobjective ACO proposals recently published and the same instances used in the first part. The results show that each type of instance benefits from a particular algorithmic approach. A new metaphor for the development of multiobjective ACOs is then proposed. Usually, all ants share the same characteristics, and only a few works address multi-species approaches. This work proposes an approach in which ants of multiple species compete for food resources. Each species has its own search strategy, and different species do not access each other's pheromone information. As in nature, successful ant populations are allowed to grow, whereas unsuccessful ones shrink. The approach introduced here is able to inherit the behavior of strategies that are successful for different types of problems. Results of computational experiments are reported and show that the proposed approach produces significantly better approximation sets than the other methods.
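A minimal helper of the kind every multiobjective ACO above relies on is the maintenance of the approximation set of mutually non-dominated solutions; the sketch below (objectives minimized, data arbitrary) is purely illustrative and not tied to any specific optimizer compared in this work.

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep `candidate` if nothing dominates it; drop archived points it dominates."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive
    return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]

archive = []
for point in [(4, 9), (6, 5), (5, 8), (4, 10), (3, 12)]:
    archive = update_archive(archive, point)
print(archive)                       # [(4, 9), (6, 5), (5, 8), (3, 12)]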

Relevance:

100.00%

Publisher:

Abstract:

This article presents a new approach to minimizing the losses in electrical power systems. The approach applies the primal-dual logarithmic barrier method to the voltage magnitude and tap-changing transformer variables, while the other inequality constraints are treated by the augmented Lagrangian method. The Lagrangian function aggregates all the constraints. The first-order necessary conditions are solved by Newton's method, together with updates of the dual variables and penalty factors. Test results are presented to show the good performance of this approach.

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

This paper addresses the problem of model reduction for uncertain discrete-time systems with convex bounded (polytope-type) uncertainty. A precisely known reduced-order model is obtained in such a way that the H2 and/or H∞ guaranteed norm of the error between the original (uncertain) system and the reduced one is minimized. The optimization problems are formulated in terms of coupled (non-convex) LMIs (Linear Matrix Inequalities) and solved through iterative algorithms. Examples illustrate the results.

Relevance:

100.00%

Publisher:

Abstract:

Systems based on artificial neural networks have high computational rates due to the use of a massive number of simple processing elements and the high degree of connectivity between these elements. Neural networks with feedback connections provide a computing model capable of solving a large class of optimization problems. This paper presents a novel approach for solving dynamic programming problems using artificial neural networks. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points, which represent solutions (not necessarily optimal) of the dynamic programming problem. Simulated examples are presented and compared with other neural networks. The results demonstrate that the proposed method yields a significant improvement.
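As an assumed, generic illustration of this feedback computing model (not the modified network with the valid-subspace technique), the sketch below performs asynchronous Hopfield updates that monotonically decrease the energy E(x) = -1/2 x^T W x - b^T x until an equilibrium point is reached; the weights and biases are arbitrary.

import numpy as np

def hopfield_minimize(W, b, x, sweeps=50):
    """Asynchronous binary updates; W must be symmetric with zero diagonal."""
    for _ in range(sweeps):
        changed = False
        for i in range(len(x)):
            activation = W[i] @ x + b[i]
            new_state = 1 if activation > 0 else 0
            if new_state != x[i]:
                x[i], changed = new_state, True
        if not changed:              # a fixed point (equilibrium) was reached
            break
    return x

W = np.array([[0.0, -2.0, 1.0], [-2.0, 0.0, 1.0], [1.0, 1.0, 0.0]])   # assumed weights
b = np.array([1.0, 1.0, -1.5])
print(hopfield_minimize(W, b, x=np.array([1, 1, 1])))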

Relevance:

100.00%

Publisher:

Abstract:

In this paper a method for solving the Short Term Transmission Network Expansion Planning (STTNEP) problem is presented. The STTNEP is a very complex mixed-integer nonlinear programming problem that presents a combinatorial explosion of the search space. We present a constructive heuristic algorithm that finds solutions of excellent quality for the STTNEP. In each step of the algorithm a sensitivity index is used to add a circuit (transmission line or transformer) to the system. This sensitivity index is obtained by solving the STTNEP problem with the number of circuits to be added treated as a continuous variable (the relaxed problem). The relaxed problem is a large and complex nonlinear program and was solved through an interior point method that combines the multiple predictor-corrector and multiple centrality corrections methods, both belonging to the family of higher order interior point methods (HOIPM). Tests were carried out using a modified Carver system, and the results show the good performance of both the constructive heuristic algorithm for the STTNEP problem and the HOIPM used in each step.

Relevance:

100.00%

Publisher:

Abstract:

The reliability of power supply is related, among other factors, to the allocation of control and protection devices along the feeders of distribution systems. Optimized allocation of sectionalizing switches and protection devices at strategic points of distribution circuits therefore improves the quality of the power supply and the system reliability indices. This work presents a mixed-integer nonlinear programming (MINLP) model, with real and binary variables, for the problem of allocating sectionalizing switches and protection devices in strategic sectors, aimed at improving reliability indices, increasing utility revenue and fulfilling the requirements imposed by regulatory agencies on the power supply. Optimized allocation of protection devices and switches for restoration allows faulted sectors of the system to be isolated and repaired, while the loads of the analyzed feeder are transferred to the set of neighboring feeders. The proposed solution technique is a Genetic Algorithm (GA) developed by exploiting the physical characteristics of the problem. Results obtained through simulations of a real-life circuit are presented. © 2004 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a new approach for solving constrained optimization problems (COP) based on the philosophy of lexicographic goal programming. A two-phase methodology that treats the COP with a multi-objective strategy is used. In the first phase, the objective function is completely disregarded and the entire search effort is directed towards finding a single feasible solution. In the second phase, the problem is treated as a bi-objective optimization problem, turning the constrained problem into a two-objective optimization whose objectives are the original objective function and the constraint violation degree. For the first phase, a methodology based on the progressive hardening of soft constraints is proposed in order to find feasible solutions. The performance of the proposed methodology was tested on 11 well-known benchmark functions.
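The second phase can be sketched as follows, on an assumed toy problem (minimize f subject to g_i(x) <= 0): each candidate is scored by the pair (objective value, constraint violation degree) and candidates are compared by Pareto dominance on that pair. The objective, the constraints and the sample points are placeholders for illustration.

def violation(x, constraints):
    """Constraint violation degree: sum of the positive parts of the g_i(x)."""
    return sum(max(0.0, g(x)) for g in constraints)

def dominates(a, b):
    """Pareto dominance on (objective, violation), both minimized."""
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2     # assumed objective
constraints = [lambda x: x[0] ** 2 - x[1],              # x0^2 - x1 <= 0
               lambda x: x[0] + x[1] - 2.0]             # x0 + x1 <= 2

def score(x):
    return (f(x), violation(x, constraints))

print(score((0.8, 0.8)), score((1.5, 1.5)))   # a feasible and an infeasible candidate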