887 results for nonlinear optimization problems
Abstract:
A neural network model for solving constrained nonlinear optimization problems with bounded variables is presented in this paper. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points. The network is shown to be completely stable and globally convergent to the solutions of constrained nonlinear optimization problems. A fuzzy logic controller is incorporated in the network to minimize convergence time. Simulation results are presented to validate the proposed approach.
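Several of the results below rely on the same valid-subspace construction. As a rough illustration of the idea (not any of the authors' exact network dynamics), the state can be confined to the affine subspace of the linear constraints A v = b by a projection, alternated with a gradient step on the objective and a clipping step for the variable bounds. A minimal numpy sketch on an invented toy problem; the function name, step size, and problem data are all assumptions made here for illustration:

import numpy as np

def valid_subspace_step(v, A, b, grad_f, lr=0.01, lo=0.0, hi=1.0):
    # One illustrative iteration: descend on the objective, project the
    # state back onto the "valid subspace" {v : A v = b}, then clip to
    # the box [lo, hi]. A projected-gradient analogue, not the papers'
    # exact Hopfield update.
    A_pinv = np.linalg.pinv(A)
    v = v - lr * grad_f(v)                                # objective term
    v = (np.eye(len(v)) - A_pinv @ A) @ v + A_pinv @ b    # confinement term
    return np.clip(v, lo, hi)                             # bounded variables

# Toy problem: minimize ||v - c||^2 subject to sum(v) = 1, 0 <= v <= 1.
c = np.array([0.9, 0.2, 0.4])
A = np.ones((1, 3)); b = np.array([1.0])
v = np.full(3, 1.0 / 3.0)
for _ in range(500):
    v = valid_subspace_step(v, A, b, grad_f=lambda v: 2.0 * (v - c))
print(v)   # approaches the feasible minimizer, about [0.733, 0.033, 0.233]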
Design and analysis of an efficient neural network model for solving nonlinear optimization problems
Abstract:
This paper presents an efficient approach based on a recurrent neural network for solving constrained nonlinear optimization. More specifically, a modified Hopfield network is developed, and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points that represent an optimal feasible solution. The main advantage of the developed network is that it handles optimization and constraint terms in different stages with no interference from each other. Moreover, the proposed approach does not require specification for penalty and weighting parameters for its initialization. A study of the modified Hopfield model is also developed to analyse its stability and convergence. Simulation results are provided to demonstrate the performance of the proposed neural network.
Abstract:
Systems based on artificial neural networks have high computational rates due to the use of a massive number of simple processing elements. Neural networks with feedback connections provide a computing model capable of solving a rich class of optimization problems. In this paper, a modified Hopfield network is developed for solving constrained nonlinear optimization problems. The internal parameters of the network are obtained using the valid-subspace technique. Simulated examples are presented as an illustration of the proposed approach.
Abstract:
We consider Lipschitz continuous-time nonlinear optimization problems and provide first-order necessary optimality conditions of both Fritz John and Karush-Kuhn-Tucker types. (C) 2001 Elsevier B.V. Ltd. All rights reserved.
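For reference, the finite-dimensional prototypes of the two kinds of conditions (the paper itself treats continuous-time problems, where these sums become integral functionals over the time horizon). For minimizing f(x) subject to g_i(x) <= 0, i = 1, ..., m:

\[
\text{Fritz John:}\quad \lambda_0 \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) = 0,
\qquad \lambda_i\, g_i(x^*) = 0,
\qquad (\lambda_0, \lambda) \ge 0,\ (\lambda_0, \lambda) \ne 0;
\]
\[
\text{Karush-Kuhn-Tucker:}\quad \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) = 0,
\qquad \lambda \ge 0,
\qquad \lambda_i\, g_i(x^*) = 0,
\]

where passing from the first form to the second (normalizing \(\lambda_0 = 1\)) requires a constraint qualification.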
Abstract:
We discuss sufficient conditions of optimality for nonsmooth continuous-time nonlinear optimization problems under generalized convexity assumptions. These include both first-order and second-order criteria. (C) 1998 Academic Press.
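The classical prototype of such a sufficiency result is the convex case (the paper weakens convexity to generalized convexity and adds second-order variants): if (x^*, \lambda) satisfies the KKT conditions and f and every g_i are convex, then for any feasible x

\[
f(x) \;\ge\; f(x^*) + \nabla f(x^*)^{T}(x - x^*)
\;=\; f(x^*) - \sum_i \lambda_i \nabla g_i(x^*)^{T}(x - x^*)
\;\ge\; f(x^*) - \sum_i \lambda_i\, g_i(x)
\;\ge\; f(x^*),
\]

so x^* is a global minimizer; the steps use, in order, convexity of f, stationarity, convexity of each g_i together with complementarity \(\lambda_i g_i(x^*) = 0\), and feasibility with \(\lambda \ge 0\).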
Abstract:
Neural networks consist of highly interconnected and parallel nonlinear processing elements that are shown to be extremely effective in computation. This paper presents an architecture of recurrent neural networks that can be used to solve several classes of optimization problems. More specifically, a modified Hopfield network is developed and its internal parameters are computed explicitly using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points, which represent a solution of the problem considered. The problems that can be treated by the proposed approach include combinatorial optimization problems, dynamic programming problems, and nonlinear optimization problems.
Abstract:
Large networks occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear and the objective function nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together the important constraints, based on an abstraction of a general network. The third part deals with solution procedures: it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved on a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

A number of common features pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is a possibility of an input (such as power, water, messages, or goods), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs; these equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants, and the results required are the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes are referred to as across variables).

The optimization problem consists in selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function based on the inputs to be minimized, a loss function, or an efficiency function. This mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared with the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables, and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure, called the total-residue approach, has been embedded into the first one. It changes the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve these optimization problems on them. Two types of algorithms are proposed, one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The guiding aim was to define algorithms that are fast and use minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
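The two-stage structure described above can be illustrated by a generic dual-ascent iteration for minimizing f(x) subject to h(x) = 0: stage one solves the stationarity condition in x for fixed multipliers, stage two updates the multipliers from the residual of the equality constraints. A hypothetical numpy sketch (the paper's network-flow formulation is more elaborate, and the toy problem below is invented):

import numpy as np

def two_stage_lagrange(grad_f, hess_f, h, J, x, lam, alpha=0.5, iters=20):
    for _ in range(iters):
        # Stage 1: Newton step on stationarity grad_f(x) + J(x)^T lam = 0
        # (curvature of the constraints is ignored; exact when h is linear).
        r = grad_f(x) + J(x).T @ lam
        x = x - np.linalg.solve(hess_f(x), r)
        # Stage 2: multiplier update driven by the constraint residual h(x).
        lam = lam + alpha * h(x)
    return x, lam

# Toy problem: minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
grad_f = lambda x: 2.0 * x
hess_f = lambda x: 2.0 * np.eye(2)
h = lambda x: np.array([x[0] + x[1] - 1.0])
J = lambda x: np.array([[1.0, 1.0]])
x, lam = two_stage_lagrange(grad_f, hess_f, h, J, np.zeros(2), np.zeros(1))
print(x, lam)   # x -> [0.5, 0.5], lam -> [-1.0]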
Abstract:
A lower-bound limit analysis formulation, using two-dimensional finite elements, the three-dimensional Mohr-Coulomb yield criterion, and nonlinear optimization, is given for an axisymmetric geomechanics stability problem. The optimization was performed using an interior point method based on the logarithmic barrier function. The yield surface was smoothed (1) by removing the tip singularity at the apex of the pyramid in the meridian plane and (2) by eliminating the stress discontinuities at the corners of the yield hexagon in the π-plane. The circumferential stress (σ_θ) need not be assumed. With the proposed methodology, the bearing-capacity factors N_c, N_q, and N_γ of a circular footing have been computed for different values of φ. For φ = 0, the variation of N_c with the factor m, which accounts for a linear increase of cohesion with depth, has been evaluated. Failure patterns for a few cases have also been drawn. The results from the formulation match well with solutions available in the literature. (C) 2014 American Society of Civil Engineers.
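The logarithmic-barrier construction mentioned above folds the inequality constraints into the objective as -mu * sum(log(-g_i(x))) and shrinks mu toward zero. A minimal sketch on an invented toy problem (the paper's actual program optimizes finite-element stress fields against the Mohr-Coulomb criterion; the names and constants here are assumptions):

import numpy as np
from scipy.optimize import minimize

f0 = lambda x: -x[0] - x[1]                          # maximize x0 + x1 ...
g = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0])    # ... inside the unit disc

def barrier_obj(x, mu):
    gv = g(x)
    if np.any(gv >= 0.0):
        return np.inf            # reject infeasible trial points outright
    return f0(x) - mu * np.sum(np.log(-gv))

def barrier_solve(x, mu=1.0, shrink=0.2, rounds=8):
    for _ in range(rounds):
        x = minimize(barrier_obj, x, args=(mu,), method="Nelder-Mead").x
        mu *= shrink             # tighten the barrier and re-solve
    return x

print(barrier_solve(np.zeros(2)))   # approaches [0.707, 0.707]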
Abstract:
In real optimization problems, the analytical expression of the objective function is usually not known, nor are its derivatives, or they are too complex to work with. In these cases it becomes essential to use optimization methods in which the calculation of derivatives, or the verification of their existence, is not necessary: direct search methods, also called derivative-free methods, are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, choosing the penalty parameters is frequently very difficult, because most strategies for choosing them are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a biobjective problem in which a step is accepted if it reduces either the objective function or the constraint violation. This makes filter methods less parameter-dependent than penalty functions. In this work, we present a new direct search method for general constrained optimization, based on simplex methods, that combines the features of the simplex method and filter methods. The method does not compute or approximate any derivatives, penalty constants, or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use it to drive the search. We illustrate the behavior of our algorithm through some examples. The proposed methods were implemented in Java.
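A filter, in the sense used here, is a list of non-dominated pairs (h, f) of constraint violation and objective value; a trial point is acceptable if no stored pair is at least as good in both components. A small sketch of the acceptance test with hypothetical helper names (the paper wraps a simplex search around this test):

# Filter acceptance: accept (h, f) unless some stored pair dominates it.
def acceptable(trial, filt):
    h, f = trial
    return all(not (hk <= h and fk <= f) for hk, fk in filt)

# Insert an accepted pair and drop any entries it dominates.
def add_to_filter(trial, filt):
    h, f = trial
    filt = [(hk, fk) for hk, fk in filt if not (h <= hk and f <= fk)]
    return filt + [(h, f)]

filt = [(0.0, 5.0), (0.3, 2.0)]
print(acceptable((0.1, 3.0), filt))   # True: dominated by neither entry
print(acceptable((0.4, 2.5), filt))   # False: dominated by (0.3, 2.0)
filt = add_to_filter((0.1, 3.0), filt)

Practical filter methods add small envelopes (margins) around each stored pair to force sufficient decrease; the sketch omits them.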
Exact penalties for variational inequalities with applications to nonlinear complementarity problems
Abstract:
In this paper, we present a new reformulation of the KKT system associated with a variational inequality as a semismooth equation. The reformulation is derived from the concept of differentiable exact penalties for nonlinear programming. The best theoretical results are presented for nonlinear complementarity problems, where simple, verifiable conditions ensure that the penalty is exact. We close the paper with some preliminary computational tests on the use of a semismooth Newton method to solve the equation derived from the new reformulation. We also compare its performance with the Newton method applied to classical reformulations based on the Fischer-Burmeister function and on the minimum function. The new reformulation combines the best features of the classical ones, being as easy to solve as the reformulation that uses the Fischer-Burmeister function while requiring as few Newton steps as the one based on the minimum.
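The Fischer-Burmeister function phi(a, b) = sqrt(a^2 + b^2) - a - b vanishes exactly when a >= 0, b >= 0, and ab = 0, so the complementarity problem x >= 0, F(x) >= 0, x_i F_i(x) = 0 becomes the equation system Phi(x)_i = phi(x_i, F_i(x)) = 0. A toy sketch with a plain Newton iteration on a finite-difference Jacobian (a genuine semismooth Newton method, as in the paper, uses an element of the generalized Jacobian instead; the test problem is invented):

import numpy as np

def fb(a, b):
    # Fischer-Burmeister function, applied elementwise.
    return np.sqrt(a**2 + b**2) - a - b

def newton_ncp(F, x, iters=30, eps=1e-7):
    for _ in range(iters):
        Phi = fb(x, F(x))
        J = np.empty((len(x), len(x)))
        for j in range(len(x)):          # finite-difference Jacobian stand-in
            e = np.zeros(len(x)); e[j] = eps
            J[:, j] = (fb(x + e, F(x + e)) - Phi) / eps
        x = x - np.linalg.solve(J, Phi)
    return x

# Toy linear complementarity problem: F(x) = M x + q.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
print(newton_ncp(lambda x: M @ x + q, np.array([1.0, 1.0])))
# -> about [0.5, 0.0]; there F = [0.0, 1.5], so x_i * F_i = 0 holds.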
Abstract:
This paper presents an efficient approach based on a recurrent neural network for solving nonlinear optimization. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points that represent an optimal feasible solution. The main advantage of the developed network is that it treats optimization and constraint terms in different stages with no interference from each other. Moreover, the proposed approach does not require specification of penalty and weighting parameters for its initialization. A study of the modified Hopfield model is also developed to analyze its stability and convergence. Simulation results are provided to demonstrate the performance of the proposed neural network. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
A neural model for solving nonlinear optimization problems is presented in this paper. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points that represent an optimal feasible solution. The network is shown to be completely stable and globally convergent to the solutions of nonlinear optimization problems. A study of the modified Hopfield model is also developed to analyze its stability and convergence. Simulation results are presented to validate the developed methodology.
Abstract:
The ability of neural networks to realize complex nonlinear functions makes them attractive for system identification. This paper describes a novel method using artificial neural networks to solve robust parameter estimation problems for nonlinear models with unknown-but-bounded errors and uncertainties. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the network's convergence to the equilibrium points. A solution of the robust estimation problem with unknown-but-bounded error corresponds to an equilibrium point of the network. Simulation results are presented as an illustration of the proposed approach.
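The unknown-but-bounded setting can be phrased as a feasibility problem: find parameters theta such that every residual |y_k - f(u_k, theta)| stays within the error bound eps. One simple stand-in (not the paper's Hopfield network) is to minimize the worst-case residual; any theta whose worst residual is at most eps is an acceptable robust estimate. A toy sketch with an invented model and data:

import numpy as np
from scipy.optimize import minimize

f = lambda u, th: th[0] * np.exp(th[1] * u)       # invented nonlinear model
u = np.linspace(0.0, 1.0, 20)
eps = 0.05
rng = np.random.default_rng(0)
y = f(u, [2.0, -1.0]) + rng.uniform(-eps, eps, u.size)   # bounded noise

worst = lambda th: np.max(np.abs(y - f(u, th)))   # Chebyshev (min-max) criterion
res = minimize(worst, x0=[1.0, 0.0], method="Nelder-Mead")
print(res.x, worst(res.x) <= eps)   # an acceptable estimate iff the test is True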
Abstract:
The ability of neural networks to realize complex nonlinear functions makes them attractive for system identification. This paper describes a novel barrier method using artificial neural networks to solve robust parameter estimation problems for nonlinear models with unknown-but-bounded errors and uncertainties. This problem can be represented as a typical constrained optimization problem. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the network's convergence to the equilibrium points. A solution of the robust estimation problem with unknown-but-bounded error corresponds to an equilibrium point of the network. Simulation results are presented as an illustration of the proposed approach.