868 results for Nonlinear constrained optimization problems


Relevance: 100.00%

Abstract:

The ability of neural networks to realize complex nonlinear functions makes them attractive for system identification. This paper describes a novel barrier method using artificial neural networks to solve robust parameter estimation problems for nonlinear models with unknown-but-bounded errors and uncertainties. This problem can be formulated as a typical constrained optimization problem. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee convergence of the network to the equilibrium points. A solution for the robust estimation problem with unknown-but-bounded error corresponds to an equilibrium point of the network. Simulation results are presented as an illustration of the proposed approach.
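As a point of reference for the kind of problem being formulated here, the sketch below poses a robust estimation problem with unknown-but-bounded errors as a constrained optimization problem and solves it with a linear program rather than the paper's modified Hopfield network; the regressor matrix Phi, the noise bound, and the data are illustrative assumptions.

```python
# Illustrative sketch: robust estimation with unknown-but-bounded errors
# posed as a constrained optimization problem (here a linear program),
# NOT the modified Hopfield network described in the abstract.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])
Phi = rng.normal(size=(30, 2))              # regressors (assumed known)
e = rng.uniform(-0.1, 0.1, size=30)         # unknown-but-bounded noise, |e_i| <= 0.1
y = Phi @ theta_true + e

# Minimize t subject to -t <= y_i - phi_i^T theta <= t  (min-max residual).
# Decision variables: z = [theta_1, theta_2, t].
n = Phi.shape[1]
c = np.r_[np.zeros(n), 1.0]
A_ub = np.block([[ Phi, -np.ones((30, 1))],   #  phi_i^T theta - t <= y_i
                 [-Phi, -np.ones((30, 1))]])  # -phi_i^T theta - t <= -y_i
b_ub = np.r_[y, -y]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)])
theta_hat, t_hat = res.x[:n], res.x[n]
print("estimate:", theta_hat, "worst-case residual:", t_hat)
```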

Relevance: 100.00%

Abstract:

The random early detection (RED) technique has seen a lot of research over the years. However, the functional relationship between RED performance and its parameters, viz., queue weight (omega_q), marking probability (max_p), minimum threshold (min_th) and maximum threshold (max_th), is not analytically available. In this paper, we formulate a probabilistic constrained optimization problem by assuming a nonlinear relationship between the RED average queue length and its parameters. This problem involves all the RED parameters as the variables of the optimization problem. We use the barrier and the penalty function approaches for its solution. However (as above), the exact functional relationship between the barrier and penalty objective functions and the optimization variables is not known, but noisy samples of these are available for different parameter values. Thus, for obtaining the gradient and Hessian of the objective, we use certain recently developed simultaneous perturbation stochastic approximation (SPSA) based estimates of these. We propose two four-timescale stochastic approximation algorithms based on certain modified second-order SPSA updates for finding the optimum RED parameters. We present the results of detailed simulation experiments conducted over different network topologies and network/traffic conditions/settings, comparing the performance of our algorithms with variants of RED and a few other well-known active queue management (AQM) techniques discussed in the literature.
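The core ingredient mentioned here, a simultaneous perturbation gradient estimate built from two noisy objective samples per iteration, can be sketched in a few lines. This is a plain first-order SPSA loop on an assumed toy cost, not the paper's four-timescale second-order algorithm, and `noisy_cost` merely stands in for a simulated RED performance measure.

```python
# Minimal first-order SPSA sketch: the gradient is estimated from two
# noisy function evaluations per iteration (toy cost, not RED itself).
import numpy as np

rng = np.random.default_rng(1)

def noisy_cost(theta):
    # Hypothetical noisy performance measure of the parameter vector.
    return np.sum((theta - np.array([0.002, 0.1, 5.0, 15.0])) ** 2) + 0.01 * rng.normal()

theta = np.array([0.01, 0.5, 1.0, 3.0])    # e.g. (w_q, max_p, min_th, max_th)
a, c = 0.5, 0.1
for k in range(1, 2001):
    ak, ck = a / k, c / k ** 0.25
    delta = rng.choice([-1.0, 1.0], size=theta.size)        # Rademacher perturbation
    g_hat = (noisy_cost(theta + ck * delta)
             - noisy_cost(theta - ck * delta)) / (2.0 * ck * delta)
    theta = theta - ak * g_hat                               # gradient descent step
print("estimated optimum:", theta)   # drifts toward the minimizer of the toy cost
```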

Relevance: 100.00%

Abstract:

There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear and the objective function nonlinear, or both may be nonlinear). The second part develops a mathematical model that puts together the important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there may be an input (such as power, water, a message, goods, etc.), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the result required is the set of flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function of the inputs to be minimized, a loss function or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equality constraints are dominant compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables x and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) occurs in stage two as well.

A second solution procedure has also been embedded into the first one. This is called the total residue approach. It modifies the equality constraints so that faster convergence of the iterations is obtained. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems on them. Two types of algorithms have been proposed, one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case. These are the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is fast and uses minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
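The two-stage structure described above can be illustrated on the simplest case, a quadratic arc-cost flow problem with linear node-balance constraints, where the multipliers come from one linear solve and the flows are recovered from the stationarity condition. The three-node network, arc weights, and injections below are made up for illustration, and the closed-form solves stand in for the paper's Newton-type stages on a nonlinear network.

```python
# Illustrative two-stage Lagrange-multiplier solve for a tiny flow network:
# minimize (1/2) x^T Q x  subject to  A x = b  (node balance).
# A quadratic toy stands in for the nonlinear case, where each stage
# would be one linear solve inside a Newton iteration.
import numpy as np

# Reduced node-arc incidence matrix (node 3 dropped as reference node).
# Arcs: 1->2, 2->3, 1->3.
A = np.array([[ 1.0, 0.0, 1.0],
              [-1.0, 1.0, 0.0]])
b = np.array([1.0, 0.0])        # injections at nodes 1 and 2
Q = np.diag([1.0, 2.0, 1.0])    # quadratic arc-cost weights

Qinv = np.linalg.inv(Q)
# Multiplier step: lambda from (A Q^-1 A^T) lambda = -b
lam = np.linalg.solve(A @ Qinv @ A.T, -b)
# Variable step: flows x from stationarity Q x + A^T lambda = 0
x = -Qinv @ (A.T @ lam)

print("arc flows:", x)              # [0.25, 0.25, 0.75]
print("node balance A x:", A @ x)   # equals b
```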

Relevance: 100.00%

Abstract:

Systems based on artificial neural networks have high computational rates due to the use of a massive number of simple processing elements. Neural networks with feedback connections provide a computing model capable of solving a rich class of optimization problems. In this paper, a modified Hopfield network is developed for solving constrained nonlinear optimization problems. The internal parameters of the network are obtained using the valid-subspace technique. Simulated examples are presented as an illustration of the proposed approach.
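A rough picture of the valid-subspace idea, keeping the state on the affine set defined by linear equality constraints while descending the objective, can be given with a simple projected-gradient iteration. The sketch below is a discrete-time caricature with an assumed quadratic objective, not the continuous modified Hopfield dynamics or the exact parameterization used in the paper.

```python
# Projected-gradient caricature of the valid-subspace idea:
# keep v on the affine set {v : T v = s} while descending f(v).
# Assumed toy objective; not the paper's modified Hopfield dynamics.
import numpy as np

T = np.array([[1.0, 1.0, 1.0]])          # one linear equality constraint: sum(v) = 1
s = np.array([1.0])
target = np.array([0.9, 0.4, -0.1])      # f(v) = 0.5 * ||v - target||^2

# Projection onto the valid subspace: P v + v_eq always satisfies T v = s.
Tt = T.T @ np.linalg.inv(T @ T.T)
P = np.eye(3) - Tt @ T
v_eq = Tt @ s

v = np.zeros(3)
eta = 0.2
for _ in range(200):
    grad = v - target                    # gradient of the toy objective
    v = P @ (v - eta * grad) + v_eq      # descend, then re-enter the valid subspace
print("v:", v, "T v:", T @ v)            # T v stays equal to s
```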

Relevance: 100.00%

Abstract:

Variational inequalities and related problems may be solved via smooth bound-constrained optimization. A comprehensive discussion of the important features involved with this strategy is presented. Complementarity problems and mathematical programming problems with equilibrium constraints are included in this report. Numerical experiments are discussed. Conclusions and directions for future research are indicated.
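As one concrete instance of the kind of reformulation discussed here, a complementarity problem can be recast as a smooth minimization by squaring the Fischer-Burmeister function; the sketch below does this for a small linear complementarity problem and minimizes the resulting merit function with a general-purpose solver. This is a standard reformulation chosen for illustration, not necessarily the bound-constrained strategy of the report, and the matrix M and vector q are arbitrary.

```python
# Smooth merit-function sketch for a linear complementarity problem:
# find x >= 0 with F(x) = M x + q >= 0 and x . F(x) = 0, by minimizing
# the squared Fischer-Burmeister residual (an illustrative reformulation).
import numpy as np
from scipy.optimize import minimize

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-3.0, -1.0])

def fb(a, b):
    # Fischer-Burmeister function: zero iff a >= 0, b >= 0, a*b = 0.
    return np.sqrt(a**2 + b**2) - a - b

def merit(x):
    return 0.5 * np.sum(fb(x, M @ x + q) ** 2)

res = minimize(merit, x0=np.ones(2), method="BFGS")
x = res.x
print("x:", x, "F(x):", M @ x + q, "merit:", merit(x))
```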

Relevance: 100.00%

Abstract:

Numerically discretized dynamic optimization problems having active inequality and equality path constraints that, along with the dynamics, induce locally high-index differential algebraic equations often cause the optimizer to fail to converge or to produce degraded control solutions. In many applications, regularizing the numerically discretized problem in direct transcription schemes by perturbing the high-index path constraints helps the optimizer to converge to useful control solutions. For complex engineering problems with many constraints, it is often difficult to find effective nondegenerate perturbations that produce useful solutions in some neighborhood of the correct solution. In this paper we describe a numerical discretization that regularizes the numerically consistent discretized dynamics and does not perturb the path constraints. For all values of the regularization parameter the discretization remains numerically consistent with the dynamics and the path constraints specified in the original problem. The regularization is quantifiable in terms of the time step size in the mesh and the regularization parameter. For fully regularized systems the scheme converges linearly in the time step size. The method is illustrated with examples.

Relevance: 100.00%

Abstract:

This paper is concerned with the reliability optimization of a spatially redundant system, subject to various constraints, by using nonlinear programming. The constrained optimization problem is converted into a sequence of unconstrained optimization problems by using a penalty function. The new problem is then solved by the conjugate gradient method. The advantages of this method are highlighted.
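The conversion mentioned here, replacing a constrained problem by a sequence of unconstrained penalized problems solved with the conjugate gradient method, can be sketched generically as follows; the toy objective and constraint are placeholders, not the reliability model of the paper.

```python
# Sequential quadratic-penalty sketch solved with nonlinear conjugate
# gradients (toy objective/constraint, not the paper's reliability model).
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def violation(x):
    # Inequality constraint g(x) = x0 + x1 - 2 <= 0, measured as max(0, g).
    return max(0.0, x[0] + x[1] - 2.0)

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:
    penalized = lambda x, r=r: objective(x) + r * violation(x) ** 2
    x = minimize(penalized, x, method="CG").x   # unconstrained CG solve
    print(f"r={r:7.1f}  x={x}  violation={violation(x):.4f}")
```

As the penalty parameter grows, the unconstrained minimizers approach the constrained optimum at (0.5, 1.5) and the measured violation shrinks toward zero.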

Relevance: 100.00%

Abstract:

The overall performance of random early detection (RED) routers in the Internet is determined by the settings of their associated parameters. The non-availability of a functional relationship between RED performance and its parameters makes it difficult to implement optimization techniques directly in order to optimize the RED parameters. In this paper, we formulate a generic optimization framework using a stochastically bounded delay metric to dynamically adapt the RED parameters. The constrained optimization problem thus formulated is solved using traditional nonlinear programming techniques. Here, we implement the barrier and penalty function approaches, respectively. We adopt a second-order nonlinear optimization framework and propose a novel four-timescale stochastic approximation algorithm to estimate the gradient and Hessian of the barrier and penalty objectives and to update the RED parameters. A convergence analysis of the proposed algorithm is briefly sketched. We perform simulations to evaluate the performance of our algorithm with both barrier and penalty objectives and compare these with RED and one of its variants from the literature. We observe improved performance with our proposed algorithm over both RED and this variant.

Relevance: 100.00%

Abstract:

A new 'generalized model predictive static programming (G-MPSP)' technique is presented in this paper in the continuous-time framework for rapidly solving a class of finite-horizon nonlinear optimal control problems with hard terminal constraints. A key feature of the technique is the backward propagation of a small-dimensional weight matrix dynamics, using which the control history gets updated. This feature, together with the fact that it leads to a static optimization problem, accounts for its high computational efficiency. It has been shown that under Euler integration, it is equivalent to the existing model predictive static programming technique, which operates on a discrete-time approximation of the problem. Performance of the proposed technique is demonstrated by solving a challenging three-dimensional impact-angle-constrained missile guidance problem. The problem demands that the missile meet constraints on both azimuth and elevation angles in addition to achieving near-zero miss distance, while minimizing the lateral acceleration demand throughout its flight path. Both stationary and maneuvering ground targets are considered in the simulation studies. Effectiveness of the proposed guidance has been verified by considering first-order autopilot lag as well as various target maneuvers.

Relevance: 100.00%

Abstract:

A new generalized model predictive static programming technique is presented for rapidly solving a class of finite-horizon nonlinear optimal control problems with hard terminal constraints. Two key features behind its high computational efficiency are the one-time backward integration of a small-dimensional weighting matrix dynamics, followed by a static optimization formulation that requires only a static Lagrange multiplier to update the control history. It turns out that under Euler integration and rectangular approximation of finite integrals it is equivalent to the existing model predictive static programming technique. In addition to the benchmark double integrator problem, usefulness of the proposed technique is demonstrated by solving a three-dimensional angle-constrained guidance problem for an air-to-ground missile, which demands that the missile meet constraints on both azimuth and elevation angles at the impact point in addition to achieving near-zero miss distance, while minimizing the lateral acceleration demand throughout its flight path. Simulation studies include maneuvering ground targets along with a first-order autopilot lag. Comparison studies with classical augmented proportional navigation guidance and modern general explicit guidance lead to the conclusion that the proposed guidance is superior to both and has a larger capture region as well.

Relevance: 100.00%

Abstract:

Constrained nonlinear optimization problems are usually solved using penalty or barrier methods combined with unconstrained optimization methods. Another alternative used to solve constrained nonlinear optimization problems is the filters method. Filters methods, introduced by Fletcher and Leyffer in 2002, have been widely used in several areas of constrained nonlinear optimization. These methods treat the optimization problem as bi-objective: they attempt to minimize the objective function and a continuous function that aggregates the constraint violation functions. Audet and Dennis have presented the first filters method for derivative-free nonlinear programming, based on pattern search methods. Motivated by this work, we have developed a new direct search method, based on simplex methods, for general constrained optimization, which combines the features of the simplex method and the filters method. This work presents a new variant of these methods which combines the filters method with other direct search methods, and proposes some alternatives to aggregate the constraint violation functions.
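The central mechanism of a filters method, accepting a trial point only if no stored pair of objective value and aggregated constraint violation dominates it, is small enough to sketch directly. The snippet below keeps only that acceptance/dominance logic, without the sufficient-decrease margins or the simplex/pattern-search machinery around it, and the sample points are arbitrary.

```python
# Bare-bones filter for constrained optimization: a trial pair
# (f, h) -- objective value and aggregated constraint violation -- is
# accepted only if no stored pair dominates it (lower-or-equal f AND h).
# Margins and the surrounding direct-search loop are omitted.

def dominates(p, q):
    """p dominates q if p is no worse in both f and h."""
    return p[0] <= q[0] and p[1] <= q[1]

def try_accept(filt, trial):
    if any(dominates(entry, trial) for entry in filt):
        return False                     # rejected: dominated by the filter
    # Accepted: add it and drop entries it now dominates.
    filt[:] = [e for e in filt if not dominates(trial, e)] + [trial]
    return True

filt = []
for point in [(5.0, 0.4), (4.0, 0.6), (4.5, 0.1), (4.6, 0.2), (3.0, 0.0)]:
    print(point, "accepted" if try_accept(filt, point) else "rejected")
print("final filter:", sorted(filt))
```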

Relevance: 100.00%

Abstract:

Finding the optimal value of a problem is common in many areas of knowledge, and in many cases it requires solving nonlinear optimization problems. For some of those problems it is not possible to determine an expression for the objective function and/or the constraints: they may be the result of experimental procedures or may be non-smooth, among other reasons. To solve such problems, an API was implemented containing methods to solve both constrained and unconstrained problems. This API was developed to be used either locally, on the computer where the application is being executed, or remotely, on a server. To obtain maximum flexibility from both the programmers' and users' points of view, problems can be defined as a Java class (because this API was developed in Java) or as a simple text input that is sent to the API. To make the latter possible, an expression evaluator was also implemented in the API. One of the drawbacks of this expression evaluator is that it is slower than native Java code. This paper presents a solution that combines both options: the problem can be expressed at run time as a string of characters that is converted to Java code, compiled and loaded dynamically. To widen the target audience of the API, this new expression evaluator is also compatible with the AMPL format.
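The run-time idea described in the last sentence, turning a user-supplied text expression into compiled code instead of interpreting it, can be illustrated with a short sketch; Python's compile is used here only as a stand-in for the Java compilation pipeline of the API, so this shows the general pattern rather than the actual implementation or its AMPL support.

```python
# General pattern only: compile a user-supplied objective expression into
# a callable at run time (Python stand-in for the Java-based API).
import math

def make_objective(expression, variables):
    """Compile a text expression such as '(x - 1)**2 + math.sin(y)' into a function."""
    code = compile(expression, "<user-expression>", "eval")
    def objective(**values):
        scope = {"math": math, **{v: values[v] for v in variables}}
        return eval(code, {"__builtins__": {}}, scope)
    return objective

f = make_objective("(x - 1)**2 + math.sin(y)", ["x", "y"])
print(f(x=1.5, y=0.0))     # 0.25, evaluated from the compiled expression
```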

Relevance: 100.00%

Abstract:

We consider the linear equality-constrained least squares problem (LSE) of minimizing ${\|c - Gx\|}_2 $, subject to the constraint $Ex = p$. A preconditioned conjugate gradient method is applied to the Kuhn–Tucker equations associated with the LSE problem. We show that our method is well suited for structural optimization problems in reliability analysis and optimal design. Numerical tests are performed on an Alliant FX/8 multiprocessor and a Cray-X-MP using some practical structural analysis data.
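The Kuhn–Tucker equations referred to here have a simple explicit block form: stationarity and feasibility for minimizing ${\|c - Gx\|}_2$ subject to $Ex = p$ give $G^TGx + E^T\lambda = G^Tc$ together with $Ex = p$. The sketch below assembles this system for small random data and uses a dense direct solve for clarity, where the paper applies a preconditioned conjugate gradient method; the dimensions and the constraint are arbitrary.

```python
# Assemble and solve the Kuhn-Tucker (KKT) system of the equality-
# constrained least squares problem  min ||c - G x||_2  s.t.  E x = p.
# A dense direct solve is used here for clarity; the paper applies a
# preconditioned conjugate gradient method to this system instead.
import numpy as np

rng = np.random.default_rng(2)
G = rng.normal(size=(6, 3))      # arbitrary illustrative dimensions
c = rng.normal(size=6)
E = np.array([[1.0, 1.0, 1.0]])  # single equality constraint: sum(x) = 1
p = np.array([1.0])

n, m = G.shape[1], E.shape[0]
KKT = np.block([[G.T @ G, E.T],
                [E, np.zeros((m, m))]])
rhs = np.r_[G.T @ c, p]
sol = np.linalg.solve(KKT, rhs)
x, lam = sol[:n], sol[n:]
print("x:", x, "E x:", E @ x)    # E x equals p
```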

Relevance: 100.00%

Abstract:

Global optimization seeks a minimum or maximum of a multimodal function over a discrete or continuous domain. In this paper, we propose a hybrid heuristic, based on the CGRASP and GENCAN methods, for finding approximate solutions for continuous global optimization problems subject to box constraints. Experimental results illustrate the relative effectiveness of CGRASP-GENCAN on a set of benchmark multimodal test functions.
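The overall hybrid pattern, a stochastic global phase that scatters candidate points over the box and a local solver that refines each one, can be sketched as follows. Uniform random sampling stands in here for the CGRASP construction phase and SciPy's L-BFGS-B for GENCAN, so this only shows the shape of the method, exercised on a standard multimodal test function.

```python
# Multistart sketch of the global/local hybrid pattern: random points in
# the box (stand-in for the CGRASP phase) refined by a bound-constrained
# local solver (L-BFGS-B as a stand-in for GENCAN).
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # Standard multimodal test function; global minimum 0 at the origin.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

lower, upper = -5.12, 5.12
bounds = [(lower, upper)] * 2
rng = np.random.default_rng(3)

best_x, best_f = None, np.inf
for _ in range(50):
    x0 = rng.uniform(lower, upper, size=2)                           # global phase
    res = minimize(rastrigin, x0, method="L-BFGS-B", bounds=bounds)  # local phase
    if res.fun < best_f:
        best_x, best_f = res.x, res.fun
print("best point:", best_x, "value:", best_f)
```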

Relevance: 100.00%

Abstract:

This paper presents an efficient neural network for solving constrained nonlinear optimization problems. More specifically, a two-stage neural network architecture is developed and its internal parameters are computed using the valid-subspace technique. The main advantage of the developed network is that it treats the optimization and constraint terms in separate stages, without interference between them. Moreover, the proposed approach does not require specification of penalty or weighting parameters for its initialization.