979 results for Constrained nonlinear optimization
Abstract:
This paper presents an efficient neural network for solving constrained nonlinear optimization problems. More specifically, a two-stage neural network architecture is developed and its internal parameters are computed using the valid-subspace technique. The main advantage of the developed network is that it treats the optimization and constraint terms in separate stages, without interference between them. Moreover, the proposed approach does not require the specification of penalty or weighting parameters for its initialization.
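The abstract gives no equations, so the following is only a minimal sketch of the two-stage idea it describes, under simplifying assumptions: the feasible set is a single linear equality a·x = b, the objective is the smooth quadratic f(x) = 0.5*||x - c||^2, and one stage takes a plain gradient step while the other projects back onto the constraint subspace (the role played by the valid-subspace technique). It is not the paper's network; all names and values are illustrative.

    // Illustrative two-stage iteration (not the paper's neural network).
    // Stage 1 updates the point against the objective only; stage 2 restores
    // feasibility with respect to the single equality constraint a.x = b.
    public class TwoStageSketch {
        public static void main(String[] args) {
            double[] a = {1, 1, 1};            // constraint: x1 + x2 + x3 = 1
            double b = 1.0;
            double[] c = {2.0, -1.0, 0.5};     // unconstrained minimiser of f(x) = 0.5*||x - c||^2
            double[] x = {0, 0, 0};
            double eta = 0.1;                  // step size of the optimization stage
            double aa = dot(a, a);

            for (int k = 0; k < 200; k++) {
                // Optimization stage: gradient step on f, with grad f(x) = x - c.
                for (int i = 0; i < x.length; i++) x[i] -= eta * (x[i] - c[i]);

                // Constraint stage: project back onto the subspace a.x = b.
                double corr = (b - dot(a, x)) / aa;
                for (int i = 0; i < x.length; i++) x[i] += corr * a[i];
            }
            System.out.printf("x = (%.4f, %.4f, %.4f), a.x = %.4f%n", x[0], x[1], x[2], dot(a, x));
        }

        static double dot(double[] u, double[] v) {
            double s = 0;
            for (int i = 0; i < u.length; i++) s += u[i] * v[i];
            return s;
        }
    }

Because the two stages never mix objective and constraint information in a single penalty term, no penalty weight has to be tuned, which is the property the abstract emphasises.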
Abstract:
Variational inequalities and related problems may be solved via smooth bound constrained optimization. A comprehensive discussion of the important features involved in this strategy is presented. Complementarity problems and mathematical programming problems with equilibrium constraints are included in this report. Numerical experiments are discussed. Conclusions and directions for future research are indicated.
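As a concrete instance of the strategy sketched in the abstract, the snippet below recasts a small linear complementarity problem (find x >= 0 with F(x) = Mx + q >= 0 and x_i F_i(x) = 0) as the smooth bound-constrained problem of minimising the squared Fischer-Burmeister merit function over x >= 0, handled here by projected coordinate descent. The data M, q and the solver are illustrative choices, not taken from the report.

    // Hedged sketch: an NCP rewritten as smooth bound-constrained minimisation.
    public class BoundConstrainedNCP {
        static final double[][] M = {{2, 1}, {1, 2}};
        static final double[] q = {-1, -1};

        static double[] F(double[] x) {                 // F(x) = M x + q
            double[] y = new double[x.length];
            for (int i = 0; i < x.length; i++) {
                y[i] = q[i];
                for (int j = 0; j < x.length; j++) y[i] += M[i][j] * x[j];
            }
            return y;
        }

        static double psi(double[] x) {                 // 0.5 * sum phi(x_i, F_i)^2
            double[] f = F(x);
            double s = 0;
            for (int i = 0; i < x.length; i++) {
                double phi = Math.sqrt(x[i] * x[i] + f[i] * f[i]) - x[i] - f[i];
                s += 0.5 * phi * phi;
            }
            return s;
        }

        public static void main(String[] args) {
            double[] x = {1, 1};
            double h = 1e-6, eta = 0.05;
            for (int k = 0; k < 2000; k++) {
                for (int i = 0; i < x.length; i++) {
                    double xi = x[i];                    // central-difference derivative in x_i
                    x[i] = xi + h; double fp = psi(x);
                    x[i] = xi - h; double fm = psi(x);
                    x[i] = xi;
                    double g = (fp - fm) / (2 * h);
                    x[i] = Math.max(0, xi - eta * g);    // projected coordinate-descent step
                }
            }
            System.out.printf("x = (%.4f, %.4f), Psi = %.2e%n", x[0], x[1], psi(x));
        }
    }

The bound constraint x >= 0 is the only constraint left after the reformulation, which is exactly what makes simple bound-constrained solvers applicable.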
Abstract:
García et al. present a class of column generation (CG) algorithms for nonlinear programs. The main theoretical motivation is that, under some circumstances, finite convergence can be achieved, in much the same way as for the classic simplicial decomposition method; the main practical motivation is that the class contains certain nonlinear column generation problems that can accelerate the convergence of a solution approach which generates a sequence of feasible points. These algorithms can, for example, accelerate simplicial decomposition schemes by making the subproblems nonlinear. This paper complements the theoretical study of the asymptotic and finite convergence of these methods given in [1] with an experimental study focused on their computational efficiency. Three types of numerical experiments are conducted. The first group of test problems is designed to study the parameters involved in these methods. The second group is designed to investigate the role and the computation of the prolongation of the generated columns to the relative boundary. The last one is designed to carry out a more complete investigation of the difference in computational efficiency between linear and nonlinear column generation approaches. For this investigation, two types of test problems are considered: the first is the nonlinear, capacitated single-commodity network flow problem, of which several large-scale instances with varied degrees of nonlinearity and total capacity are constructed and investigated, and the second is a combined traffic assignment model.
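To make the relation between the restricted master problem and the column generation subproblem concrete, here is a minimal, purely illustrative simplicial decomposition loop with linear column generation (the simplest member of the family; the nonlinear subproblems studied in the paper are not reproduced). It minimises a small quadratic over the unit simplex; the restricted master over the convex hull of the generated columns is solved approximately by exponentiated gradient, an arbitrary choice for this sketch.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Illustrative simplicial decomposition / column generation loop.
    // Problem: min f(x) = 0.5*||x - t||^2 over the unit simplex in R^4;
    // here t itself lies in the simplex, so the optimum is t.
    public class SimplicialDecompositionSketch {
        static final double[] t = {0.1, 0.2, 0.7, 0.0};
        static final int n = t.length;

        public static void main(String[] args) {
            List<double[]> cols = new ArrayList<>();
            double[] start = new double[n];
            start[0] = 1;                                  // start from a single vertex
            cols.add(start);
            double[] x = start.clone();

            for (int outer = 0; outer < 10; outer++) {
                // Column generation: the linearised subproblem min grad(x).y over the
                // simplex is solved by the vertex with the smallest gradient component.
                double[] g = grad(x);
                int j = 0;
                for (int i = 1; i < n; i++) if (g[i] < g[j]) j = i;
                double[] col = new double[n];
                col[j] = 1;
                cols.add(col);                             // (a real code would skip duplicates)

                // Restricted master: min f(sum_k lambda_k * col_k) with lambda in the
                // simplex, solved approximately by exponentiated-gradient steps.
                int m = cols.size();
                double[] lam = new double[m];
                Arrays.fill(lam, 1.0 / m);
                for (int it = 0; it < 500; it++) {
                    x = combine(cols, lam);
                    double[] gx = grad(x);
                    double z = 0;
                    for (int k = 0; k < m; k++) {
                        lam[k] *= Math.exp(-0.5 * dot(cols.get(k), gx));  // d f / d lambda_k
                        z += lam[k];
                    }
                    for (int k = 0; k < m; k++) lam[k] /= z;
                }
                x = combine(cols, lam);
            }
            System.out.printf("x = (%.3f, %.3f, %.3f, %.3f)%n", x[0], x[1], x[2], x[3]);
        }

        static double[] grad(double[] x) {                 // grad f(x) = x - t
            double[] g = new double[n];
            for (int i = 0; i < n; i++) g[i] = x[i] - t[i];
            return g;
        }

        static double[] combine(List<double[]> cols, double[] lam) {
            double[] x = new double[n];
            for (int k = 0; k < cols.size(); k++)
                for (int i = 0; i < n; i++) x[i] += lam[k] * cols.get(k)[i];
            return x;
        }

        static double dot(double[] u, double[] v) {
            double s = 0;
            for (int i = 0; i < n; i++) s += u[i] * v[i];
            return s;
        }
    }

In the nonlinear CG variants discussed in the paper the subproblem that produces new columns is itself nonlinear, which is what accelerates the outer loop; that refinement is deliberately left out of this sketch.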
Abstract:
Dynamical systems that involve impacts frequently arise in engineering. This Letter reports a study of such a system at the microscale, consisting of a nonlinear resonator operating with a unilateral impact. The microresonators were fabricated on silicon-on-insulator wafers using a one-mask process and then characterised using capacitive driving and sensing. Numerical results concerning the dynamics of this vibro-impact system were verified by the experiments. Bifurcation analysis was used to provide a qualitative scenario of the system's steady-state solutions as a function of both the amplitude and the frequency of the external driving sinusoidal voltage. The results show that the amplitude of the resonant peak is levelled off owing to the impact effect and that the bandwidth of impacting depends on the nonlinearity and the operating conditions.
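The amplitude-levelling effect mentioned at the end of the abstract can be reproduced qualitatively with a generic vibro-impact model; the toy simulation below (a driven Duffing-type oscillator with a rigid unilateral stop and a coefficient-of-restitution impact rule) is only an illustration and uses none of the Letter's device parameters or equations.

    // Purely illustrative vibro-impact oscillator, not the microresonator model.
    public class VibroImpactSketch {
        public static void main(String[] args) {
            double mass = 1.0, cDamp = 0.02, k = 1.0, k3 = 0.1;  // damping, linear & cubic stiffness
            double f0 = 0.5, omega = 1.0;                        // drive amplitude and frequency
            double gap = 0.8, rest = 0.8;                        // stop position and restitution
            double x = 0, v = 0, dt = 1e-3;
            double peak = 0;

            for (double time = 0; time < 500; time += dt) {
                double force = f0 * Math.cos(omega * time) - cDamp * v - k * x - k3 * x * x * x;
                v += dt * force / mass;                          // semi-implicit Euler step
                x += dt * v;
                if (x > gap) {                                   // unilateral impact at the stop
                    x = gap;
                    v = -rest * v;
                }
                if (time > 400) peak = Math.max(peak, x);        // steady-state peak toward the stop
            }
            System.out.printf("peak displacement toward the stop ~ %.3f (limited by the gap %.2f)%n",
                    peak, gap);
        }
    }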
Abstract:
2000 Mathematics Subject Classification: 90C48, 49N15, 90C25
Abstract:
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the ε_k-global minimization of the Augmented Lagrangian with simple constraints, where ε_k → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
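The outer loop of an Augmented Lagrangian method of this kind can be sketched as follows; the subproblem solver here is plain gradient descent on a toy equality-constrained problem, standing in for the ε_k-global αBB solves used in the paper, and all numbers are illustrative.

    // Hedged sketch of an Augmented Lagrangian outer loop on
    // min x1^2 + x2^2  s.t.  h(x) = x1 + x2 - 1 = 0  (solution (0.5, 0.5)).
    public class AugLagSketch {
        public static void main(String[] args) {
            double[] x = {0, 0};
            double lambda = 0, rho = 10;

            for (int k = 0; k < 15; k++) {                     // outer iterations
                for (int it = 0; it < 5000; it++) {            // inner subproblem (gradient descent)
                    double h = x[0] + x[1] - 1;
                    double g0 = 2 * x[0] + lambda + rho * h;   // d L_rho / d x1
                    double g1 = 2 * x[1] + lambda + rho * h;   // d L_rho / d x2
                    double eta = 1.0 / (2 + 2 * rho);          // safe step for this quadratic
                    x[0] -= eta * g0;
                    x[1] -= eta * g1;
                }
                double h = x[0] + x[1] - 1;
                lambda += rho * h;                             // first-order multiplier update
                rho *= 2;                                      // tighten the penalty
            }
            System.out.printf("x = (%.4f, %.4f), h(x) = %.2e%n", x[0], x[1], x[0] + x[1] - 1);
        }
    }

The multiplier update and the increasing penalty mirror the standard Augmented Lagrangian scheme; the ε-global convergence guarantee of the paper comes from the ε_k-global subproblem solves, which this sketch does not provide.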
Abstract:
This work presents the application of a multiobjective evolutionary algorithm (MOEA) to the optimal power flow (OPF) problem. The OPF is modeled as a constrained nonlinear optimization problem that is non-convex, large-scale, and has both continuous and discrete variables. The violated inequality constraints are treated as objective functions of the problem. This strategy allows the physical and operational restrictions to be met without compromising the quality of the solutions found. The developed MOEA is based on Pareto dominance and employs a diversity-preserving mechanism to avoid premature convergence of the algorithm to locally optimal solutions. Fuzzy set theory is employed to extract the best compromise solutions from the Pareto set. Results for the IEEE-30, RTS-96 and IEEE-354 test systems are presented to validate the efficiency of the proposed model and solution technique.
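The key device in the abstract, treating violated inequality constraints as additional objectives, boils down to a Pareto-dominance test over (objectives, total violation). The snippet below shows that comparison on made-up objectives and constraints; it is not the paper's MOEA, which additionally uses diversity preservation and fuzzy compromise extraction.

    // Hedged sketch: constraint violation handled as an extra objective in
    // a Pareto-dominance comparison.  Objectives and constraints are invented.
    public class ParetoConstraintSketch {
        static double[] objectives(double[] x) {
            double f1 = x[0] * x[0] + x[1] * x[1];
            double f2 = (x[0] - 1) * (x[0] - 1) + x[1] * x[1];
            double violation = Math.max(0, 0.8 - x[0])          // constraint x0 >= 0.8
                             + Math.max(0, x[1] - 0.5);         // constraint x1 <= 0.5
            return new double[] {f1, f2, violation};
        }

        // a dominates b if it is no worse in every entry and strictly better in one.
        static boolean dominates(double[] fa, double[] fb) {
            boolean strictlyBetter = false;
            for (int i = 0; i < fa.length; i++) {
                if (fa[i] > fb[i]) return false;
                if (fa[i] < fb[i]) strictlyBetter = true;
            }
            return strictlyBetter;
        }

        public static void main(String[] args) {
            double[] feasible = {0.9, 0.1};
            double[] infeasible = {0.2, 0.9};
            System.out.println(dominates(objectives(feasible), objectives(infeasible)));  // true
        }
    }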
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This work presents a method for estimating knee torque from electromyographic (EMG) signals during robot-assisted rehabilitation therapy. The EMG signals, acquired from five muscles involved in knee flexion and extension, are processed to obtain the muscle activations. Then, using a simple muscle contraction model, the muscle forces are computed and, using the joint geometry, the knee torque. The muscle activation and contraction functions have bounded parameters that must be calibrated for each user, the adjustment being made by minimizing the error between the estimated torque and the torque measured at the joint using inverse dynamics. Two iterative methods for nonlinear functions are compared as constrained optimization techniques for parameter calibration: Gradient Descent and Quasi-Newton. Signal processing, parameter calibration and computation of the estimated torque were implemented in MATLAB®; the measured torque was computed in OpenSim with its inverse dynamics tool.
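A much-reduced sketch of the calibration step is given below: bounded parameters are adjusted to minimise the squared error between the estimated and the measured torque using projected gradient descent (the Quasi-Newton alternative compared in the work is not shown). The torque model, gains, bounds and data are hypothetical placeholders rather than the paper's muscle activation/contraction model.

    // Hedged sketch of bounded-parameter calibration against measured torque.
    public class TorqueCalibrationSketch {
        // Hypothetical model: estimated torque = sum_i gain_i * activation_i(t).
        static double tauEst(double[] gain, double[] act) {
            double tau = 0;
            for (int i = 0; i < gain.length; i++) tau += gain[i] * act[i];
            return tau;
        }

        public static void main(String[] args) {
            double[][] act = {{0.2, 0.7}, {0.5, 0.4}, {0.9, 0.1}};  // activations per sample
            double[] tauMeasured = {1.1, 1.4, 1.9};                 // from inverse dynamics (made up)
            double[] gain = {1.0, 1.0};                             // parameters to calibrate
            double lo = 0.0, hi = 5.0;                              // assumed physiological bounds
            double eta = 0.1;

            for (int it = 0; it < 2000; it++) {
                double[] grad = new double[gain.length];
                for (int s = 0; s < act.length; s++) {
                    double err = tauEst(gain, act[s]) - tauMeasured[s];
                    for (int i = 0; i < gain.length; i++) grad[i] += err * act[s][i];
                }
                for (int i = 0; i < gain.length; i++) {
                    gain[i] -= eta * grad[i];
                    gain[i] = Math.min(hi, Math.max(lo, gain[i]));  // project onto the bounds
                }
            }
            System.out.printf("calibrated gains: %.3f, %.3f%n", gain[0], gain[1]);
        }
    }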
Abstract:
Penalty and Barrier methods are commonly used to solve constrained Nonlinear Optimization Problems. These problems appear in areas such as engineering and are often characterised by the fact that the functions involved (objective and constraints) are non-smooth and/or their derivatives are not known, which means that optimization methods based on derivatives cannot be used. A Java-based API was implemented, including only derivative-free optimization methods, to solve both constrained and unconstrained problems; it includes Penalty and Barrier methods. In this work a new penalty function, based on Fuzzy Logic, is presented. This function imposes a progressive penalization on solutions that violate the constraints: a light penalization when the constraint violation is low and a heavy penalization when the violation is high. The value of the penalization is not known beforehand; it is the outcome of a fuzzy inference engine. Numerical results comparing the proposed function with two of the classic penalty/barrier functions are presented. From the presented results one can conclude that the proposed penalty function, besides being very robust, also exhibits very good performance.
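The following sketch conveys the idea of a progressive, fuzzy-driven penalty; the membership functions, rule outputs and defuzzification below are invented for illustration and are not the fuzzy inference engine of the paper.

    // Hedged sketch: the constraint violation is fuzzified into "low", "medium"
    // and "high" memberships, and a Sugeno-style weighted average of the rule
    // outputs gives the penalty factor applied to the violation.
    public class FuzzyPenaltySketch {
        static double low(double v)    { return clamp(1 - v / 0.5); }
        static double medium(double v) { return clamp(1 - Math.abs(v - 0.5) / 0.5); }
        static double high(double v)   { return clamp((v - 0.5) / 0.5); }
        static double clamp(double x)  { return Math.max(0, Math.min(1, x)); }

        // Rule outputs (assumed): low -> factor 1, medium -> 10, high -> 100.
        static double penaltyFactor(double violation) {
            double l = low(violation), m = medium(violation), h = high(violation);
            return (1 * l + 10 * m + 100 * h) / (l + m + h);
        }

        // Penalised objective: f(x) plus the fuzzy-scaled total violation.
        static double penalised(double fx, double totalViolation) {
            return fx + penaltyFactor(totalViolation) * totalViolation;
        }

        public static void main(String[] args) {
            for (double v : new double[] {0.05, 0.3, 0.6, 1.5})
                System.out.printf("violation %.2f -> factor %.1f, penalised objective %.2f%n",
                        v, penaltyFactor(v), penalised(1.0, v));
        }
    }

With these made-up rules a violation of 0.05 is scaled by a factor of about 2 while a violation of 1.5 is scaled by 100, which is the "light penalization for low violation, heavy penalization for high violation" behaviour described above.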
Abstract:
Direct Search Optimization methods are needed to solve optimization problems in which the objective and/or constraint functions may be non-differentiable or non-convex, or whose analytical expressions cannot be determined, either because of their complexity or because of their cost (monetary, computational, time,...). Many optimization problems in engineering and other fields have these characteristics, because function values can result from experimental or simulation processes, can be modelled by functions with complex expressions or by noisy functions, and it is impossible or very difficult to calculate their derivatives. Direct Search Optimization methods use only function values and do not need derivatives or approximations of them. In this work we present a Java API that includes several derivative-free methods and algorithms for solving constrained and unconstrained optimization problems. Traditional access to the API, by installing it on the developer's and/or user's computer, and remote access through Web Services are both presented. Remote access to the API has the advantage of always providing the latest version of the API. For users who simply want a tool to solve Nonlinear Optimization Problems and do not want to integrate these methods into applications, two applications were also developed: a standalone Java application and a Web-based application, both using the developed API.
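As an example of the kind of derivative-free method such an API exposes, here is a compact compass (coordinate) search that relies on objective values only; the ObjectiveFunction interface and the solver are illustrative placeholders, not the actual classes of the API described above.

    // Hedged sketch of a derivative-free compass search.
    public class CompassSearchSketch {
        interface ObjectiveFunction { double value(double[] x); }

        static double[] minimise(ObjectiveFunction f, double[] x0, double step, double tol) {
            double[] x = x0.clone();
            double fx = f.value(x);
            while (step > tol) {
                boolean improved = false;
                for (int i = 0; i < x.length && !improved; i++) {
                    for (double s : new double[] {step, -step}) {   // try +/- along axis i
                        double old = x[i];
                        x[i] = old + s;
                        double fTrial = f.value(x);
                        if (fTrial < fx) { fx = fTrial; improved = true; break; }
                        x[i] = old;                                 // reject the trial point
                    }
                }
                if (!improved) step *= 0.5;                         // shrink the pattern
            }
            return x;
        }

        public static void main(String[] args) {
            // Simple smooth test function with minimiser (3, -1); no derivatives used.
            ObjectiveFunction f = x -> (x[0] - 3) * (x[0] - 3) + (x[1] + 1) * (x[1] + 1);
            double[] sol = minimise(f, new double[] {0, 0}, 0.5, 1e-6);
            System.out.printf("x = (%.4f, %.4f)%n", sol[0], sol[1]);
        }
    }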
Abstract:
Finding the optimal value of a problem is common in many areas of knowledge, and in many cases it requires solving Nonlinear Optimization Problems. For some of those problems it is not possible to determine the expressions of the objective function and/or the constraints: they are the result of experimental procedures, may be non-smooth, among other reasons. To solve such problems, an API was implemented containing methods to solve both constrained and unconstrained problems. This API was developed to be used either locally, on the computer where the application is being executed, or remotely on a server. To obtain maximum flexibility from both the programmers' and the users' points of view, problems can be defined as a Java class (because this API was developed in Java) or as simple text input that is sent to the API. To make the latter possible, an expression evaluator was also implemented in the API. One drawback of this expression evaluator is that it is slower than native Java code. In this paper a solution that combines both options is presented: the problem can be expressed at run-time as a string of characters that is converted to Java code, compiled and loaded dynamically. To widen the target audience of the API, this new expression evaluator is also compatible with the AMPL format.
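A minimal sketch of that compile-and-load mechanism, using only standard JDK facilities (javax.tools.JavaCompiler and URLClassLoader), is shown below. The class name UserObjective, the expression and the file layout are illustrative; the actual API's code generation, AMPL handling and caching are not reproduced.

    import java.io.File;
    import java.io.IOException;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import javax.tools.JavaCompiler;
    import javax.tools.ToolProvider;

    // Hedged sketch: turn a user expression into compiled Java at run time.
    public class RuntimeCompileSketch {
        public static void main(String[] args) throws Exception {
            String expression = "x[0]*x[0] + Math.sin(x[1])";      // user-supplied objective
            String source =
                    "public class UserObjective {\n" +
                    "    public static double value(double[] x) {\n" +
                    "        return " + expression + ";\n" +
                    "    }\n" +
                    "}\n";

            Path dir = Files.createTempDirectory("objsrc");
            Path file = dir.resolve("UserObjective.java");
            Files.write(file, source.getBytes());

            JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();   // requires a JDK
            if (compiler == null) throw new IllegalStateException("JDK compiler not available");
            int status = compiler.run(null, null, null, file.toString());
            if (status != 0) throw new IOException("compilation failed");

            try (URLClassLoader loader =
                         URLClassLoader.newInstance(new URL[] {dir.toUri().toURL()})) {
                Class<?> cls = loader.loadClass("UserObjective");
                double value = (Double) cls.getMethod("value", double[].class)
                                           .invoke(null, (Object) new double[] {2.0, 0.0});
                System.out.println("f(2, 0) = " + value);           // prints 4.0
            }
        }
    }

Because the generated class is ordinary compiled bytecode, evaluating the expression afterwards runs at native Java speed, which is the advantage over the interpreted expression evaluator mentioned above.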
Abstract:
This paper presents an efficient approach based on a recurrent neural network for solving nonlinear optimization problems. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points that represent an optimal feasible solution. The main advantage of the developed network is that it treats the optimization and constraint terms in separate stages, without interference between them. Moreover, the proposed approach does not require specification of penalty or weighting parameters for its initialization. A study of the modified Hopfield model is also developed to analyze its stability and convergence. Simulation results are provided to demonstrate the performance of the proposed neural network.
Abstract:
A neural model for solving nonlinear optimization problems is presented in this paper. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points that represent an optimal feasible solution. The network is shown to be completely stable and globally convergent to the solutions of nonlinear optimization problems. A study of the modified Hopfield model is also developed to analyze its stability and convergence. Simulation results are presented to validate the developed methodology.
Abstract:
Ivan Ginchev - The class of functions that are ℓ-stable at a point, defined in [2] and extending the class of C1,1 functions, is generalized from scalar to vector functions. Some properties of ℓ-stable vector functions are proved. It is shown that constrained vector optimization problems admit second-order conditions expressed in terms of directional derivatives, which generalizes results from [2] and [5].
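For orientation only, here is a LaTeX sketch of the scalar notion being generalized, as it is commonly stated in the C1,1 literature; the exact formulation used in [2] and in this work may differ, so the display below is an assumption rather than a quotation.

    % Lower Dini directional derivative of f at x in direction u (assumed definition):
    f^{\ell}(x; u) = \liminf_{t \downarrow 0} \frac{f(x + t u) - f(x)}{t}.
    % f is said to be l-stable at \bar{x} if this derivative is Lipschitz-like in x:
    \exists\, K > 0,\ \delta > 0:\quad
      \bigl| f^{\ell}(x; u) - f^{\ell}(\bar{x}; u) \bigr| \le K \,\| x - \bar{x} \|
      \quad \forall\, x \in B(\bar{x}, \delta),\ \forall\, \| u \| = 1 .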