995 results for Nonlinear problems
Abstract:
A modified formula for the integral transform of a nonlinear function is proposed for a class of nonlinear boundary value problems. The technique presented in this paper results in analytical solutions. Iterations and an initial guess, which are needed in other techniques, are not required in this technique. The analytical solutions are found to agree surprisingly well with the numerically exact solutions for two examples: a power-law reaction and a Langmuir-Hinshelwood reaction in a catalyst pellet.
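As a numerical point of reference for the power-law example, the sketch below assumes the standard slab catalyst-pellet model y'' = φ²·yⁿ with y'(0) = 0 and y(1) = 1 and solves it with a generic boundary value solver; the equation form and parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal numerical reference for a power-law reaction in a slab catalyst pellet,
# assuming the model y'' = phi^2 * y**n, y'(0) = 0, y(1) = 1
# (form and parameters are illustrative, not from the paper).
import numpy as np
from scipy.integrate import solve_bvp

phi, n = 1.0, 2.0  # Thiele modulus and reaction order (illustrative values)

def ode(x, y):
    # y[0] = dimensionless concentration, y[1] = its derivative
    return np.vstack([y[1], phi**2 * y[0]**n])

def bc(ya, yb):
    # symmetry at the pellet centre, fixed surface concentration
    return np.array([ya[1], yb[0] - 1.0])

x = np.linspace(0.0, 1.0, 51)
y_guess = np.vstack([np.ones_like(x), np.zeros_like(x)])
sol = solve_bvp(ode, bc, x, y_guess)
# sol.sol(x)[0] gives the "numerically exact" concentration profile against
# which an analytical approximation could be compared.
```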
Abstract:
We study the continuous problem y'' = f(x, y, y'), x ∈ [0,1], 0 = G((y(0), y(1)), (y'(0), y'(1))), and its discrete approximation (y(k+1) - 2y(k) + y(k-1))/h² = f(t(k), y(k), v(k)), k = 1, ..., n-1, 0 = G((y(0), y(n)), (v(1), v(n))), where f and G = (g(0), g(1)) are continuous and fully nonlinear, h = 1/n, v(k) = (y(k) - y(k-1))/h for k = 1, ..., n, and t(k) = kh for k = 0, ..., n. We assume there exist strict lower and strict upper solutions and impose additional conditions on f and G which are known to yield a priori bounds on, and to guarantee the existence of, solutions of the continuous problem. We show that the discrete approximation also has solutions, which approximate solutions of the continuous problem and, when the continuous solution is unique, converge to it as the grid size goes to 0. Homotopy methods can be used to compute the solution of the discrete approximation. Our results were motivated by those of Gaines.
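A minimal sketch of the discrete scheme, assuming a concrete right-hand side f and separated boundary conditions (both chosen only for illustration) and solving the resulting nonlinear system with a generic root finder rather than the homotopy methods mentioned above:

```python
# Discrete approximation (y_{k+1} - 2 y_k + y_{k-1})/h^2 = f(t_k, y_k, v_k),
# v_k = (y_k - y_{k-1})/h, with boundary equations 0 = G((y_0, y_n), (v_1, v_n)).
# The choices of f and G below are illustrative, not the paper's.
import numpy as np
from scipy.optimize import fsolve

n = 50
h = 1.0 / n
t = np.arange(n + 1) * h  # t_k = k*h, k = 0, ..., n

def f(x, y, v):
    return -np.sin(y) + 0.5 * v          # example right-hand side

def G(y_ends, v_ends):
    (y0, yn), (v1, vn) = y_ends, v_ends
    return np.array([y0, yn - 1.0])      # example: y(0) = 0, y(1) = 1

def residual(y):
    v = (y[1:] - y[:-1]) / h             # v[k-1] = v_k for k = 1, ..., n
    r = np.empty(n + 1)
    # interior difference equations for k = 1, ..., n-1
    r[1:n] = (y[2:] - 2.0 * y[1:n] + y[:n-1]) / h**2 - f(t[1:n], y[1:n], v[:n-1])
    # boundary equations 0 = G((y_0, y_n), (v_1, v_n))
    r[[0, n]] = G((y[0], y[n]), (v[0], v[n-1]))
    return r

y = fsolve(residual, np.linspace(0.0, 1.0, n + 1))
```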
Abstract:
In real optimization problems, the analytical expression of the objective function is usually not known, nor are its derivatives, or they are too complex to use. In these cases it becomes essential to use optimization methods in which the calculation of derivatives, or the verification of their existence, is not necessary: Direct Search Methods, or Derivative-free Methods, are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, the choice of the penalty parameters is frequently very difficult, because most strategies for choosing them are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a biobjective problem, in which a step is accepted if it reduces either the objective function or the constraint violation. This makes filter methods less parameter-dependent than penalty functions. In this work, we present a new direct search method for general constrained optimization that combines the features of simplex methods and filter methods. The method does not compute or approximate any derivatives, penalty constants or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use it to drive the search. We illustrate the behavior of the algorithm through some examples. The proposed method was implemented in Java.
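A minimal sketch of the filter acceptance logic described above, assuming inequality constraints g_i(x) ≤ 0 and representing each filter entry as an (objective, violation) pair; the names and details are illustrative, not the paper's implementation:

```python
# Filter mechanism sketch: a trial point is accepted if it improves either the
# objective or the aggregated constraint violation relative to every entry
# already in the filter (illustrative, not the paper's code).
def constraint_violation(g_values):
    # aggregate violation of inequality constraints g_i(x) <= 0
    return sum(max(0.0, g) for g in g_values)

def acceptable(candidate, filter_points):
    """candidate and filter entries are (objective, violation) pairs."""
    f_c, h_c = candidate
    return all(f_c < f_k or h_c < h_k for (f_k, h_k) in filter_points)

def add_to_filter(candidate, filter_points):
    # keep only non-dominated entries after inserting the accepted candidate
    f_c, h_c = candidate
    kept = [(f, h) for (f, h) in filter_points if f < f_c or h < h_c]
    kept.append(candidate)
    return kept
```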
Abstract:
Nonlinear Optimization Problems are common in many engineering fields. Due to their characteristics, the objective function of some problems might not be differentiable, or its derivatives may have complex expressions. There are even cases where an analytical expression of the objective function cannot be determined, either due to its complexity or its cost (monetary, computational, time, ...). In these cases Nonlinear Optimization methods must be used. An API including several methods and algorithms to solve constrained and unconstrained optimization problems was implemented. This API can be accessed not only in the traditional way, by installing it on the developer's and/or user's computer, but also remotely using Web Services. As long as there is a network connection to the server where the API is installed, applications always access the latest version of the API. A Web-based application using the proposed API was also developed, intended for users who do not want to integrate the methods into their own applications and simply want a tool to solve Nonlinear Optimization Problems.
Abstract:
This work presents a systematic procedure for constructing the solution of a large class of nonlinear conduction heat transfer problems through the minimization of quadratic functionals like the ones usually employed for linear descriptions. The proposed procedure gives rise to an efficient and straightforward way to carry out numerical simulations of nonlinear heat transfer problems by means of finite elements. To illustrate the procedure, a particular problem is simulated using a finite element approximation.
Abstract:
The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
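A minimal Gauss–Newton sketch for min_x ½‖r(x)‖², assuming the residual r and its Jacobian J are available in closed form; the inner linear least squares problem is solved here by a direct method, whereas a truncated variant would solve it only approximately with an iterative solver. The function names and the small curve-fitting test are illustrative, not the meteorological application of the paper.

```python
# Gauss-Newton iteration: at each step solve the inner linear least squares
# problem min_s ||J(x) s + r(x)|| and update x <- x + s.
import numpy as np

def gauss_newton(r, J, x0, tol=1e-8, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        res = r(x)
        jac = J(x)
        # inner problem solved exactly by a direct method; a "truncated" method
        # would stop an iterative inner solver (e.g. LSQR) early instead
        s, *_ = np.linalg.lstsq(jac, -res, rcond=None)
        x = x + s
        if np.linalg.norm(s) < tol * (1.0 + np.linalg.norm(x)):
            break
    return x

# usage on a small illustrative problem: fit y = a*exp(b*t) to data
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
r = lambda p: p[0] * np.exp(p[1] * t) - y
J = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
p_hat = gauss_newton(r, J, x0=[1.0, -1.0])
```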
Abstract:
A new spectral method for solving initial boundary value problems for linear and integrable nonlinear partial differential equations in two independent variables is applied to the nonlinear Schrödinger equation and to its linearized version in the domain {x≥l(t), t≥0}. We show that there exist two cases: (a) if l″(t)<0, then the solution of the linear or nonlinear equations can be obtained by solving the respective scalar or matrix Riemann-Hilbert problem, which is defined on a time-dependent contour; (b) if l″(t)>0, then the Riemann-Hilbert problem is replaced by a respective scalar or matrix problem on a time-independent domain. In both cases, the solution is expressed in a spectrally decomposed form.
Abstract:
This paper illustrates how nonlinear programming and simulation tools, which are available in packages such as MATLAB and SIMULINK, can easily be used to solve optimal control problems with state- and/or input-dependent inequality constraints. The method presented is illustrated with a model of a single-link manipulator. The method is suitable to be taught to advanced undergraduate and Master's level students in control engineering.
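In the same spirit, the sketch below transcribes a small optimal control problem into a nonlinear program and hands it to a general-purpose NLP solver (scipy.optimize.minimize with SLSQP, standing in for the MATLAB/SIMULINK tooling of the paper); the double-integrator model, horizon, and input bounds are illustrative assumptions, not the single-link manipulator example.

```python
# Direct transcription of a toy optimal control problem: choose piecewise
# constant controls u_0..u_{N-1} minimizing control effort, subject to input
# bounds and a terminal state constraint enforced through simulation.
import numpy as np
from scipy.optimize import minimize

N, h = 20, 0.1                      # control intervals and step size
x_target = np.array([1.0, 0.0])     # desired terminal (position, velocity)

def simulate(u):
    # forward Euler rollout of a double integrator: pos' = vel, vel' = u
    x = np.zeros(2)
    for uk in u:
        x = x + h * np.array([x[1], uk])
    return x

objective = lambda u: h * np.sum(u**2)                          # control effort
terminal = {'type': 'eq', 'fun': lambda u: simulate(u) - x_target}
bounds = [(-2.0, 2.0)] * N                                      # |u_k| <= 2

res = minimize(objective, np.zeros(N), method='SLSQP',
               bounds=bounds, constraints=[terminal])
```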
Abstract:
We present a Galerkin method with piecewise polynomial continuous elements for fully nonlinear elliptic equations. A key tool is the discretization proposed in Lakkis and Pryer, 2011, allowing us to work directly on the strong form of a linear PDE. An added benefit to making use of this discretization method is that a recovered (finite element) Hessian is a byproduct of the solution process. We build on the linear method and ultimately construct two different methodologies for the solution of second order fully nonlinear PDEs. Benchmark numerical results illustrate the convergence properties of the scheme for some test problems as well as the Monge–Ampère equation and the Pucci equation.
Abstract:
In this review I summarise some of the most significant advances of the last decade in the analysis and solution of boundary value problems for integrable partial differential equations in two independent variables. These equations arise widely in mathematical physics, and in order to model realistic applications it is essential to consider bounded domains and inhomogeneous boundary conditions. I focus specifically on a general and widely applicable approach, usually referred to as the Unified Transform or Fokas Transform, that provides a substantial generalisation of the classical Inverse Scattering Transform. This approach preserves the conceptual efficiency and aesthetic appeal of the more classical transform approaches, but presents a distinctive and important difference: while the Inverse Scattering Transform follows the "separation of variables" philosophy, albeit in a nonlinear setting, the Unified Transform is based on the idea of synthesis, rather than separation, of variables. I will outline the main ideas in the case of linear evolution equations, and then illustrate their generalisation to certain nonlinear cases of particular significance.
Exact penalties for variational inequalities with applications to nonlinear complementarity problems
Abstract:
In this paper, we present a new reformulation of the KKT system associated to a variational inequality as a semismooth equation. The reformulation is derived from the concept of differentiable exact penalties for nonlinear programming. The best theoretical results are presented for nonlinear complementarity problems, where simple, verifiable, conditions ensure that the penalty is exact. We close the paper with some preliminary computational tests on the use of a semismooth Newton method to solve the equation derived from the new reformulation. We also compare its performance with the Newton method applied to classical reformulations based on the Fischer-Burmeister function and on the minimum. The new reformulation combines the best features of the classical ones, being as easy to solve as the reformulation that uses the Fischer-Burmeister function while requiring as few Newton steps as the one that is based on the minimum.
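A minimal sketch of the two classical reformulations referred to above for the nonlinear complementarity problem 0 ≤ x ⟂ F(x) ≥ 0, based on the Fischer-Burmeister function φ(a, b) = √(a² + b²) − a − b and on the componentwise minimum; the affine map F is an illustrative choice, and a generic nonlinear solver stands in for the semismooth Newton method of the paper.

```python
# Two classical NCP reformulations as (nonsmooth) equations: x solves the NCP
# exactly when either reformulated system equals zero componentwise.
import numpy as np
from scipy.optimize import root

def F(x):
    # example affine mapping (an assumption chosen only for illustration)
    M = np.array([[4.0, 1.0], [1.0, 3.0]])
    q = np.array([-1.0, -2.0])
    return M @ x + q

def fischer_burmeister(x):
    f = F(x)
    return np.sqrt(x**2 + f**2) - x - f   # phi(a, b) = sqrt(a^2 + b^2) - a - b

def min_reformulation(x):
    return np.minimum(x, F(x))            # phi(a, b) = min(a, b)

x_fb  = root(fischer_burmeister, np.ones(2)).x
x_min = root(min_reformulation, np.ones(2)).x
```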
Abstract:
A neural network model for solving constrained nonlinear optimization problems with bounded variables is presented in this paper. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points. The network is shown to be completely stable and globally convergent to the solutions of constrained nonlinear optimization problems. A fuzzy logic controller is incorporated in the network to minimize convergence time. Simulation results are presented to validate the proposed approach.
Design and analysis of an efficient neural network model for solving nonlinear optimization problems
Abstract:
This paper presents an efficient approach based on a recurrent neural network for solving constrained nonlinear optimization problems. More specifically, a modified Hopfield network is developed, and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points that represent an optimal feasible solution. The main advantage of the developed network is that it handles optimization and constraint terms in different stages with no interference between them. Moreover, the proposed approach does not require the specification of penalty and weighting parameters for its initialization. A study of the modified Hopfield model is also developed to analyse its stability and convergence. Simulation results are provided to demonstrate the performance of the proposed neural network.