41 results for Optimal control problem

in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)


Relevance: 100.00%

Abstract:

In this article, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noise under three kinds of performance criteria related to the final value of the expectation and variance of the output. In the first problem the goal is to minimise the final variance of the output subject to a restriction on its final expectation; in the second, to maximise the final expectation of the output subject to a restriction on its final variance; and in the third, the performance criterion is a linear combination of the final variance and expectation of the output of the system. We present explicit sufficient conditions for the existence of an optimal control strategy for these problems, generalising previous results in the literature. We conclude the article by presenting a numerical example of an asset-liability management model for pension funds with regime switching.
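The trade-off between the final expectation and the final variance that the three criteria weigh against each other can be probed numerically. The sketch below is a minimal Monte Carlo stand-in with a hypothetical scalar two-mode system, mode-dependent gains and illustrative coefficients, none of it taken from the paper:

```python
import random

def simulate_final_output(K, T=20, n_paths=4000, seed=0):
    """Monte Carlo estimate of the mean and variance of the final output of a
    scalar Markov jump linear system with multiplicative noise:
        x_{k+1} = (a[theta_k] + b[theta_k] * K[theta_k]) * x_k * (1 + sigma * w_k),
    where theta_k is a two-state Markov chain.  All parameter values are
    hypothetical, chosen only to illustrate the expectation/variance trade-off."""
    rng = random.Random(seed)
    a = [1.02, 0.95]          # mode-dependent dynamics (hypothetical)
    b = [0.10, 0.10]          # mode-dependent input coefficients
    sigma = 0.05              # multiplicative-noise intensity
    p_stay = 0.9              # probability of remaining in the current mode
    finals = []
    for _ in range(n_paths):
        x, mode = 1.0, 0
        for _ in range(T):
            w = rng.gauss(0.0, 1.0)
            x = (a[mode] + b[mode] * K[mode]) * x * (1.0 + sigma * w)
            if rng.random() > p_stay:
                mode = 1 - mode
        finals.append(x)
    mean = sum(finals) / n_paths
    var = sum((f - mean) ** 2 for f in finals) / n_paths
    return mean, var
```

Comparing two gain vectors with this routine shows the basic tension the three problem formulations manage: a gain that raises the final expectation typically raises the final variance as well.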

Relevance: 100.00%

Abstract:

In this paper, we devise a separation principle for the finite horizon quadratic optimal control problem of continuous-time Markovian jump linear systems driven by a Wiener process and with partial observations. We assume that the output variable and the jump parameters are available to the controller. It is desired to design a dynamic Markovian jump controller such that the closed loop system minimizes the quadratic functional cost of the system over a finite horizon period of time. As in the case with no jumps, we show that an optimal controller can be obtained from two coupled Riccati differential equations, one associated with the optimal control problem when the state variable is available, and the other with the optimal filtering problem. This is a separation principle for the finite horizon quadratic optimal control problem for continuous-time Markovian jump linear systems. For the case in which the matrices are all time-invariant, we analyze the asymptotic convergence of the solution of the derived interconnected Riccati differential equations to the solution of the associated set of coupled algebraic Riccati equations, as well as the mean square stabilizing property of this limiting solution. When there is only one mode of operation our results coincide with the traditional ones for the LQG control of continuous-time linear systems.
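Coupled Riccati differential equations of this kind are integrated backwards from the terminal time. A minimal sketch for a hypothetical scalar two-mode system with illustrative coefficients (a backward Euler sweep, not the paper's derivation):

```python
def coupled_riccati(a, b, q, r, lam, T=5.0, steps=5000):
    """Backward Euler sweep of scalar coupled Riccati ODEs for a two-mode
    Markovian jump linear system over a finite horizon, terminal cost zero:
        -dP_i/dt = 2*a_i*P_i - (b_i*P_i)**2 / r + q_i + sum_j lam[i][j] * P_j.
    lam is the transition rate matrix of the jump chain (rows sum to zero).
    All parameter values used below are illustrative, not from the paper."""
    dt = T / steps
    P = [0.0, 0.0]                       # terminal condition P_i(T) = 0
    for _ in range(steps):
        coupling = [lam[i][0] * P[0] + lam[i][1] * P[1] for i in range(2)]
        dP = [2 * a[i] * P[i] - (b[i] * P[i]) ** 2 / r + q[i] + coupling[i]
              for i in range(2)]
        P = [P[i] + dt * dP[i] for i in range(2)]   # step backwards in time
    return P
```

As the horizon grows, the backward sweep settles near the solution of the associated coupled algebraic Riccati equations, which is the asymptotic behavior the abstract refers to.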

Relevance: 100.00%

Abstract:

In this paper we consider the existence of the maximal and mean square stabilizing solutions for a set of generalized coupled algebraic Riccati equations (GCARE for short) associated with the infinite-horizon stochastic optimal control problem of discrete-time Markov jump linear systems with multiplicative noise. The weighting matrices of the state and control for the quadratic part are allowed to be indefinite. We present a sufficient condition, based only on some positive semi-definite and kernel restrictions on some matrices, under which the maximal solution exists, and a necessary and sufficient condition under which the mean square stabilizing solution exists for the GCARE. We also present a solution for the discounted and long run average cost problems when the performance criterion is assumed to be composed of a linear combination of an indefinite quadratic part and a linear part in the state and control variables. The paper is concluded with a numerical example for a pension fund with regime switching.
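Solutions of such coupled algebraic Riccati equations are often computed by a fixed-point sweep of the coupled Riccati maps. A sketch for a hypothetical scalar two-mode system with positive semi-definite weights (the indefinite-weight case treated in the paper needs the extra conditions the abstract describes):

```python
def solve_gcare(a, b, q, r, prob, iters=500):
    """Fixed-point iteration for scalar coupled algebraic Riccati equations of
    a two-mode discrete-time Markov jump linear system:
        P_i = q_i + a_i**2 * E_i(P) - (a_i * b_i * E_i(P))**2 / (r + b_i**2 * E_i(P)),
    with the coupling operator E_i(P) = sum_j prob[i][j] * P_j, where prob is
    the transition probability matrix of the jump chain.  Illustrative values
    only; this sketch assumes q_i >= 0 and r > 0."""
    P = [0.0, 0.0]
    for _ in range(iters):
        E = [prob[i][0] * P[0] + prob[i][1] * P[1] for i in range(2)]
        P = [q[i] + a[i] ** 2 * E[i]
             - (a[i] * b[i] * E[i]) ** 2 / (r + b[i] ** 2 * E[i])
             for i in range(2)]
    return P
```

Starting from zero, the iterates increase monotonically toward the maximal solution whenever the system is mean square stabilizable.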

Relevance: 100.00%

Abstract:

This paper deals with the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. The control variable acts on the jump rate and transition measure of the PDMP, and the running and boundary costs are assumed to be positive but not necessarily bounded. Our first main result is to obtain an optimality equation for the long run average cost in terms of a discrete-time optimality equation related to the embedded Markov chain given by the postjump location of the PDMP. Our second main result guarantees the existence of a feedback measurable selector for the discrete-time optimality equation by establishing a connection between this equation and an integro-differential equation. Our final main result is to obtain some sufficient conditions for the existence of a solution for a discrete-time optimality inequality and an ordinary optimal feedback control for the long run average cost using the so-called vanishing discount approach. Two examples are presented illustrating the possible applications of the results developed in the paper.

Relevance: 100.00%

Abstract:

This paper studies a simplified methodology to integrate the real time optimization (RTO) of a continuous system into the model predictive controller in the one-layer strategy. The gradient of the economic objective function is included in the cost function of the controller. Optimal conditions of the process at steady state are searched through the use of a rigorous non-linear process model, while the trajectory to be followed is predicted with the use of a linear dynamic model, obtained through a plant step test. The main advantage of the proposed strategy is that the resulting control/optimization problem can still be solved with a quadratic programming routine at each sampling step. Simulation results show that the proposed approach is comparable to the strategy that solves the full economic optimization problem inside the MPC controller, where the resulting control problem becomes a non-linear programming problem with a much higher computational load. (C) 2010 Elsevier Ltd. All rights reserved.
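The key point is that adding the economic gradient keeps the controller a QP: the gradient only shifts the linear term of the cost. A minimal unconstrained sketch with hypothetical 2x2 data, solved in closed form:

```python
def one_layer_mpc_move(H, g_track, grad_econ, w_econ=0.1):
    """Sketch of the one-layer RTO/MPC idea: the gradient of an economic
    objective is added to the linear term of the quadratic MPC cost,
        J(du) = 0.5 * du'H du + (g_track + w_econ * grad_econ)' du,
    so the problem remains a QP.  Here the unconstrained 2x2 case is solved
    in closed form via Cramer's rule; H, g_track, grad_econ and w_econ are
    all hypothetical placeholders, not quantities from the paper."""
    g = [g_track[i] + w_econ * grad_econ[i] for i in range(2)]
    # Solve H du = -g for the optimal input move du.
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    du0 = (-g[0] * H[1][1] + g[1] * H[0][1]) / det
    du1 = (-g[1] * H[0][0] + g[0] * H[1][0]) / det
    return [du0, du1]
```

With w_econ = 0 this is plain MPC tracking; raising w_econ tilts the same QP toward the economic optimum without changing the problem class.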

Relevance: 100.00%

Abstract:

This paper concerns the development of a stable model predictive controller (MPC) to be integrated with real time optimization (RTO) in the control structure of a process system with stable and integrating outputs. The real time process optimizer produces optimal targets for the system inputs and outputs that should be dynamically implemented by the MPC controller. This paper is based on a previous work (Comput. Chem. Eng. 2005, 29, 1089) where a nominally stable MPC was proposed for systems with the conventional control approach where only the outputs have set points. This work is also based on the work of Gonzalez et al. (J. Process Control 2009, 19, 110) where the zone control of stable systems is studied. The new controller is obtained by defining an extended control objective that includes input targets and zone control of the outputs. Additional decision variables are also defined to increase the set of feasible solutions to the control problem. The hard constraints resulting from the cancellation of the integrating modes at the end of the control horizon are softened, and the resulting control problem is made feasible for a large class of unknown disturbances and changes of the optimizing targets. The methods are illustrated with the simulated application of the proposed approaches to a distillation column of the oil refining industry.

Relevance: 100.00%

Abstract:

Model predictive control (MPC) is usually implemented as a control strategy where the system outputs are controlled within specified zones instead of at fixed set points. One strategy to implement zone control is through the selection of different weights for the output error in the control cost function. A disadvantage of this approach is that closed-loop stability cannot be guaranteed, as a different linear controller may be activated at each time step. A way to implement a stable zone control is through the use of an infinite horizon cost in which the set point is an additional variable of the control problem. In this case, the set point is restricted to remain inside the output zone and an appropriate output slack variable is included in the optimisation problem to assure the recursive feasibility of the control optimisation problem. Following this approach, a robust MPC is developed for the case of multi-model uncertainty of open-loop stable systems. The controller is designed to maintain the outputs within their corresponding feasible zones while reaching the desired optimal input target. Simulation of a process of the oil refining industry illustrates the performance of the proposed strategy.
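For a scalar output with a quadratic tracking cost, letting the set point be a decision variable confined to the zone reduces, in the unconstrained case, to clipping the predicted output to the zone. A deliberately simple sketch of that idea:

```python
def zone_target(y_pred, y_min, y_max):
    """Zone-control sketch: instead of a fixed set point, the set point is a
    decision variable confined to the zone [y_min, y_max].  For a scalar
    output with a quadratic error cost, the optimal set point is the
    predicted output clipped to the zone, so the tracking error vanishes
    whenever the prediction is already inside the zone."""
    return min(max(y_pred, y_min), y_max)
```

The slack variable mentioned in the abstract plays a similar role on top of this: it relaxes the zone itself when no feasible set point exists, preserving recursive feasibility.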

Relevance: 100.00%

Abstract:

The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. In order to do that, we first derive some important properties for a pseudo-Poisson equation associated with the problem. In the sequel, it is shown that the convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
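The PIA alternates policy evaluation (solving a Poisson equation for the average cost and bias) with policy improvement. The sketch below runs it on a hypothetical two-state, two-action MDP, a finite stand-in for the PDMP setting of the paper:

```python
def policy_iteration(P, c):
    """Policy iteration for the long-run average cost of a two-state MDP.
    P[a][s] = (p_s0, p_s1): transition probabilities from state s under
    action a; c[a][s]: running cost.  Policy evaluation solves the Poisson
    equation g + h(s) = c(s, a) + sum_s' P(s'|s, a) h(s') with h(0) = 0,
    which for two states has a closed-form solution.  Toy data only."""
    n_actions = len(P)
    policy = [0, 0]
    g = 0.0
    for _ in range(20):                      # PIA converges in a few sweeps
        a0, a1 = policy
        p01 = P[a0][0][1]
        p11 = P[a1][1][1]
        c0, c1 = c[a0][0], c[a1][1]
        h1 = (c1 - c0) / (1.0 - p11 + p01)   # bias, with h(0) fixed to 0
        g = c0 + p01 * h1                    # average cost of current policy
        h = [0.0, h1]
        # Policy improvement: minimise the right-hand side state by state.
        new_policy = []
        for s in range(2):
            qs = [c[a][s] + P[a][s][0] * h[0] + P[a][s][1] * h[1]
                  for a in range(n_actions)]
            new_policy.append(qs.index(min(qs)))
        if new_policy == policy:
            break
        policy = new_policy
    return policy, g
```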

Relevance: 100.00%

Abstract:

In this technical note we consider the mean-variance hedging problem of a jump diffusion continuous state space financial model with the re-balancing strategies for the hedging portfolio taken at discrete times, a situation that more closely reflects real market conditions. A direct expression based on some change of measures, not depending on any recursions, is derived for the optimal hedging strategy as well as for the "fair hedging price" considering any given payoff. For the case of a European call option these expressions can be evaluated in closed form.
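For orientation only: the familiar closed-form benchmark for a European call is the Black-Scholes formula under continuous re-balancing. The sketch below implements that classical formula, not the paper's jump-diffusion, discrete-rebalancing expression:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Classical Black-Scholes closed form for a European call with spot S,
    strike K, maturity T, risk-free rate r and volatility sigma.  This is
    the continuous-hedging benchmark, not the paper's formula; the paper's
    'fair hedging price' accounts for jumps and discrete re-balancing."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```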

Relevance: 100.00%

Abstract:

This work is concerned with the existence of an optimal control strategy for the long-run average continuous control problem of piecewise-deterministic Markov processes (PDMPs). In Costa and Dufour (2008), sufficient conditions were derived to ensure the existence of an optimal control by using the vanishing discount approach. These conditions were mainly expressed in terms of the relative difference of the alpha-discount value functions. The main goal of this paper is to derive tractable conditions directly related to the primitive data of the PDMP to ensure the existence of an optimal control. The present work can be seen as a continuation of the results derived in Costa and Dufour (2008). Our main assumptions are written in terms of some integro-differential inequalities related to the so-called expected growth condition, and geometric convergence of the post-jump location kernel associated to the PDMP. An example based on the capacity expansion problem is presented, illustrating the possible applications of the results developed in the paper.
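In the vanishing discount approach, the alpha-discounted value functions are computed first and the average-cost quantities are recovered in the limit. A hypothetical uncontrolled two-state chain illustrates the two limits involved:

```python
def discounted_value(P, c, alpha, iters=3000):
    """Value iteration for the alpha-discounted cost of an uncontrolled
    two-state Markov chain with transition matrix P and running cost c.
    Sketch of the vanishing discount idea: as alpha -> 1, the scaled value
    (1 - alpha) * V_alpha(s) tends to the long-run average cost, while the
    relative differences V_alpha(s) - V_alpha(0) stay bounded and give the
    relative value function.  Toy chain, not the PDMP of the paper."""
    V = [0.0, 0.0]
    for _ in range(iters):
        V = [c[s] + alpha * (P[s][0] * V[0] + P[s][1] * V[1])
             for s in range(2)]
    return V
```

The sufficient conditions in the paper are exactly what guarantees, in the PDMP setting, that these relative differences remain well behaved as the discount vanishes.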

Relevance: 100.00%

Abstract:

We consider in this paper the optimal stationary dynamic linear filtering problem for continuous-time linear systems subject to Markovian jumps in the parameters (LSMJP) and additive noise (Wiener process). It is assumed that only an output of the system is available, and therefore the values of the jump parameter are not accessible. It is a well known fact that in this setting the optimal nonlinear filter is infinite dimensional, which makes linear filtering a natural, numerically treatable choice. The goal is to design a dynamic linear filter such that the closed loop system is mean square stable and minimizes the stationary expected value of the mean square estimation error. It is shown that an explicit analytical solution to this optimal filtering problem is obtained from the stationary solution associated with a certain Riccati equation. It is also shown that the problem can be formulated using a linear matrix inequalities (LMI) approach, which can be extended to consider convex polytopic uncertainties on the parameters of the possible modes of operation of the system and on the transition rate matrix of the Markov process. As far as the authors are aware, this is the first time that this stationary filtering problem (exact and robust versions) for LSMJP with no knowledge of the Markov jump parameters is considered in the literature. Finally, we illustrate the results with an example.
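For a single mode of operation the stationary filter reduces to the classical steady-state Kalman filter, obtained by iterating the filter Riccati recursion to its fixed point. A scalar sketch with hypothetical coefficients:

```python
def stationary_filter_gain(a, c, q, r, iters=200):
    """Steady-state (stationary) linear filter for the scalar system
        x_{k+1} = a x_k + w_k,   y_k = c x_k + v_k,
    with Var w = q and Var v = r, obtained by iterating the filter Riccati
    recursion for the error covariance p to its fixed point.  With one mode
    of operation this is the classical stationary Kalman filter; the
    Markov-jump case of the paper couples one such equation per mode."""
    p = float(q)
    for _ in range(iters):
        # measurement update followed by time update of the error covariance
        p = a * a * (p - (p * c) ** 2 / (c * c * p + r)) + q
    gain = p * c / (c * c * p + r)
    return gain, p
```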

Relevance: 100.00%

Abstract:

This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space R^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter epsilon > 0) and a slow behavior. By using a similar approach as developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as epsilon goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as epsilon goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
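The aggregation step replaces the fast regimes in a class by a single regime whose parameters are averaged under the quasi-stationary distribution. For a hypothetical fast two-state chain this reduces to a weighted average:

```python
def averaged_parameter(lam01, lam10, theta):
    """Aggregation sketch: a fast two-state Markov chain with jump rates
    lam01 (0 -> 1) and lam10 (1 -> 0) has stationary distribution
        pi = (lam10, lam01) / (lam01 + lam10).
    The two fast regimes are replaced by a single aggregated regime whose
    parameter is the pi-weighted average of the mode-dependent parameters
    theta = (theta0, theta1).  All values are illustrative."""
    total = lam01 + lam10
    pi = (lam10 / total, lam01 / total)
    return pi[0] * theta[0] + pi[1] * theta[1]
```

The convergence result in the paper says, roughly, that controlling the averaged model is asymptotically as good as controlling the fast-switching one as epsilon goes to zero.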

Relevance: 90.00%

Abstract:

This paper develops H(infinity) control designs based on neural networks for fully actuated and underactuated cooperative manipulators. The neural networks proposed in this paper adapt only the uncertain dynamics of the robot manipulators; they work as a complement of the nominal model. The H(infinity) performance index includes the position errors as well as the squeeze force errors between the manipulator end-effectors and the object, which represents a complete disturbance rejection scenario. For the underactuated case, the squeeze force control problem is more difficult to solve due to the loss of some degrees of manipulator actuation. Results obtained from an actual cooperative manipulator, which is able to work as a fully actuated and an underactuated manipulator, are presented. (C) 2008 Elsevier Ltd. All rights reserved.

Relevance: 90.00%

Abstract:

This paper considers two aspects of the nonlinear H(infinity) control problem: the use of weighting functions for performance and robustness improvement, as in the linear case, and the development of a successive Galerkin approximation method for the solution of the Hamilton-Jacobi-Isaacs equation that arises in the output-feedback case. Nonlinear H(infinity) controller designs obtained by the well-established Taylor approximation and by the proposed Galerkin approximation method, applied to a magnetic levitation system, are presented for comparison purposes.
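Galerkin approximation projects the unknown function onto a finite basis and solves for the coefficients from the weak form. As a deliberately simple illustration of that projection (not the paper's successive scheme for the Hamilton-Jacobi-Isaacs equation), a one-term approximation of -u'' = 1 on (0, 1) with homogeneous boundary conditions:

```python
from math import pi, sin, cos

def galerkin_poisson_1d(n_quad=2000):
    """One-term Galerkin approximation of -u'' = 1 on (0, 1) with
    u(0) = u(1) = 0, using the basis function phi(x) = sin(pi*x).
    Weak form: a * int (phi')**2 dx = int phi dx, with both integrals
    evaluated by the midpoint rule.  The exact solution is x(1-x)/2,
    so u(0.5) should come out near 0.125."""
    h = 1.0 / n_quad
    stiffness = sum((pi * cos(pi * (k + 0.5) * h)) ** 2
                    for k in range(n_quad)) * h
    load = sum(sin(pi * (k + 0.5) * h) for k in range(n_quad)) * h
    a = load / stiffness             # single Galerkin coefficient
    return lambda x: a * sin(pi * x)
```

The successive scheme in the paper applies the same projection repeatedly, linearising the HJI equation around the current approximation at each step.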

Relevance: 90.00%

Abstract:

This work considers the open-loop control problem of steering a two-level quantum system from any initial to any final condition. The model of this system evolves on the state space X = SU(2), having two inputs that correspond to the complex amplitude of a resonant laser field. A symmetry preserving flat output is constructed using a fully geometric construction and quaternion computations. Simulation results of this flatness-based open-loop control are provided.
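Flatness-based open-loop control parameterizes state and input by a flat output and its derivatives, so steering reduces to interpolating the flat output. The sketch below does this for the double integrator, a deliberately simple stand-in for the SU(2) system of the paper:

```python
def flat_trajectory(T=1.0, n=5):
    """Flatness-based open-loop planning on the double integrator x'' = u,
    a simple stand-in for the SU(2) system of the paper.  The flat output
    is y = x; the quintic y(t) = 10 s**3 - 15 s**4 + 6 s**5 with s = t/T
    steers rest-to-rest from x = 0 to x = 1, and the open-loop input is
    recovered algebraically as u = y''.  Returns (t, y, u) samples."""
    samples = []
    for k in range(n + 1):
        s = k / n
        y = 10 * s ** 3 - 15 * s ** 4 + 6 * s ** 5
        u = (60 * s - 180 * s ** 2 + 120 * s ** 3) / T ** 2   # u = y''(t)
        samples.append((s * T, y, u))
    return samples
```

The quintic makes position, velocity and input all vanish at both endpoints, the discrete analogue of steering between prescribed initial and final conditions.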