911 results for Discrete-Time Optimal Control


Relevance: 100.00%

Publisher:

Abstract:

We present a model of a fungus population subjected to chemical control, incorporating the fungicide application directly into the model. From it, we obtain an optimal control strategy that minimizes both the fungicide application (cost) and the leaf area damaged by the fungus population over the interval between the moment the disease is detected (t = 0) and the time of harvest (t = t_f). Initially, the parameters of the model are considered constant. We then let the apparent infection rate depend on time (and temperature) and perform simulations to illustrate this case and compare it with the constant one.

A Maximum Principle is derived for a class of optimal control problems arising in midcourse guidance, in which certain controls are represented by measures and the state trajectories are functions of bounded variation. The optimality conditions improve on previous ones by allowing nonsmooth data, measurable time dependence, and a possibly time-varying constraint set for the conventional controls.

This paper deals with a stochastic optimal control problem involving discrete-time jump Markov linear systems. The jumps, or changes between the system's operation modes, evolve according to an underlying Markov chain. In the model studied, the problem horizon is defined by a stopping time τ that represents either the occurrence of a fixed number N of failures or repairs (T_N) or the occurrence of a crucial failure event (τ_Δ), after which the system is brought to a halt for maintenance. An intermediary mixed case, in which τ is the minimum of T_N and τ_Δ, is also considered. These stopping times coincide with some of the jump times of the Markov state, and the available information allows the control action to be reconfigured at each jump time in the form of a linear feedback gain. The solution of the linear quadratic problem with complete Markov state observation is presented, given in terms of recursions of a set of algebraic Riccati equations (AREs) or of coupled algebraic Riccati equations (CAREs).
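
The recursion of coupled algebraic Riccati equations described above can be sketched on a toy scalar two-mode system. This is a minimal illustration with hypothetical numbers, not the paper's formulation:

```python
import numpy as np

# Coupled Riccati recursion for a scalar 2-mode jump Markov linear system
# x_{k+1} = a[i] x_k + b[i] u_k, with mode i driven by a Markov chain.
a = np.array([1.2, 0.8])      # mode-dependent dynamics (mode 0 unstable)
b = np.array([1.0, 1.0])      # mode-dependent input gains
q = np.array([1.0, 1.0])      # state weights
r = np.array([1.0, 1.0])      # control weights
p = np.array([[0.9, 0.1],     # mode transition probabilities p[i][j]
              [0.3, 0.7]])

P = q.copy()                  # terminal condition P_i = Q_i
for _ in range(200):          # backward recursion toward the stationary CARE
    E = p @ P                 # conditional expectation E_i(P) = sum_j p_ij P_j
    K = -(b * E * a) / (r + b * E * b)            # feedback gains u = K_i x
    P_new = q + a * E * a - (a * E * b) ** 2 / (r + b * E * b)
    if np.max(np.abs(P_new - P)) < 1e-12:
        break
    P = P_new

print(P, K)                   # converged cost matrices and gains
```

The closed-loop dynamics a + b·K are contracting in each mode even though mode 0 is open-loop unstable, because the expectation operator E couples the modes through the transition probabilities.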

We consider an infinite-horizon optimal impulsive control problem in which a given cost function is minimized by choosing control strategies driving the state to a point in a given closed set C∞. We present necessary conditions of optimality in the form of a maximum principle whose boundary condition on the adjoint variable ensures non-degeneracy despite the infinite time horizon. These conditions are given first for conventional systems and then for impulsive control problems. They are proved by considering a family of approximating auxiliary conventional (impulse-free) optimal control problems defined on an increasing sequence of finite time intervals. As far as we know, results of this kind have not been derived previously. © 2010 IFAC.

This paper presents a control method that is effective in reducing the degrading effects of the time delay caused by an unreliable network. In the present application, a controlled DC motor is part of an inverted pendulum and provides the equilibrium of the system. The DC motor is controlled remotely through the unreliable network, which introduces a time delay in the control signal. A predictive technique is used to render the system delay-free, and a robust digital sliding-mode controller is proposed to control the resulting delay-free system. Because of the random conditions of network operation, a delay-time detection and accommodation strategy is also proposed. A computer simulation illustrates the design procedure and the effectiveness of the proposed method. © 2011 IEEE.
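
The prediction idea can be sketched on a scalar plant with a known network delay. The paper applies a robust digital sliding-mode law to the delay-free system; here a plain state feedback stands in for it, and all numbers are hypothetical:

```python
# Delay compensation for a networked loop: scalar plant
# x_{k+1} = a x_k + b u_{k-d}, where the network delays the control by
# d steps.  A model-based predictor estimates x_{k+d} from the current
# measurement and the buffered in-flight inputs, so the feedback design
# sees a delay-free plant.
a, b, d = 1.1, 1.0, 3           # unstable plant, 3-step network delay
K = 0.6                         # places the delay-free pole at a - b*K = 0.5

x = 1.0
u_buf = [0.0] * d               # inputs still "in flight" in the network
for k in range(60):
    # predictor: x_{k+d} = a^d x_k + sum_i a^(d-1-i) b u_buf[i]
    x_pred = a ** d * x + sum(a ** (d - 1 - i) * b * u_buf[i] for i in range(d))
    u = -K * x_pred             # control computed on the delay-free prediction
    x = a * x + b * u_buf[0]    # plant receives the d-step-old input
    u_buf = u_buf[1:] + [u]

print(abs(x))                   # state driven to the origin despite the delay
```

With an exact model, the closed loop behaves as x_{k+1} = (a − bK) x_k after the initial d-step transient; model mismatch or a time-varying delay is what motivates the paper's detection and accommodation strategy.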

This paper studies the asymptotic optimality of discrete-time Markov decision processes (MDPs) with general state and action spaces and with weak and strong interactions. Using an approach similar to the one developed by Liu, Zhang, and Yin [Appl. Math. Optim., 44 (2001), pp. 105-129], the idea is to consider an MDP with general state and action spaces and to reduce the dimension of the state space by considering an averaged model. This formulation is often described by introducing a small parameter ε > 0 in the definition of the transition kernel, leading to a singularly perturbed Markov model with two time scales. Our objective is twofold. First, it is shown that the value function of the control problem for the perturbed system converges to the value function of a limit averaged control problem as ε goes to zero. Second, it is proved that a feedback control policy for the original problem, defined from an optimal feedback policy for the limit problem, is asymptotically optimal. Our work extends existing results in two directions: the underlying MDP is defined on general state and action spaces, and we do not impose strong conditions on the recurrence structure of the MDP, such as Doeblin's condition.
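
The averaging mechanism can be illustrated on a tiny finite chain under a fixed policy (so only the evaluation side of the MDP is shown, and all numbers are hypothetical). States {0,1} and {2,3} form two groups; within-group transitions are fast, between-group transitions have probability of order ε:

```python
import numpy as np

# Two-timescale chain P_eps = (1-eps) P_fast + eps P_slow and its
# averaged model: each group collapses to one state whose reward is the
# fast chain's within-group stationary average.
P_fast = np.array([[0.5, 0.5, 0.0, 0.0],
                   [0.7, 0.3, 0.0, 0.0],
                   [0.0, 0.0, 0.4, 0.6],
                   [0.0, 0.0, 0.2, 0.8]])
P_slow = np.array([[0.0, 0.0, 0.5, 0.5],
                   [0.0, 0.0, 1.0, 0.0],
                   [0.5, 0.5, 0.0, 0.0],
                   [1.0, 0.0, 0.0, 0.0]])
r = np.array([1.0, 2.0, 5.0, 3.0])       # per-stage reward

def stationary(P):
    """Stationary distribution of an irreducible stochastic matrix."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

def avg_reward(eps):
    """Long-run average reward of the perturbed chain."""
    return stationary((1 - eps) * P_fast + eps * P_slow) @ r

# Averaged (aggregated) model over the two groups.
mu = [stationary(P_fast[:2, :2]), stationary(P_fast[2:, 2:])]
r_bar = np.array([mu[0] @ r[:2], mu[1] @ r[2:]])
P_bar = np.array([[mu[0] @ P_slow[:2, :2].sum(1), mu[0] @ P_slow[:2, 2:].sum(1)],
                  [mu[1] @ P_slow[2:, :2].sum(1), mu[1] @ P_slow[2:, 2:].sum(1)]])
v_limit = stationary(P_bar) @ r_bar

print(avg_reward(0.1), avg_reward(0.001), v_limit)
```

As ε shrinks, the average reward of the perturbed chain approaches the value of the aggregated model, which is the uncontrolled analogue of the value-function convergence stated in the abstract.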

A probabilistic indirect adaptive controller is proposed for a general class of nonlinear multivariate discrete-time systems. The proposed probabilistic framework incorporates input-dependent noise prediction parameters in the derivation of the optimal control law. Moreover, because noise can be nonstationary in practice, the proposed adaptive control algorithm provides an elegant method for estimating and tracking it. For illustration, the method is applied to the affine class of nonlinear multivariate discrete-time systems, and the desired result is obtained: the optimal control law is determined by solving a cubic equation, and the distribution of the tracking error is shown to be Gaussian with zero mean. The efficiency of the proposed scheme is demonstrated numerically through the simulation of an affine nonlinear system.
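
The cubic-equation step can be illustrated generically. The toy scalar cost below, a tracking term plus a hypothetical input-dependent noise penalty, is a stand-in (not the paper's derivation) whose stationarity condition dJ/du = 0 is a cubic in u:

```python
import numpy as np

# Hypothetical scalar cost: tracking error plus a penalty c + d*u^2 that
# grows with the input, mimicking input-dependent noise.  Its gradient
# is a cubic, solved here with numpy's polynomial root finder.
a, b, ref = 0.5, 1.0, 2.0       # plant offset, input gain, reference
c, d = 0.2, 0.1                 # noise-penalty parameters

def J(u):
    return (a + b * u - ref) ** 2 + (c + d * u ** 2) ** 2

# dJ/du = 2 b (a + b u - ref) + 4 d u (c + d u^2): a cubic in u
coeffs = [4 * d ** 2, 0.0, 2 * b ** 2 + 4 * c * d, 2 * b * (a - ref)]
roots = np.roots(coeffs)
real_roots = roots[np.abs(roots.imag) < 1e-9].real
u_opt = real_roots[np.argmin(J(real_roots))]    # minimizing real root

print(u_opt, J(u_opt))
```

For these parameters the cubic has exactly one real root (its derivative is strictly positive), so the optimal control is unambiguous, mirroring the closed-form flavor of the result reported in the abstract.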

Given that landfills are depletable and replaceable resources, the right approach to landfill management is to design an optimal sequence of landfills rather than to design each landfill separately. In this paper we use optimal control models, with mixed elements of continuous- and discrete-time problems, to determine an optimal sequence of landfills with respect to their capacity and lifetime. The resulting optimization problems involve splitting a planning horizon into several subintervals whose lengths have to be decided. In each subinterval, costs whose amount depends on the decision variables have to be borne. The results may be applied to other economic problems, such as private and public investments or consumption decisions on durable goods.

In this paper, we devise a separation principle for the finite-horizon quadratic optimal control problem of continuous-time Markovian jump linear systems driven by a Wiener process, under partial observations. We assume that the output variable and the jump parameters are available to the controller. The goal is to design a dynamic Markovian jump controller such that the closed-loop system minimizes a quadratic cost functional over a finite horizon. As in the case with no jumps, we show that an optimal controller can be obtained from two coupled Riccati differential equations, one associated with the optimal control problem when the state variable is available and the other with the optimal filtering problem. This constitutes a separation principle for the finite-horizon quadratic optimal control of continuous-time Markovian jump linear systems. For the time-invariant case, we analyze the asymptotic behavior of the solution of the interconnected Riccati differential equations, namely its convergence to the solution of the associated set of coupled algebraic Riccati equations, as well as the mean-square stabilizing property of this limiting solution. When there is only one mode of operation, our results coincide with the traditional ones for LQG control of continuous-time linear systems.
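
The control-side coupled Riccati differential equations can be sketched for a scalar two-mode system by integrating backward in time with Euler steps. The numbers are hypothetical, and the filtering Riccati equation of the separation principle is omitted:

```python
import numpy as np

# Backward Euler integration of the two coupled scalar Riccati ODEs
# -dP_i/dt = 2 a_i P_i + q_i - (b_i P_i)^2 / r_i + sum_j Lam_ij P_j,
# with terminal condition P_i(T) = 0, for a 2-mode jump linear system.
a = np.array([0.5, -1.0])          # mode-dependent drift (mode 0 unstable)
b = np.array([1.0, 1.0])           # input gains
q = np.array([1.0, 1.0])           # running state weights
r = np.array([1.0, 1.0])           # running control weights
Lam = np.array([[-1.0, 1.0],       # generator of the jump chain (rows sum to 0)
                [ 2.0, -2.0]])

T, dt = 5.0, 1e-3
P = np.array([0.0, 0.0])           # terminal condition
for _ in range(int(T / dt)):       # march backward from t = T to t = 0
    dP = 2 * a * P + q - (b * P) ** 2 / r + Lam @ P
    P = P + dt * dP

print(P)                            # near the coupled-ARE solution for long T
```

As the horizon grows, the backward solution flattens out at the coupled algebraic Riccati solution, which is the time-invariant asymptotic behavior analyzed in the paper.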

The main goal of this paper is to apply the policy iteration algorithm (PIA) to the long-run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space, with a compact action space depending on the state variable. To this end, we first derive some important properties of a pseudo-Poisson equation associated with the problem. It is then shown that, under some classical hypotheses, the PIA converges to a solution satisfying the optimality equation, and that this solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
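
The structure of the algorithm is easiest to see on a small finite-state average-cost MDP, used here as a stand-in for the PDMP setting; all transition and cost data are hypothetical. Evaluation solves the (pseudo-)Poisson equation g + h = c_f + P_f h with the normalization h[0] = 0, and improvement is greedy in the h-values:

```python
import numpy as np

# Policy iteration for the long-run average cost criterion on a
# 3-state, 2-action MDP.
n = 3                                         # states
P = np.array([[[0.5, 0.3, 0.2],               # P[a, x, y]: action a, x -> y
               [0.1, 0.6, 0.3],
               [0.3, 0.3, 0.4]],
              [[0.2, 0.4, 0.4],
               [0.4, 0.2, 0.4],
               [0.25, 0.5, 0.25]]])
c = np.array([[2.0, 0.5],                     # c[x, a]: cost of a in state x
              [1.0, 3.0],
              [0.5, 1.0]])

policy = np.zeros(n, dtype=int)
for _ in range(20):
    Pf = P[policy, np.arange(n)]              # transitions under the policy
    cf = c[np.arange(n), policy]
    A = np.zeros((n + 1, n + 1))              # unknowns: (g, h[0..n-1])
    A[:n, 0] = 1.0                            # coefficient of the gain g
    A[:n, 1:] = np.eye(n) - Pf
    A[n, 1] = 1.0                             # normalization h[0] = 0
    sol = np.linalg.solve(A, np.append(cf, 0.0))
    g, h = sol[0], sol[1:]
    Q = c + np.einsum('axy,y->xa', P, h)      # one-step lookahead values
    new_policy = Q.argmin(axis=1)
    if np.array_equal(new_policy, policy):    # no improvement: stop
        break
    policy = new_policy

print(policy, g)                              # optimal policy and average cost
```

Each pass strictly improves the average cost until the policy is a fixed point of the greedy step, which is the finite-state analogue of the convergence result stated in the abstract.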

This work is concerned with the existence of an optimal control strategy for the long-run average continuous control problem of piecewise-deterministic Markov processes (PDMPs). In Costa and Dufour (2008), sufficient conditions were derived to ensure the existence of an optimal control by using the vanishing discount approach; these conditions were mainly expressed in terms of the relative difference of the α-discount value functions. The main goal of this paper is to derive tractable conditions, directly related to the primitive data of the PDMP, that ensure the existence of an optimal control; it can thus be seen as a continuation of Costa and Dufour (2008). Our main assumptions are written in terms of integro-differential inequalities related to the so-called expected growth condition, and of geometric convergence of the post-jump location kernel associated with the PDMP. An example based on the capacity expansion problem illustrates the possible applications of the results.

Previous work demonstrated that a mixture of NH₄Cl and KNO₃ as nitrogen source was beneficial to fed-batch Arthrospira (Spirulina) platensis cultivation, in terms of either lower costs or higher cell concentration. On the basis of those results, this study focused on the use of a cheaper nitrogen source mixture, namely (NH₄)₂SO₄ plus NaNO₃, varying the ammonium feeding time (T = 7-15 days), either controlling the pH by CO₂ addition or not. A. platensis was cultivated in mini-tanks at 30 °C, 156 μmol photons m⁻² s⁻¹, and a starting cell concentration of 400 mg L⁻¹, on a modified Schlosser medium. T = 13 days under pH control was selected as the optimum condition, ensuring the best biomass production (maximum cell concentration of 2911 mg L⁻¹, cell productivity of 179 mg L⁻¹ d⁻¹, and specific growth rate of 0.77 d⁻¹) and satisfactory protein and lipid contents (around 30% each). (C) 2011 Elsevier Ltd. All rights reserved.

Control of chaos in the single-mode optically pumped far-infrared ¹⁵NH₃ laser is experimentally demonstrated using continuous time-delay control. Both the Lorenz spiral chaos and the detuned period-doubling chaos exhibited by the laser have been controlled. In the Lorenz spiral chaos regime, the chaos has been controlled both to cw output, with corrections of only a fraction of a percent needed to keep the laser there, and to period one. In the period-doubling chaos regime, the laser has been controlled to both the period-one and period-two states.
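
The principle of time-delay feedback can be shown on a discrete-map analogue rather than the laser equations: stabilizing the unstable fixed point (the "period-one" state) of the chaotic logistic map with the small correction u_n = K (x_n − x_{n−1}), which vanishes once the orbit is controlled. Map and gain are hypothetical, not fitted to the laser:

```python
# Pyragas-type time-delay feedback on the logistic map at a chaotic
# parameter value.  The feedback uses only the difference between the
# current and the delayed state, so it is non-invasive at the target.
r, K = 3.8, 0.7                 # chaotic map parameter, feedback gain
x_star = 1.0 - 1.0 / r          # unstable fixed point (period one)

x_prev, x = x_star - 0.04, x_star + 0.06   # start near the target orbit
for _ in range(300):
    u = K * (x - x_prev)        # time-delay feedback (one-step delay)
    x_prev, x = x, r * x * (1.0 - x) + u

print(abs(x - x_star))          # residual error after control
```

Linearizing around the fixed point shows the controlled multipliers lie inside the unit circle for K between roughly 0.4 and 1 at this r, which is why the small delayed-difference correction suffices, much like the fraction-of-a-percent corrections reported for the laser.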

The importance of the rate of change of the pollution stock in determining damage to the environment has been an issue of increasing concern in the literature. This paper uses a three-sector (economy, population, and environment), non-linear, discrete-time, calibrated model to examine pollution control. The model explicitly links economic growth to the health of the environment: the stock of natural resources is affected by the rate of pollution flows through their impact on its regenerative capacity. This sheds useful insight into pollution control strategies, particularly in developing countries where environmental resources are crucial for production in many sectors of the economy. Simulation exercises suggest that, under plausible assumptions, it is possible to reverse undesirable transient dynamics through pollution control expenditure, but that this depends on the strategy used. The best strategy is to spend money fostering the development of production technologies that reduce pollution, rather than spending it dealing with the effects of the pollution flow into the environment. (C) 2001 Elsevier Science Ltd. All rights reserved.
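
The contrast between the two strategies can be caricatured in a few lines of discrete-time dynamics. The model below is hypothetical, chosen only to illustrate the mechanism (shrinking the emission coefficient compounds over time, while cleanup only offsets a fixed amount of each period's flow), and is not taken from the paper's calibrated model:

```python
# Toy comparison of two pollution-control strategies on the stock
# dynamics P_{t+1} = (1 - delta) P_t + emissions - cleanup.
T, delta = 100, 0.1              # horizon, natural decay rate of the stock
output = 1.0                     # constant economic output per period

phi = 1.0                        # emission coefficient under strategy A
P_tech = 0.0                     # pollution stock, technology strategy
P_clean = 0.0                    # pollution stock, cleanup strategy

for _ in range(T):
    P_tech = (1 - delta) * P_tech + phi * output
    phi *= 0.95                  # strategy A: emissions fall 5% per period
    P_clean = (1 - delta) * P_clean + output - 0.5   # strategy B: fixed cleanup

print(P_tech, P_clean)           # technology route ends with the lower stock
```

Under strategy B the stock settles near the steady state (output − cleanup)/delta, whereas under strategy A the compounding reduction in emissions drives the stock toward zero.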

The theory of fractional calculus goes back to the beginnings of differential calculus, but its application has received attention only recently. Some work has been developed in the area of automatic control, but the proposed algorithms are still at a research stage. This paper discusses a novel method, with two degrees of freedom, for the design of fractional discrete-time derivatives. The performance of several approximations of fractional derivatives is investigated from the perspective of nonlinear system control.
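
One common way to discretize a fractional derivative, used as background for designs like the one discussed above, is the Grünwald-Letnikov approximation; the paper's own two-degree-of-freedom method is not reproduced here:

```python
import numpy as np

# Grünwald-Letnikov approximation of the order-alpha discrete derivative:
# D^alpha x[n] ~ h^(-alpha) * sum_k c_k x[n-k],  c_k = (-1)^k binom(alpha, k).
def gl_coeffs(alpha, n):
    """Binomial coefficients (-1)^k binom(alpha, k), via the recursion
    c_k = c_{k-1} (k - 1 - alpha) / k."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    return c

def gl_derivative(x, alpha, h=1.0):
    """Fractional derivative of a sampled signal x with step h."""
    c = gl_coeffs(alpha, len(x))
    return np.array([c[:n + 1] @ x[n::-1] for n in range(len(x))]) / h ** alpha

x = np.linspace(0.0, 1.0, 11) ** 2          # sample signal x(t) = t^2
print(gl_derivative(x, 0.5)[:4])            # half-order derivative
```

The scheme interpolates the classical cases: for alpha = 1 the coefficients collapse to (1, -1, 0, ...), giving the ordinary backward difference, and for alpha = 0 they collapse to the identity.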