918 results for Linear quadratic Gaussian control
Abstract:
The linear quadratic Gaussian control of discrete-time Markov jump linear systems is addressed in this paper, first for state feedback and then for dynamic output feedback using state estimation. In the model studied, the problem horizon is defined by a stopping time τ which represents either the occurrence of a fixed number N of failures or repairs (T_N), or the occurrence of a crucial failure event (τ_Δ), after which the system is paralyzed. From the constructive method used here a separation principle holds, and the solutions are given in terms of a Kalman filter and a sequence of state-feedback controls. The control gains are obtained by recursions on a set of algebraic Riccati equations in the former case, or on a coupled set of algebraic Riccati equations in the latter case. Copyright © 2005 IFAC.
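As a rough illustration of the separation structure described above, the sketch below runs a backward Riccati recursion for the state-feedback gains and a Kalman-filter recursion for the state estimate, assuming a single linear mode with placeholder matrices A, B, C, Q, R, W, V and horizon N; the paper's recursions are additionally coupled across the Markov modes and the stopping time τ.

```python
# Minimal sketch: finite-horizon LQG via a separation principle, assuming a
# single linear mode (the paper couples these recursions across Markov modes).
# All matrices below are illustrative placeholders.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])        # state transition
B = np.array([[0.0], [0.1]])                  # control input
C = np.array([[1.0, 0.0]])                    # measurement
Q = np.eye(2); R = np.array([[0.1]])          # state / control weights
W = 0.01 * np.eye(2); V = np.array([[0.05]])  # process / measurement noise
N = 50                                        # horizon length

# Backward Riccati recursion for the state-feedback gains K_k.
P = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()

# Forward Kalman-filter recursion for the state estimate xhat_k.
def kalman_step(xhat, Sigma, u, y):
    xpred = A @ xhat + B @ u                  # time update
    Spred = A @ Sigma @ A.T + W
    L = Spred @ C.T @ np.linalg.inv(C @ Spred @ C.T + V)   # measurement update
    xnew = xpred + L @ (y - C @ xpred)
    return xnew, (np.eye(2) - L @ C) @ Spred

# Closed loop (certainty equivalence): u_k = -K_k xhat_k.
```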
Abstract:
In recent years, the analysis and synthesis of (mechanical) control systems in descriptor form has been established. This general description of dynamical systems is important for many applications in mechanics and mechatronics, in electrical and electronic engineering, and in chemical engineering as well. This contribution deals with linear mechanical descriptor systems and their control design with respect to a quadratic performance criterion. Here, the notion of properness plays an important role in deciding whether the standard Riccati approach can be applied as usual. Properness and non-properness distinguish between the cases in which the descriptor system is governed exclusively by the control input or also by its higher-order time derivatives. In the unusual case of non-proper systems, a quite different optimal control design problem has to be considered. Both cases are solved completely.
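A minimal sketch of the proper case is given below: assuming a semi-explicit, index-one descriptor system with an invertible algebraic block A22, the algebraic variables are eliminated and the quadratic design reduces to a standard continuous-time algebraic Riccati equation; all matrices are illustrative placeholders rather than the systems treated in the contribution.

```python
# Minimal sketch, assuming a semi-explicit (index-1, proper) descriptor system
#   x1' = A11 x1 + A12 x2 + B1 u,   0 = A21 x1 + A22 x2 + B2 u,  A22 invertible.
# The algebraic variables x2 are eliminated and the regulator follows from a
# standard CARE on the reduced dynamics. Matrices are placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A11 = np.array([[0.0, 1.0], [-2.0, -0.5]])
A12 = np.array([[0.0], [1.0]])
A21 = np.array([[1.0, 0.0]])
A22 = np.array([[-1.0]])
B1  = np.array([[0.0], [1.0]])
B2  = np.array([[0.5]])

# Eliminate x2 = -A22^{-1} (A21 x1 + B2 u).
A22inv = np.linalg.inv(A22)
Ared = A11 - A12 @ A22inv @ A21
Bred = B1 - A12 @ A22inv @ B2

Q = np.eye(2); R = np.array([[1.0]])
P = solve_continuous_are(Ared, Bred, Q, R)
K = np.linalg.solve(R, Bred.T @ P)   # u = -K x1
print("state-feedback gain K =", K)
```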
Abstract:
The sensitivity of the output of a linear operator to its input can be quantified in various ways. In control theory, the input is usually interpreted as a disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with an imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of linear quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite-power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite-power or directionally generic inputs whose anisotropy is bounded above by a ≥ 0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on the multidimensional integer lattice to yield the mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation-invariant operators over such fields.
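For a zero-mean Gaussian vector with covariance Σ, the minimal Kullback-Leibler divergence to the Gaussians N(0, λI) mentioned above has a closed form, attained at λ = tr(Σ)/m; the short sketch below evaluates it for a placeholder covariance (the general finite-power definition involves the same minimization over scalar-covariance Gaussian references).

```python
# Minimal sketch: anisotropy of a zero-mean Gaussian vector as the smallest
# KL divergence to N(0, lambda*I); the minimum is at lambda = trace(Sigma)/m
# and equals -0.5 * log det(Sigma / lambda). Covariance is a placeholder.
import numpy as np

def anisotropy(Sigma):
    m = Sigma.shape[0]
    lam = np.trace(Sigma) / m            # optimal scalar covariance level
    _, logdet = np.linalg.slogdet(Sigma / lam)
    return -0.5 * logdet                 # = min_lambda KL(N(0,Sigma) || N(0,lambda I))

Sigma = np.array([[2.0, 0.3], [0.3, 0.5]])
print("anisotropy:", anisotropy(Sigma))  # zero iff Sigma is a scalar matrix
```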
Abstract:
For quantum systems with linear dynamics in phase space, much of classical feedback control theory applies. However, there are some questions that are sensible only for the quantum case: given a fixed interaction between the system and the environment, what is the optimal measurement on the environment for a particular control problem? We show that for a broad class of optimal (state-based) control problems (the stationary linear-quadratic-Gaussian class), this question is a semidefinite program. Moreover, the answer also applies to Markovian (current-based) feedback.
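The sketch below shows the generic shape of such a semidefinite program in cvxpy (minimize a linear functional of a positive-semidefinite variable under affine constraints); the cost matrix and the normalization constraint are placeholders, not the quantum LQG data of the paper.

```python
# Minimal sketch of a generic semidefinite program; C and the constraint are
# illustrative placeholders, not the paper's quantum LQG quantities.
import numpy as np
import cvxpy as cp

n = 3
C = np.diag([1.0, 2.0, 3.0])          # placeholder cost matrix
X = cp.Variable((n, n), PSD=True)     # positive-semidefinite decision variable

constraints = [cp.trace(X) == 1]      # placeholder normalization constraint
problem = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
problem.solve()
print("optimal value:", problem.value)
```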
Abstract:
Here, we study the stable integration of real-time optimization (RTO) with model predictive control (MPC) in a three-layer structure. The intermediate layer is a quadratic programming problem whose objective is to compute reachable targets for the MPC layer that lie at the minimum distance from the optimum set points produced by the RTO layer. The lower layer is an infinite-horizon MPC with guaranteed stability and additional constraints that force the feasibility and convergence of the target calculation layer. The case in which there is polytopic uncertainty in the steady-state model used in the target calculation is also considered. The dynamic part of the MPC model is also considered unknown, but it is assumed to be represented by one of the models of a discrete set of models. The efficiency of the methods presented here is illustrated with the simulation of a low-order system. (C) 2010 Elsevier Ltd. All rights reserved.
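A minimal sketch of the intermediate target-calculation layer is given below: a quadratic program that finds a reachable steady state closest to the RTO set point, subject to the steady-state model and an input bound. The model, set point, and bound are illustrative placeholders; the paper additionally links this layer to the infinite-horizon MPC constraints and to polytopic model uncertainty.

```python
# Minimal sketch of a target-calculation QP: find a reachable steady state
# (xs, us) closest to the RTO set point. All numbers are placeholders.
import numpy as np
import cvxpy as cp

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
y_rto = np.array([1.0, 0.2])          # optimum set point from the RTO layer
u_max = 2.0

xs = cp.Variable(2)
us = cp.Variable(1)
steady_state = [xs == A @ xs + B @ us, cp.abs(us) <= u_max]
target_qp = cp.Problem(cp.Minimize(cp.sum_squares(xs - y_rto)), steady_state)
target_qp.solve()
print("reachable target:", xs.value, "steady input:", us.value)
```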
Abstract:
This paper deals with a stochastic optimal control problem involving discrete-time jump Markov linear systems. The jumps or changes between the system operation modes evolve according to an underlying Markov chain. In the model studied, the problem horizon is defined by a stopping time τ which represents either the occurrence of a fixed number N of failures or repairs (T_N), or the occurrence of a crucial failure event (τ_Δ), after which the system is brought to a halt for maintenance. In addition, an intermediary mixed case, in which τ represents the minimum between T_N and τ_Δ, is also considered. These stopping times coincide with some of the jump times of the Markov state, and the information available allows the reconfiguration of the control action at each jump time, in the form of a linear feedback gain. The solution for the linear quadratic problem with complete Markov state observation is presented. The solution is given in terms of recursions on a set of algebraic Riccati equations (ARE) or a coupled set of algebraic Riccati equations (CARE).
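The sketch below illustrates the coupled recursion structure for the complete-observation case, assuming two modes with placeholder system matrices and transition probabilities: at every step the cost-to-go matrices of all modes are mixed through the transition matrix before the usual LQ update. The stopping-time structure of the paper (T_N, τ_Δ) is not modeled here.

```python
# Minimal sketch of a coupled Riccati recursion for a discrete-time Markov
# jump linear system with modes i = 1..M. Data are placeholders.
import numpy as np

A = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[1.0, 0.1], [0.0, 0.9]])]
B = [np.array([[0.0], [0.1]]), np.array([[0.0], [0.05]])]
Q = [np.eye(2), 2.0 * np.eye(2)]
R = [np.array([[0.1]]), np.array([[0.2]])]
Pjump = np.array([[0.95, 0.05], [0.10, 0.90]])   # mode transition probabilities
M, N = 2, 100                                    # number of modes, horizon

P = [Qi.copy() for Qi in Q]
for _ in range(N):
    # expected cost-to-go seen from each mode
    E = [sum(Pjump[i, j] * P[j] for j in range(M)) for i in range(M)]
    K = [np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
         for i in range(M)]
    P = [Q[i] + A[i].T @ E[i] @ (A[i] - B[i] @ K[i]) for i in range(M)]
# u_k = -K[i] x_k whenever the Markov chain is in mode i.
```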
Abstract:
This thesis project studies the agent identity privacy problem in the scalar linear quadratic Gaussian (LQG) control system. For the agent identity privacy problem in LQG control, privacy models and privacy measures have to be established first. The problem depends on a trajectory of correlated data rather than on a single observation. I propose privacy models and the corresponding privacy measures that take these two characteristics into account. The agent identity is a binary hypothesis: Agent A or Agent B. An eavesdropper is assumed to perform a hypothesis test on the agent identity based on the intercepted environment state sequence. The privacy risk is measured by the Kullback-Leibler divergence between the probability distributions of the state sequences under the two hypotheses. By taking into account both the accumulative control reward and the privacy risk, an optimization problem over the policy of Agent B is formulated; the formulated problems address the two design objectives of maximizing the control reward and minimizing the privacy risk. The optimal deterministic privacy-preserving LQG policy of Agent B is a linear mapping, and a sufficient condition is given to guarantee that this policy is time-invariant in the asymptotic regime. An independent Gaussian random variable cannot improve the performance of Agent B. I have conducted a theoretical analysis of the LQG control policy in the agent identity privacy problem and of the trade-off between the control reward and the privacy risk. Finally, the theoretical results are supported by numerical experiments that illustrate the reward-privacy trade-off; the resulting observations and insights are discussed in the last chapter.
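As an illustration of the privacy measure used above, the sketch below evaluates the Kullback-Leibler divergence between two multivariate Gaussian distributions standing in for the state-sequence statistics under the two identity hypotheses; the means and covariances are placeholders.

```python
# Minimal sketch of the privacy measure: KL divergence between the Gaussian
# state-sequence distributions under the two hypotheses. Data are placeholders.
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) for multivariate Gaussians."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu_A, S_A = np.zeros(3), np.eye(3)                 # hypothesis: Agent A
mu_B, S_B = 0.2 * np.ones(3), 1.5 * np.eye(3)      # hypothesis: Agent B
print("privacy risk (KL):", kl_gaussian(mu_B, S_B, mu_A, S_A))
```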
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In this work, linear and nonlinear feedback control techniques for chaotic systems are considered. The optimal nonlinear control design problem is solved using dynamic programming, which reduces the problem to the solution of the Hamilton-Jacobi-Bellman equation. In the present work, the linear feedback control problem is reformulated from the viewpoint of optimal control theory. The formulated theorem expresses explicitly the form of the minimized functional and gives sufficient conditions that allow the use of linear feedback control for a nonlinear system. Numerical simulations for the Rössler system and the Duffing oscillator are provided to show the effectiveness of this method. Copyright © 2005 by ASME.
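A minimal sketch in the spirit of the Duffing example is given below: a fixed linear state-feedback law is applied to a forced Duffing oscillator and the controlled and uncontrolled responses are simulated; the parameter values and the gain are illustrative placeholders, not those used in the paper.

```python
# Minimal sketch: linear state feedback applied to a forced Duffing oscillator
#   x'' + delta*x' - x + x^3 = gamma*cos(omega*t) + u,   u = -K (x, x')^T.
# Parameters and the gain K are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

delta, gamma, omega = 0.25, 0.3, 1.0
K = np.array([2.0, 2.0])                       # linear feedback gain

def duffing(t, s, controlled):
    x, v = s
    u = -K @ s if controlled else 0.0
    return [v, -delta * v + x - x**3 + gamma * np.cos(omega * t) + u]

t_span, s0 = (0.0, 100.0), [0.1, 0.0]
free = solve_ivp(duffing, t_span, s0, args=(False,), max_step=0.05)
ctrl = solve_ivp(duffing, t_span, s0, args=(True,), max_step=0.05)
print("final state without control:", free.y[:, -1])
print("final state with control:   ", ctrl.y[:, -1])
```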
Abstract:
Linear parameter varying (LPV) control is a model-based control technique that takes into account time-varying parameters of the plant. In the case of rotating systems supported by lubricated bearings, the dynamic characteristics of the bearings change in time as a function of the rotating speed. Hence, LPV control can tackle run-up and run-down operational conditions, in which the dynamic characteristics of the rotating system change significantly in time due to the bearings and high vibration levels occur. In this work, the LPV control design for a flexible shaft supported by plain journal bearings is presented. The model used in the LPV control design is updated from experimental unbalance response results, and dynamic coefficients for the entire range of rotating speeds are obtained by numerical optimization. Experimental implementation of the designed LPV control resulted in a strong reduction of vibration amplitudes when crossing the critical speed, without affecting system behavior at sub- or supercritical speeds. (C) 2012 Elsevier Ltd. All rights reserved.
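The sketch below illustrates the parameter-dependent design step with a naive gain-scheduled approximation: speed-dependent bearing stiffness and damping enter a one-mass rotor model, and a state-feedback gain is computed on a grid of rotating speeds. All numerical values are placeholders, and a full LPV synthesis (as used in the paper) would guarantee performance along parameter trajectories rather than pointwise.

```python
# Minimal sketch of a speed-dependent design step (gain-scheduled
# approximation of LPV synthesis). All numbers are placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

def plant(speed):
    k = 1.0e4 + 50.0 * speed        # placeholder speed-dependent stiffness
    c = 10.0 + 0.05 * speed         # placeholder speed-dependent damping
    m = 1.0
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])
    B = np.array([[0.0], [1.0 / m]])
    return A, B

Q, R = np.diag([1.0e4, 1.0]), np.array([[1.0e-2]])
speeds = np.linspace(500.0, 5000.0, 10)            # rotating-speed grid (rpm)
gains = []
for w in speeds:
    A, B = plant(w)
    P = solve_continuous_are(A, B, Q, R)
    gains.append(np.linalg.solve(R, B.T @ P))      # u = -K(w) x
# At run time the gain is interpolated at the measured rotating speed.
```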
Abstract:
In this paper, an Insulin Infusion Advisory System (IIAS) for Type 1 diabetes patients who use insulin pumps for Continuous Subcutaneous Insulin Infusion (CSII) is presented. The purpose of the system is to estimate the appropriate insulin infusion rates. The system is based on a Non-Linear Model Predictive Controller (NMPC) which uses a hybrid model. The model comprises a Compartmental Model (CM), which simulates the absorption of glucose into the blood due to meal intakes, and a Neural Network (NN), which simulates the glucose-insulin kinetics. The NN is a Recurrent NN (RNN) trained with the Real Time Recurrent Learning (RTRL) algorithm. The output of the model consists of short-term glucose predictions and provides input to the NMPC, in order for the latter to estimate the optimum insulin infusion rates. For the development and evaluation of the IIAS, data generated from a Mathematical Model (MM) of a Type 1 diabetes patient have been used. The proposed control strategy is evaluated under multiple meal disturbances, various noise levels, and additional time delays. The results indicate that the implemented IIAS is capable of handling multiple meals, which correspond to realistic meal profiles, large noise levels and time delays.
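A minimal sketch of the receding-horizon step is shown below, with a one-line linear glucose predictor standing in for the paper's hybrid CM + RNN model: infusion rates over a short horizon are chosen to keep predicted glucose near a target, and only the first rate is applied. All numbers (horizon, meal effect, bounds, weights) are illustrative placeholders.

```python
# Minimal sketch of one receding-horizon step for insulin dosing; the
# predictor below is a placeholder for the hybrid CM + RNN model.
import numpy as np
from scipy.optimize import minimize

g0, g_target = 160.0, 100.0        # current and target glucose (mg/dL)
horizon = 6
meal = np.array([20.0, 10.0, 0.0, 0.0, 0.0, 0.0])   # placeholder meal effect

def predict_glucose(insulin_rates):
    # placeholder predictor: glucose rises with meals, falls with insulin
    g, traj = g0, []
    for u, m in zip(insulin_rates, meal):
        g = g + m - 8.0 * u
        traj.append(g)
    return np.array(traj)

def cost(u):
    g = predict_glucose(u)
    return np.sum((g - g_target) ** 2) + 0.1 * np.sum(u ** 2)

bounds = [(0.0, 5.0)] * horizon     # non-negative, bounded infusion rates
result = minimize(cost, x0=np.ones(horizon), bounds=bounds)
print("first infusion rate to apply:", result.x[0])   # receding-horizon move
```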
Abstract:
This paper considers the global synchronisation of a stochastic version of coupled map lattice networks through an innovative stochastic adaptive linear quadratic pinning control methodology. In a stochastic network, each state receives only noisy measurements of its neighbours' states. For such networks we derive a generalised Riccati solution that quantifies and incorporates uncertainty of the forward dynamics and inverse controller in the derivation of the stochastic optimal control law. The generalised Riccati solution is derived using the Lyapunov approach. A probabilistic approximation type algorithm is employed to estimate the conditional distributions of the state and inverse controller from historical data and to quantify model uncertainties. The theoretical derivation is complemented by its validation on a set of representative examples.
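The sketch below shows only the structural ingredients named above, namely a coupled map lattice in which every site sees noisy neighbour measurements and a subset of pinned sites receives a linear feedback toward the map's fixed point; the map, coupling, noise level, and fixed pinning gain are placeholders, whereas the paper computes the control law from a generalized Riccati solution estimated from data.

```python
# Minimal sketch of pinning control on a noisy coupled map lattice; values are
# placeholders and the fixed gain is not the paper's Riccati-based law.
import numpy as np

rng = np.random.default_rng(0)
n, steps, eps, sigma = 50, 200, 0.3, 0.01
pinned = np.arange(0, n, 5)                  # every fifth site is pinned
k_pin = 0.8                                  # fixed linear pinning gain
target = 1.0 - 1.0 / 3.9                     # fixed point of the logistic map
f = lambda x: 3.9 * x * (1.0 - x)            # chaotic logistic map

x = rng.random(n)
for _ in range(steps):
    noisy = x + sigma * rng.standard_normal(n)           # noisy neighbour measurements
    neighbours = 0.5 * (np.roll(noisy, 1) + np.roll(noisy, -1))
    x_new = (1.0 - eps) * f(x) + eps * f(neighbours)
    x_new[pinned] -= k_pin * (noisy[pinned] - target)     # linear pinning feedback
    x = np.clip(x_new, 0.0, 1.0)
print("spread across sites after control:", x.max() - x.min())
```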
Abstract:
We treat the problem of the existence of a location-then-price equilibrium in the circle model with a linear-quadratic type of transportation cost function, which can be either convex or concave. We show the existence of a unique perfect equilibrium for the concave case when the linear and quadratic terms are equal, and of a unique perfect equilibrium for the convex case when the linear term is equal to zero. Aside from these two cases, there are feasible locations for the firms for which no equilibrium in the price subgame exists. Finally, we provide a full taxonomy of the price equilibrium regions in terms of the weights of the linear and quadratic terms in the cost function.
Abstract:
We generalize the Liapunov convexity theorem's version for vectorial control systems driven by linear ODEs of first order $p = 1$, in any dimension $d \in \mathbb{N}$, by including a pointwise state-constraint. More precisely, given $\overline{x}(\cdot) \in W^{p,1}([a,b], \mathbb{R}^d)$ solving the convexified $p$-th order differential inclusion $L_p \overline{x}(t) \in \operatorname{co}\{u_0(t), u_1(t), \ldots, u_m(t)\}$ a.e., consider the general problem consisting in finding bang-bang solutions (i.e. $L_p \hat{x}(t) \in \{u_0(t), u_1(t), \ldots, u_m(t)\}$ a.e.) under the same boundary data, $\hat{x}^{(k)}(a) = \overline{x}^{(k)}(a)$ and $\hat{x}^{(k)}(b) = \overline{x}^{(k)}(b)$ ($k = 0, 1, \ldots, p-1$); but restricted, moreover, by a pointwise state constraint of the type $\langle \hat{x}(t), \omega \rangle \le \langle \overline{x}(t), \omega \rangle$ for all $t \in [a,b]$ (e.g. $\omega = (1, 0, \ldots, 0)$ yielding $\hat{x}_1(t) \le \overline{x}_1(t)$). Previous results in the scalar $d = 1$ case were the pioneering Amar & Cellina paper (dealing with $L_p x(\cdot) = x'(\cdot)$), followed by the Cerf & Mariconda results, who solved the general case of linear differential operators $L_p$ of order $p \ge 2$ with $C^0([a,b])$-coefficients. This paper is dedicated to: focus on the missing case $p = 1$, i.e. using $L_p x(\cdot) = x'(\cdot) + A(\cdot)\,x(\cdot)$; generalize the dimension of $x(\cdot)$ from the scalar case $d = 1$ to the vectorial case $d \in \mathbb{N}$; weaken the coefficients from continuous to integrable, so that $A(\cdot)$ now becomes a $d \times d$ integrable matrix; and allow the directional vector $\omega$ to become a moving AC function $\omega(\cdot)$. Previous vectorial results had constant $\omega$, no matrix (i.e. $A(\cdot) \equiv 0$), and considered: constant control-vertices (Amar & Mariconda) and, more recently, integrable control-vertices (ourselves).
Abstract:
A technique is derived for solving a non-linear optimal control problem by iterating on a sequence of simplified problems in linear quadratic form. The technique is designed to achieve the correct solution of the original non-linear optimal control problem in spite of these simplifications. A mixed approach with a discrete performance index and continuous state variable system description is used as the basis of the design, and it is shown how the global problem can be decomposed into local sub-system problems and a co-ordinator within a hierarchical framework. An analysis of the optimality and convergence properties of the algorithm is presented and the effectiveness of the technique is demonstrated using a simulation example with a non-separable performance index.
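The sketch below illustrates the basic iteration named above in its simplest (single-level, iLQR-style) form: the nonlinear dynamics are repeatedly linearized about the current trajectory and the resulting time-varying LQ problem is solved to update the controls. The scalar dynamics, weights, and horizon are illustrative placeholders, and the hierarchical decomposition into sub-system problems and a co-ordinator is not modeled.

```python
# Minimal sketch: iterate on linear-quadratic subproblems obtained by
# linearizing scalar nonlinear dynamics about the current trajectory.
# Dynamics, weights, and horizon are placeholders.
import numpy as np

dt, N = 0.05, 60
Q, R = 1.0, 0.1                    # quadratic state / control weights

def f(x, u):                       # nonlinear discrete dynamics (scalar)
    return x + dt * (np.sin(x) + u)

u = np.zeros(N)                    # initial control guess
x0 = 1.0
for _ in range(20):                # outer iterations on LQ subproblems
    # forward pass: nominal trajectory under the current controls
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = f(x[k], u[k])
    # backward pass: time-varying LQ gains for the linearized dynamics
    P, p = Q, Q * x[N]
    K, kff = np.empty(N), np.empty(N)
    for k in reversed(range(N)):
        A = 1.0 + dt * np.cos(x[k])            # df/dx along the trajectory
        B = dt                                 # df/du
        H = R + B * P * B
        K[k] = (A * B * P) / H
        kff[k] = (R * u[k] + B * p) / H
        p_new = Q * x[k] + A * p - A * B * P * kff[k]
        P = Q + A * P * (A - B * K[k])
        p = p_new
    # forward pass with updated controls
    xk = x0
    for k in range(N):
        u[k] = u[k] - kff[k] - K[k] * (xk - x[k])
        xk = f(xk, u[k])
print("final state after iterations:", xk)
```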