129 results for linear quadratic Gaussian control
in the Cambridge University Engineering Department Publications Database
Abstract:
Networked control systems (NCSs) have attracted much attention in the past decade due to their many advantages and growing number of applications. Unlike classic control systems, resources in NCSs, such as network bandwidth and communication energy, are often limited, which degrades the closed-loop system performance and may even cause the system to become unstable. Seeking a desirable trade-off between the closed-loop system performance and the limited resources is thus an active area of research. In this paper, we analyze the trade-off between the sensor-to-controller communication rate and the closed-loop system performance measured by the conventional LQG control cost. We present and compare several sensor data schedules, and demonstrate that two event-based sensor data schedules provide a better trade-off than an optimal offline schedule. Simulation examples are provided to illustrate the theory developed in the paper. © 2012 AACC (American Automatic Control Council).
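As a rough illustration of the rate-versus-performance trade-off described above (and not the specific schedules of the paper), the following Python sketch simulates a scalar plant under a hypothetical threshold-based, event-triggered sensor schedule and reports the empirical communication rate alongside an LQG-style average cost. The plant parameters, noise levels and threshold `delta` are made-up example values.

```python
import numpy as np

# Scalar plant x_{k+1} = a*x_k + b*u_k + w_k with noisy measurement y_k = x_k + v_k.
# All values below are made-up example numbers.
a, b, q, r = 0.95, 1.0, 1.0, 0.1          # plant and LQG weights
sigma_w, sigma_v, delta = 0.1, 0.1, 0.3   # noise levels and event threshold

# Steady-state LQR gain from the scalar Riccati equation.
p = q
for _ in range(500):
    p = q + a * a * p - (a * p * b) ** 2 / (r + b * b * p)
k_gain = a * p * b / (r + b * b * p)

rng = np.random.default_rng(0)
T = 5000
x, x_hat = 0.0, 0.0
cost, sent = 0.0, 0
for _ in range(T):
    y = x + sigma_v * rng.standard_normal()
    if abs(y - x_hat) > delta:            # event-triggered transmission
        x_hat = y
        sent += 1
    u = -k_gain * x_hat
    cost += q * x * x + r * u * u
    x = a * x + b * u + sigma_w * rng.standard_normal()
    x_hat = a * x_hat + b * u             # open-loop estimate between transmissions

print(f"communication rate = {sent / T:.2f}, average LQG-style cost = {cost / T:.3f}")
```

Raising `delta` lowers the communication rate and, in general, raises the average cost, which is the trade-off the paper studies.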
Abstract:
This paper is concerned with modelling the strategic interactions between the human driver and the vehicle's active front steering (AFS) controller in a path-following task where the two controllers hold different target paths. The work is aimed at extending the use of mathematical models to represent driver steering behaviour in complicated driving situations. Two game-theoretic approaches, namely the linear quadratic game and non-cooperative model predictive control (non-cooperative MPC), are used to develop the driver-AFS interactive steering control model. For each approach, the open-loop Nash steering control solution is derived; the influences of the path-following weights, preview and control horizons, driver time delay and arm neuromuscular system (NMS) dynamics are investigated, and the CPU time consumed is recorded. It is found that the two approaches give identical time histories and control gains, while the non-cooperative MPC method uses much less CPU time. Specifically, it is observed that introducing a weight on the integral of vehicle lateral displacement error helps to eliminate the steady-state path-following error, and that increasing the preview horizon and NMS natural frequency or decreasing the time delay and NMS damping ratio improves the path-following accuracy. © 2013 Copyright Taylor and Francis Group, LLC.
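The open-loop Nash solution derived in the paper involves coupled Riccati-type equations; as a much simpler illustration of two controllers holding different target paths while acting on the same vehicle, the sketch below iterates least-squares best responses on a toy double-integrator lateral model. The weights, horizon, target paths and the shared input channel are all made-up simplifications, not the paper's driver/AFS structure.

```python
import numpy as np

# Toy lateral model x = [lateral position, lateral velocity]; both "controllers"
# act through the same input channel and track different target paths.
dt, N = 0.05, 40
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])                # lateral position output

# Stacked prediction: positions over the horizon = Phi x0 + G U1 + G U2.
Phi = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((N, N))
for k in range(N):
    for j in range(k + 1):
        G[k, j] = (C @ np.linalg.matrix_power(A, k - j) @ B)[0, 0]

x0 = np.zeros(2)
ref1, ref2 = np.full(N, 0.0), np.full(N, 0.5)   # the two (made-up) target paths
w1, w2, rho1, rho2 = 1.0, 1.0, 0.5, 0.5

def best_response(ref, w, rho, U_other):
    """Least-squares best response given the other controller's input sequence."""
    H = np.vstack([np.sqrt(w) * G, np.sqrt(rho) * np.eye(N)])
    d = np.concatenate([np.sqrt(w) * (ref - Phi @ x0 - G @ U_other), np.zeros(N)])
    return np.linalg.lstsq(H, d, rcond=None)[0]

U1, U2 = np.zeros(N), np.zeros(N)
for _ in range(100):                      # iterate best responses toward a Nash point
    U1 = best_response(ref1, w1, rho1, U2)
    U2 = best_response(ref2, w2, rho2, U1)

y = Phi @ x0 + G @ (U1 + U2)
print("final lateral position:", y[-1])   # typically lies between the two targets
```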
Abstract:
While a large amount of research over the past two decades has focused on discrete abstractions of infinite-state dynamical systems, many structural and algorithmic details of these abstractions remain unknown. To clarify the computational resources needed to perform discrete abstraction, this paper examines the algorithmic properties of an existing method for deriving finite-state systems that are bisimilar to linear discrete-time control systems. We explicitly characterise the structure of the finite-state system, show that it can be enormous compared to the original linear system, and give conditions that guarantee the finite-state system is reasonably sized and efficiently computable. Although constructing the finite-state system is generally impractical, special cases may be amenable to satisfiability-based verification techniques. ©2009 IEEE.
Abstract:
This paper explores the use of Monte Carlo techniques in deterministic nonlinear optimal control. Inter-dimensional population Markov Chain Monte Carlo (MCMC) techniques are proposed to solve the nonlinear optimal control problem. The linear quadratic and Acrobot problems are studied to demonstrate the successful application of the relevant techniques.
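The following minimal sketch shows the general idea of applying MCMC to a deterministic optimal control problem: a Metropolis-Hastings random walk over open-loop control sequences, with the quadratic cost mapped to a Boltzmann-type target. It is not the inter-dimensional population sampler of the paper, and the plant, temperature and step size are illustrative.

```python
import numpy as np

# Scalar linear-quadratic problem: x_{k+1} = a*x_k + b*u_k,
# J(U) = sum_k (q*x_k^2 + r*u_k^2) + q*x_N^2.  All values are illustrative.
a, b, q, r, N, x0 = 0.9, 1.0, 1.0, 0.1, 20, 5.0

def cost(U):
    x, J = x0, 0.0
    for u in U:
        J += q * x * x + r * u * u
        x = a * x + b * u
    return J + q * x * x

# Metropolis-Hastings random walk over control sequences,
# target proportional to exp(-J / temperature).
rng = np.random.default_rng(1)
U, J = np.zeros(N), cost(np.zeros(N))
best_U, best_J = U.copy(), J
temperature, step = 0.5, 0.3
for _ in range(20000):
    U_prop = U + step * rng.standard_normal(N)
    J_prop = cost(U_prop)
    if J_prop < J or rng.random() < np.exp((J - J_prop) / temperature):
        U, J = U_prop, J_prop
        if J < best_J:
            best_U, best_J = U.copy(), J
print(f"best sampled cost: {best_J:.3f}")
```

For this convex problem the sampled minimum approaches the LQ optimum; the appeal of MCMC, as the abstract notes, is that the same machinery extends to nonlinear problems such as the Acrobot.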
Abstract:
The contribution described in this paper is an algorithm for learning nonlinear, reference-tracking control policies given no prior knowledge of the dynamical system and limited interaction with the system during the learning process. Concepts from reinforcement learning, Bayesian statistics and classical control are brought together in the formulation of this algorithm, which can be viewed as a form of indirect self-tuning regulator. On a reference-tracking task using a simulated inverted pendulum, it was shown to yield generally improved performance over the best controller derived from the standard linear quadratic method, using only 30 s of total interaction with the system. Finally, the algorithm was shown to work on a simulated double pendulum, demonstrating its ability to solve nontrivial control tasks. © 2011 IEEE.
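For context, the baseline "standard linear quadratic method" can be reproduced for a linearised inverted pendulum by iterating the discrete-time Riccati equation, as in the sketch below; the plant parameters are illustrative and this shows only the LQR baseline, not the learning algorithm of the paper.

```python
import numpy as np

# Linearised, Euler-discretised inverted pendulum: x = [angle, angular rate].
# Parameters are illustrative.
g, l, dt = 9.81, 0.5, 0.02
A = np.array([[1.0, dt], [g / l * dt, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Discrete algebraic Riccati equation by fixed-point iteration.
P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K

print("LQR gain:", K)
print("closed-loop spectral radius:", max(abs(np.linalg.eigvals(A - B @ K))))
```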
Abstract:
An online scheduling of the parameter that additionally ensures closed-loop stability was presented. Attention was given to saturated linear low-gain control laws. Null controllability of the considered linear systems was assumed. The family of low-gain control laws achieved semiglobal stabilization.
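One common way to realise such a scheduled low-gain family (not necessarily the construction in the abstract) is to parameterise an LQR design by a scalar eps and select, online, the largest eps whose control action stays within the saturation limit. A minimal sketch on a double integrator with illustrative values:

```python
import numpy as np

# Scheduled low-gain sketch: a family of LQR gains K(eps) is precomputed for a
# double integrator, and at each step the largest eps whose control stays within
# the saturation limit is selected. All numbers are illustrative.
A = np.array([[1.0, 0.1], [0.0, 1.0]])    # null-controllable double integrator
B = np.array([[0.0], [0.1]])
u_max = 1.0

def dare_gain(Q, R, iters=3000):
    """LQR gain via fixed-point iteration of the discrete Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return K

eps_grid = [10.0 ** e for e in range(-4, 1)]          # 1e-4 ... 1
gains = {eps: dare_gain(eps * np.eye(2), np.eye(1)) for eps in eps_grid}

x = np.array([8.0, 0.0])
for t in range(400):
    # Online schedule: largest eps (fastest response) that does not saturate.
    for eps in reversed(eps_grid):
        u = float(-(gains[eps] @ x)[0])
        if abs(u) <= u_max:
            break
    x = A @ x + (B * np.clip(u, -u_max, u_max)).ravel()
print("final state:", x)
```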
Abstract:
The solution time of the online optimization problems inherent to Model Predictive Control (MPC) can become a critical limitation when working in embedded systems. One proposed approach to reducing the solution time is to split the optimization problem into a number of reduced-order problems, solve these reduced-order problems in parallel, and select the solution which minimises a global cost function. This approach is known as Parallel MPC. Its potential disturbance-rejection capabilities are illustrated using a simulation example. The algorithm is implemented on a linearised model of a Boeing 747-200 under nominal flight conditions and with an induced wind disturbance. Under significant output disturbances, Parallel MPC provides a significant improvement in performance when compared to Multiplexed MPC (MMPC) and Linear Quadratic Synchronous MPC (SMPC). © 2013 IEEE.
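A toy sketch of the split-solve-select idea behind Parallel MPC is given below: the input horizon is divided into reduced-order subproblems, each is solved separately, and the candidate with the lowest global predicted cost is applied. The two-state plant, horizon and move subsets are made-up values, there are no constraints, and this is not the Boeing 747-200 study.

```python
import numpy as np

# Illustrative 2-state plant and MPC settings (unconstrained for simplicity).
A = np.array([[1.05, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
Q, R, N = np.eye(2), np.array([[0.1]]), 8

def predict_cost(x0, U):
    """Global cost used to select among the reduced-order candidates."""
    x, J = x0.copy(), 0.0
    for u in U:
        J += x @ Q @ x + R[0, 0] * u * u
        x = A @ x + B[:, 0] * u
    return J + x @ Q @ x

def solve_subproblem(x0, free_idx):
    """Optimise only the moves in free_idx (others fixed to zero) by least squares."""
    def stacked_states(U):
        xs, x = [], x0.copy()
        for u in U:
            x = A @ x + B[:, 0] * u
            xs.append(x)
        return np.concatenate(xs)
    base = stacked_states(np.zeros(N))
    cols = []
    for i in free_idx:
        e = np.zeros(N)
        e[i] = 1.0
        cols.append(stacked_states(e) - base)     # effect of each free move (linearity)
    M = np.column_stack(cols)
    n_free = len(free_idx)
    H = np.vstack([M, np.sqrt(R[0, 0]) * np.eye(n_free)])   # Q = I, so plain least squares
    d = np.concatenate([-base, np.zeros(n_free)])
    v = np.linalg.lstsq(H, d, rcond=None)[0]
    U = np.zeros(N)
    U[free_idx] = v
    return U

x = np.array([2.0, -1.0])
for t in range(20):
    # Two reduced-order subproblems, solvable "in parallel"; keep the better one.
    candidates = [solve_subproblem(x, [0, 1, 2, 3]), solve_subproblem(x, [0, 4, 5, 6])]
    U = min(candidates, key=lambda U: predict_cost(x, U))
    x = A @ x + B[:, 0] * U[0]            # apply only the first move (receding horizon)
print("state after 20 steps:", x)
```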
Abstract:
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
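A bootstrap particle filter for the stochastic volatility model mentioned above is easy to sketch; below, the score (derivative of the log-likelihood with respect to a parameter) is approximated by central differences with common random numbers. This is only a baseline, not the filter-derivative algorithm analysed in the paper, and all model parameters are illustrative.

```python
import numpy as np

# Stochastic volatility model: x_k = phi*x_{k-1} + sigma*v_k, y_k = beta*exp(x_k/2)*eps_k.
# Bootstrap particle filter log-likelihood; parameters below are illustrative.
def log_likelihood(y, phi, sigma, beta, n_particles=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = sigma / np.sqrt(1 - phi ** 2) * rng.standard_normal(n_particles)
    ll = 0.0
    for yk in y:
        x = phi * x + sigma * rng.standard_normal(n_particles)      # propagate
        var = beta ** 2 * np.exp(x)
        logw = -0.5 * (np.log(2 * np.pi * var) + yk ** 2 / var)     # observation weight
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                                   # likelihood increment
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]   # resample
    return ll

# Simulate data from the model, then approximate the score in phi by central
# differences with common random numbers (same particle-filter seed both times).
rng = np.random.default_rng(42)
phi0, sigma0, beta0, T = 0.95, 0.2, 0.7, 200
xt = sigma0 / np.sqrt(1 - phi0 ** 2) * rng.standard_normal()
y = np.empty(T)
for t in range(T):
    xt = phi0 * xt + sigma0 * rng.standard_normal()
    y[t] = beta0 * np.exp(xt / 2) * rng.standard_normal()

h = 1e-2
score = (log_likelihood(y, phi0 + h, sigma0, beta0) -
         log_likelihood(y, phi0 - h, sigma0, beta0)) / (2 * h)
print("finite-difference score w.r.t. phi:", score)
```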
Abstract:
Recent developments in modeling driver steering control with preview are reviewed. While some validation with experimental data has been presented, the rigorous application of formal system identification methods has not yet been attempted. This paper describes a steering controller based on linear model-predictive control. An indirect identification method that minimizes steering angle prediction error is developed. Special attention is given to filtering the prediction error so as to avoid identification bias that arises from the closed-loop operation of the driver-vehicle system. The identification procedure is applied to data collected from 14 test drivers performing double lane change maneuvers in an instrumented vehicle. It is found that the identification procedure successfully finds parameter values for the model that give small prediction errors. The procedure is also able to distinguish between the different steering strategies adopted by the test drivers. © 2006 IEEE.
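As a toy version of indirect identification by minimising steering-angle prediction error, the sketch below fits the gains of a simple preview steering law by least squares on synthetic data. The data generation, gain values and model structure are made up, and the prediction-error filtering used in the paper to avoid closed-loop bias is not reproduced.

```python
import numpy as np

# Toy preview steering law: delta_k = sum_j g_j * e_{k+j}, where e is the
# previewed lateral path error. Gains g_j are fitted by least squares to
# minimise the steering-angle prediction error.
rng = np.random.default_rng(3)
T, n_preview = 500, 10
true_gains = np.linspace(0.4, 0.05, n_preview)           # made-up "driver" gains

path_error = np.cumsum(0.05 * rng.standard_normal(T + n_preview))   # synthetic errors
E = np.column_stack([path_error[k:k + T] for k in range(n_preview)])
steer = E @ true_gains + 0.01 * rng.standard_normal(T)   # recorded steering + noise

# Least-squares fit of the preview gains from recorded (error, steering) data.
est_gains, *_ = np.linalg.lstsq(E, steer, rcond=None)
pred_err = steer - E @ est_gains
print("RMS steering prediction error:", np.sqrt(np.mean(pred_err ** 2)))
```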
Abstract:
A method is proposed for on-line reconfiguration of the terminal constraint used to provide theoretical nominal stability guarantees in linear model predictive control (MPC). Parameterising the terminal constraint avoids its complete reconstruction when input constraints are modified to accommodate faults. To enlarge the region of feasibility of the terminal control law for a certain class of input faults with redundantly actuated plants, the linear terminal controller is defined in terms of virtual commands. A suitable terminal cost weighting for the reconfigurable MPC is obtained by means of an upper bound on the cost over all feasible realisations of the virtual commands from the terminal controller. Conditions are proposed that guarantee feasibility recovery for a defined subset of faults. The proposed method is demonstrated by means of a numerical example. © 2013 Elsevier B.V. All rights reserved.
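A common way to make the terminal ingredients adjustable (shown here only as an illustration, not the paper's virtual-command parameterisation) is to keep a fixed terminal cost P and controller K from a Riccati equation and rescale an ellipsoidal terminal set {x : x'Px <= alpha} so that the terminal law respects a possibly reduced input limit; the peak of |Kx| over that set has the closed form sqrt(alpha * K P^-1 K').

```python
import numpy as np

# Terminal controller u = -Kx and cost matrix P from the LQR Riccati equation;
# terminal set is the ellipsoid {x : x'Px <= alpha}. The plant and limits below
# are illustrative.
A = np.array([[1.1, 0.2], [0.0, 0.95]])
B = np.array([[0.1], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P = Q.copy()
for _ in range(2000):                         # discrete Riccati fixed point
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K

def max_alpha(u_limit):
    """Largest alpha such that |Kx| <= u_limit for all x with x'Px <= alpha."""
    peak_per_unit_alpha = (K @ np.linalg.solve(P, K.T)).item()   # K P^-1 K'
    return u_limit ** 2 / peak_per_unit_alpha

print("alpha for |u| <= 1.0 :", max_alpha(1.0))
print("alpha after fault, |u| <= 0.4 :", max_alpha(0.4))   # shrunk terminal set
```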
Abstract:
We present methods for fixed-lag smoothing using Sequential Importance Sampling (SIS) on a discrete non-linear, non-Gaussian state-space system with unknown parameters. Our particular application is in the field of digital communication systems. Each input data point is taken from a finite set of symbols. We represent the transmission medium as a fixed filter with a finite impulse response (FIR), so a discrete state-space system is formed. Conventional Markov chain Monte Carlo (MCMC) techniques such as the Gibbs sampler are unsuitable for this task because they can only process a batch of data. Data arrives sequentially, so it is sensible to process it in this way. In addition, many communication systems are interactive, so there is a maximum level of latency that can be tolerated before a symbol is decoded. We demonstrate this method by simulation and compare its performance to existing techniques.
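The sketch below gives a minimal sequential importance sampling (with resampling) detector for the setting described: BPSK symbols through a known FIR channel with additive Gaussian noise, and fixed-lag decisions. The channel taps, noise level, lag and particle count are illustrative, and the proposal is simply the symbol prior, so this is a simplified stand-in for the paper's method.

```python
import numpy as np

# BPSK symbols s_k in {-1,+1} pass through a known FIR channel h with additive
# Gaussian noise; decisions on s_{k-L} are made with a fixed lag L.
rng = np.random.default_rng(7)
h = np.array([1.0, 0.5, 0.2])            # known FIR channel (assumed)
sigma, T, L, n_particles = 0.4, 200, 3, 500

# Generate synthetic data.
s = rng.choice([-1.0, 1.0], size=T)
s_pad = np.concatenate([np.zeros(len(h) - 1), s])
y = np.array([h @ s_pad[k:k + len(h)][::-1] for k in range(T)]) + sigma * rng.standard_normal(T)

# Particles are symbol histories; each step proposes the new symbol from the prior.
particles = np.zeros((n_particles, T))
logw = np.zeros(n_particles)
decisions = np.zeros(T)
for k in range(T):
    particles[:, k] = rng.choice([-1.0, 1.0], size=n_particles)   # prior proposal
    start = max(0, k - len(h) + 1)
    recent = particles[:, start:k + 1][:, ::-1]                   # newest symbol first
    y_hat = recent @ h[:k - start + 1]
    logw += -0.5 * ((y[k] - y_hat) / sigma) ** 2                  # likelihood update
    w = np.exp(logw - logw.max())
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < n_particles / 2:                    # resample if ESS low
        idx = rng.choice(n_particles, n_particles, p=w)
        particles, logw = particles[idx], np.zeros(n_particles)
        w = np.full(n_particles, 1.0 / n_particles)
    if k >= L:                                                    # fixed-lag decision
        decisions[k - L] = np.sign(w @ particles[:, k - L])
print("symbol error rate (first T-L symbols):",
      np.mean(decisions[:T - L] != s[:T - L]))
```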