81 results for Time-Optimal Control
Abstract:
The real-time quality control (RTQC) methods applied to Argo profiling float data by the United Kingdom (UK) Met Office, the United States (US) Fleet Numerical Meteorology and Oceanography Centre, the Australian Bureau of Meteorology and the Coriolis Centre are compared and contrasted. Data are taken from the period 2007 to 2011 inclusive and RTQC performance is assessed with respect to Argo delayed-mode quality control (DMQC). An intercomparison of RTQC techniques is performed using a common data set of profiles from 2010 and 2011. The RTQC systems are found to have similar power in identifying faulty Argo profiles but to vary widely in the number of good profiles incorrectly rejected. The efficacy of individual QC tests is inferred from the results of the intercomparison. Techniques to increase QC performance are discussed.
Abstract:
A novel algorithm for solving nonlinear discrete time optimal control problems with model-reality differences is presented. The technique uses dynamic integrated system optimisation and parameter estimation (DISOPE) which achieves the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimisation procedure. A new method for approximating some Jacobian trajectories required by the algorithm is introduced. It is shown that the iterative procedure associated with the algorithm naturally suits applications to batch chemical processes.
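The model-reality correction idea can be illustrated with a deliberately simplified static analogue. This is a sketch only: the cost functions, modifier, and relaxed update below are invented for illustration and are not the paper's DISOPE algorithm, which handles full dynamic problems with Jacobian trajectories.

```python
# Static analogue of model-reality iteration (a sketch, not DISOPE itself):
# the model cost is wrong, but a gradient-matching modifier steers the
# model-based optimisation to the real optimum.

def J_real_grad(u):          # gradient of the real cost (u - 3)^2, unknown to the model
    return 2.0 * (u - 3.0)

def J_model_grad(u):         # gradient of the deficient model cost (u - 1)^2
    return 2.0 * (u - 1.0)

u = 0.0                      # initial operating point
for _ in range(50):
    lam = J_model_grad(u) - J_real_grad(u)   # modifier: match model and plant gradients at u
    # modified model problem: min_u J_model(u) - lam*u, i.e. J_model_grad(u_new) = lam
    u_new = 1.0 + 0.5 * lam                  # closed form for this quadratic model
    u = u + 0.5 * (u_new - u)                # relaxed update for stability

# At a fixed point the modifier forces J_real_grad(u) = 0, so u -> 3,
# the true optimum, despite the wrong model.
assert abs(u - 3.0) < 1e-6
```

At convergence the modified model problem's stationarity condition coincides with that of the real problem, which is the "correct optimal solution in spite of deficiencies in the model" property in miniature.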
Abstract:
This note investigates the motion control of an autonomous underwater vehicle (AUV). The AUV is modeled as a nonholonomic system as any lateral motion of a conventional, slender AUV is quickly damped out. The problem is formulated as an optimal kinematic control problem on the Euclidean Group of Motions SE(3), where the cost function to be minimized is equal to the integral of a quadratic function of the velocity components. An application of the Maximum Principle to this optimal control problem yields the appropriate Hamiltonian and the corresponding vector fields give the necessary conditions for optimality. For a special case of the cost function, the necessary conditions for optimality can be characterized more easily and we proceed to investigate their solutions. Finally, it is shown that a particular set of optimal motions trace helical paths. Throughout this note we highlight a particular case where the quadratic cost function is weighted in such a way that it equates to the Lagrangian (kinetic energy) of the AUV. For this case, the regular extremal curves are constrained to equate to the AUV's components of momentum and the resulting vector fields are the d'Alembert-Lagrange equations in Hamiltonian form.
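The Maximum Principle step described above can be sketched in illustrative notation (the weights c_i and the sign conventions below are assumptions; references differ on both):

```latex
% Quadratic kinematic cost over the six velocity components of SE(3),
% and its Hamiltonian lift (illustrative notation):
J = \frac{1}{2}\int_0^T \sum_{i=1}^{6} c_i \, v_i^2 \, dt,
\qquad
H = \sum_{i=1}^{6} p_i v_i - \frac{1}{2}\sum_{i=1}^{6} c_i v_i^2 .

% Maximising H over the velocities gives the extremal controls
\frac{\partial H}{\partial v_i} = p_i - c_i v_i = 0
\quad\Longrightarrow\quad
v_i^{*} = \frac{p_i}{c_i},

% so along regular extremals H reduces to a quadratic in the momenta:
H^{*} = \frac{1}{2}\sum_{i=1}^{6} \frac{p_i^2}{c_i}.
```

When the weights c_i are chosen as the AUV's inertia parameters, the cost is the kinetic energy and the p_i play the role of the momentum components, which is the special case the note highlights.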
Abstract:
The relationship between minimum variance and minimum expected quadratic loss feedback controllers for linear univariate discrete-time stochastic systems is reviewed by taking the approach used by Caines. It is shown how the two methods can be regarded as providing identical control actions as long as a noise-free measurement state-space model is employed.
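The minimum variance idea referenced above can be exercised numerically. The sketch below uses a scalar ARMAX plant with invented coefficients and the standard textbook control law, not necessarily the formulation in this paper: in closed loop the output collapses to the innovation sequence, so its variance equals the noise variance.

```python
import numpy as np

# Minimum variance control of the scalar ARMAX plant
#   y[t+1] = -a*y[t] + b*u[t] + e[t+1] + c*e[t]
# Zeroing the one-step-ahead prediction (using y[t] = e[t] in closed loop)
# gives u[t] = (a - c)/b * y[t].

a, b, c = 0.8, 0.5, 0.3
rng = np.random.default_rng(1)
N = 50_000
e = rng.standard_normal(N + 1)          # unit-variance innovations

y = np.empty(N + 1)
y[0] = e[0]
for t in range(N):
    u = (a - c) / b * y[t]              # minimum variance law
    y[t + 1] = -a * y[t] + b * u + e[t + 1] + c * e[t]

# In closed loop the output equals the innovation, y[t] == e[t],
# so the output variance is the noise variance (1 here).
assert np.allclose(y, e)
```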
Abstract:
DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. These model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was developed originally in continuous time and later extended to discrete time. The main property of the procedure is that by iterating on appropriately modified model-based problems the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding horizon optimal control computation in nonlinear model predictive control.
Abstract:
In this paper, a discrete-time dynamic integrated system optimisation and parameter estimation algorithm is applied to the solution of the nonlinear tracking optimal control problem. A version of the algorithm with a linear-quadratic model-based problem is developed and implemented in software. The implemented algorithm is tested on simulation examples.
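A linear-quadratic tracking subproblem of the kind mentioned above can be sketched as a dense least-squares solve. The scalar system, weights, and reference below are invented for illustration; this is a stand-in for the model-based problem, not the paper's algorithm.

```python
import numpy as np

# LQ tracking: dynamics x[k+1] = A*x[k] + B*u[k],
# cost J(u) = sum_k q*(x[k]-r[k])^2 + R*u[k]^2, solved by stacking
# the dynamics into one quadratic programme in the input sequence u.

A, B, q, R = 0.9, 0.4, 1.0, 0.1
N, x0 = 25, 2.0
r = np.ones(N)                       # reference for x[1..N]

# Stack the dynamics: x[1..N] = G @ u + f, with G lower triangular.
G = np.zeros((N, N))
for k in range(N):
    for j in range(k + 1):
        G[k, j] = A ** (k - j) * B
f = np.array([A ** (k + 1) * x0 for k in range(N)])

# Normal equations of  J(u) = q*||G u + f - r||^2 + R*||u||^2.
u = np.linalg.solve(q * G.T @ G + R * np.eye(N), q * G.T @ (r - f))

def cost(u):
    x = G @ u + f
    return q * np.sum((x - r) ** 2) + R * np.sum(u ** 2)

# The solution beats both doing nothing and a perturbed input sequence.
assert cost(u) <= cost(np.zeros(N))
assert cost(u) <= cost(u + 0.01 * np.ones(N))
```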
Abstract:
This paper considers left-invariant control systems defined on the Lie groups SU(2) and SO(3). Such systems have a number of applications in both classical and quantum control problems. The purpose of this paper is two-fold. Firstly, the optimal control problem for a system varying on these Lie groups, with a cost that is quadratic in the control, is lifted to their Hamiltonian vector fields through the Maximum Principle of optimal control and explicitly solved. Secondly, the control systems are integrated down to the level of the group to give the solutions for the optimal paths corresponding to the optimal controls. In addition, it is shown that integrating these equations on the Lie algebra su(2) gives simpler solutions than when they are integrated on the Lie algebra so(3).
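For the SO(3) case, the lifted Hamiltonian system reduces along regular extremals to Lie-Poisson (Euler-top-like) equations in the momenta. The sketch below uses illustrative diagonal weights c_i; sign and basis conventions vary between references.

```latex
% Reduced Hamiltonian and extremal controls on so(3)* (illustrative):
H(p) = \frac{1}{2}\left(\frac{p_1^2}{c_1} + \frac{p_2^2}{c_2}
       + \frac{p_3^2}{c_3}\right),
\qquad
u_i^{*} = \frac{p_i}{c_i},

% Lie-Poisson equations \dot p = p \times \nabla H(p):
\dot p_1 = \left(\frac{1}{c_3} - \frac{1}{c_2}\right) p_2 p_3,\quad
\dot p_2 = \left(\frac{1}{c_1} - \frac{1}{c_3}\right) p_3 p_1,\quad
\dot p_3 = \left(\frac{1}{c_2} - \frac{1}{c_1}\right) p_1 p_2 .
```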
Abstract:
A simple parameter adaptive controller design methodology is introduced in which steady-state servo tracking properties provide the major control objective. This is achieved without cancellation of process zeros and hence the underlying design can be applied to non-minimum phase systems. As with other self-tuning algorithms, the design (user specified) polynomials of the proposed algorithm define the performance capabilities of the resulting controller. However, with the appropriate definition of these polynomials, the synthesis technique can be shown to admit different adaptive control strategies, e.g. self-tuning PID and self-tuning pole-placement controllers. The algorithm can therefore be thought of as an embodiment of other self-tuning design techniques. The performance of some of the resulting controllers is illustrated using simulation examples and the on-line application to an experimental apparatus.
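The estimator at the heart of most self-tuning schemes like the one above is recursive least squares (RLS). A minimal sketch, assuming a first-order plant with invented coefficients (not the paper's apparatus or design polynomials):

```python
import numpy as np

# Recursive least squares: estimate a, b in y[t] = a*y[t-1] + b*u[t-1] + e[t]
# from input-output data, as a self-tuner would do on-line.

a_true, b_true = 0.7, 1.2
rng = np.random.default_rng(0)
N = 2000
u = rng.standard_normal(N)                   # persistently exciting input
y = np.zeros(N)
for t in range(1, N):
    y[t] = a_true * y[t-1] + b_true * u[t-1] + 0.05 * rng.standard_normal()

theta = np.zeros(2)                          # [a_hat, b_hat]
P = 1000.0 * np.eye(2)                       # large initial covariance
for t in range(1, N):
    phi = np.array([y[t-1], u[t-1]])         # regressor
    k = P @ phi / (1.0 + phi @ P @ phi)      # estimator gain
    theta = theta + k * (y[t] - phi @ theta) # innovation update
    P = P - np.outer(k, phi @ P)             # covariance update

assert abs(theta[0] - a_true) < 0.05
assert abs(theta[1] - b_true) < 0.05
```

In a full self-tuner the controller polynomials would be recomputed from `theta` at each step; here only the estimation layer is shown.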
Abstract:
This paper considers the use of a discrete-time deadbeat control action on systems affected by noise. Variations on the standard controller form are discussed and comparisons are made with controllers in which noise rejection is a higher priority objective. Both load and random disturbances are considered in the system description, although the aim of the deadbeat design remains the tailoring of reference input variations. Finally, the use of such a deadbeat action within a self-tuning control framework is shown to satisfy, under certain conditions, the self-tuning property, though generally only when an extended form of least-squares estimation is incorporated.
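A noise-free sketch of the deadbeat idea for a first-order plant (the plant numbers are invented): the control law places the closed-loop pole at the origin, so a reference change is followed in exactly one step. This is the baseline behaviour the paper's noise-affected variations start from.

```python
# Deadbeat control of y[t+1] = a*y[t] + b*u[t]:
# u[t] = (r - a*y[t]) / b forces y[t+1] = r exactly (no noise).

a, b, r = 0.85, 0.5, 2.0
y = 0.0
history = []
for t in range(6):
    u = (r - a * y) / b       # deadbeat action
    y = a * y + b * u         # noise-free plant update
    history.append(y)

# The output reaches the reference after a single step and stays there.
assert all(abs(v - r) < 1e-9 for v in history)
```

With noise in the loop this aggressive pole placement passes disturbances straight through, which is why the paper contrasts it with designs that prioritise noise rejection.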
Abstract:
A self-tuning controller which automatically assigns weightings to control and set-point following is introduced. This discrete-time single-input single-output controller is based on a generalized minimum-variance control strategy. The automatic on-line selection of weightings is very convenient, especially when the system parameters are unknown or slowly varying with respect to time, which is generally considered to be the type of systems for which self-tuning control is useful. This feature also enables the controller to overcome difficulties with non-minimum phase systems.
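The role of the control weighting can be sketched on a toy plant. The coefficients are invented, and the law below is the standard one-step generalised minimum variance solution, not necessarily this paper's automatic weighting scheme: a nonzero weight trades a slightly larger output variance for a much smaller control variance.

```python
import numpy as np

# Plant: y[t+1] = -a*y[t] + b*u[t] + e[t+1]
# GMV law: minimise y_hat[t+1]^2 + lam*u[t]^2  =>  u[t] = a*b*y[t]/(b^2+lam)

a, b = 0.7, 1.0
rng = np.random.default_rng(2)
e = rng.standard_normal(100_000)

def simulate(lam):
    y, ys, us = 0.0, [], []
    for t in range(len(e) - 1):
        u = a * b * y / (b * b + lam)
        y = -a * y + b * u + e[t + 1]
        ys.append(y); us.append(u)
    return np.var(ys), np.var(us)

vy0, vu0 = simulate(0.0)     # pure minimum variance
vy1, vu1 = simulate(1.0)     # weighted (generalised) law

assert vu1 < vu0             # far less control effort ...
assert vy1 > vy0             # ... at a modest cost in output variance
```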
Abstract:
In industrial practice, constrained steady-state optimisation and predictive control are separate, albeit closely related functions within the control hierarchy. This paper presents a method which integrates predictive control with on-line optimisation with economic objectives. A receding horizon optimal control problem is formulated using linear state space models. This optimal control problem is very similar to the one presented in many predictive control formulations, but the main difference is that it includes in its formulation a general steady-state objective depending on the magnitudes of manipulated and measured output variables. This steady-state objective may include the standard quadratic regulatory objective, together with economic objectives which are often linear. Assuming that the system settles to a steady-state operating point under receding horizon control, conditions are given for the satisfaction of the necessary optimality conditions of the steady-state optimisation problem. The method is based on adaptive linear state space models, which are obtained by using on-line identification techniques. The use of model adaptation is justified from a theoretical standpoint and its beneficial effects are shown in simulations. The method is tested with simulations of an industrial distillation column and a system of chemical reactors.
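The economic steady-state layer can be sketched as a scalar target calculation. The model, prices, and regulariser below are invented for illustration; the paper's formulation embeds such an objective inside the receding horizon problem rather than solving it separately.

```python
# Steady-state target calculation for x[k+1] = a*x[k] + b*u[k]:
# at steady state x_s = b/(1-a) * u_s = g*u_s.
# Economic objective (input cost minus product value, plus a quadratic
# regulariser): J(u) = c_u*u - c_y*g*u + 0.5*rho*u^2.

a, b = 0.8, 0.5
c_u, c_y, rho = 1.0, 3.0, 2.0

g = b / (1.0 - a)                   # steady-state gain
u_star = (c_y * g - c_u) / rho      # stationarity: c_u - c_y*g + rho*u = 0

def J(u):
    return c_u * u - c_y * g * u + 0.5 * rho * u ** 2

# Sanity check against a brute-force search over candidate inputs.
candidates = [i / 100.0 for i in range(-500, 501)]
u_grid = min(candidates, key=J)
assert abs(u_star - u_grid) < 0.01
```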
Abstract:
A novel optimising controller is designed that leads a slow process from a sub-optimal operational condition to the steady-state optimum in a continuous way based on dynamic information. Using standard results from optimisation theory and discrete optimal control, the solution of a steady-state optimisation problem is achieved by solving a receding-horizon optimal control problem which uses derivative and state information from the plant via a shadow model and a state-space identifier. The paper analyses the steady-state optimality of the procedure, develops algorithms with and without control rate constraints and applies the procedure to a high fidelity simulation study of a distillation column optimisation.
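A drastically simplified sketch of the derivative-driven idea (not the paper's shadow-model scheme; the plant map and objective are invented): the controller sees only sampled plant responses, estimates the objective's gradient by probing, and walks the operating point to the steady-state optimum.

```python
# Steady-state optimisation using only plant-derived derivative information.

def plant_steady_state(u):           # unknown to the controller
    return 4.0 * u - u ** 2          # steady-state output y(u)

def profit(u, y):
    return y - 0.5 * u               # economic objective to maximise

u, h = 0.1, 1e-3                     # operating point, probe size
for _ in range(200):
    # derivative information obtained from the plant via small probes
    y_plus = plant_steady_state(u + h)
    y_minus = plant_steady_state(u - h)
    dJ_du = (profit(u + h, y_plus) - profit(u - h, y_minus)) / (2 * h)
    u += 0.1 * dJ_du                 # small gradient-ascent move

# Analytic optimum of y(u) - 0.5*u = 3.5*u - u^2 is at u = 1.75.
assert abs(u - 1.75) < 1e-3
```

The small step size mirrors the "continuous way" of the paper: the process is never driven far from its current operating condition while it converges.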