942 results for State Extension Problem
Abstract:
Optimal state estimation is a method that requires minimising a weighted, nonlinear, least-squares objective function in order to obtain the best estimate of the current state of a dynamical system. The minimisation is often non-trivial due to the large scale of the problem, the relative sparsity of the observations and the nonlinearity of the objective function. To simplify the problem, the solution is often found via a sequence of linearised objective functions. The condition number of the Hessian of the linearised problem is an important indicator of the convergence rate of the minimisation and the expected accuracy of the solution. In the standard formulation the convergence is slow, indicating an ill-conditioned objective function. A transformation to different variables is often used to improve the conditioning by changing, or preconditioning, the Hessian. The literature offers only sparse information describing the causes of ill-conditioning of the optimal state estimation problem and explaining the effect of preconditioning on the condition number. This paper derives descriptive theoretical bounds on the condition number of both the unpreconditioned and the preconditioned system in order to better understand the conditioning of the problem. We use these bounds to explain why the standard objective function is often ill-conditioned and why a standard preconditioning reduces the condition number. We also use the bounds on the preconditioned Hessian to understand the main factors that affect the conditioning of the system. We illustrate the results with simple numerical experiments.
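A minimal numerical sketch of the effect described above, assuming a toy variational Hessian S = B⁻¹ + HᵀR⁻¹H with a Gaussian background covariance B, a sparse observation operator H, and a diagonal R (all illustrative choices, not the paper's experiments); the square-root transform U with B = UUᵀ plays the role of a control variable transform of the kind commonly used as a standard preconditioning:

```python
import numpy as np

# Toy variational state estimation Hessian S = B^{-1} + H^T R^{-1} H (assumed setup).
n, sigma_b, length = 40, 1.0, 5.0               # state size, background std, correlation length
idx = np.arange(n)
B = sigma_b**2 * np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / length)**2)
B += 1e-8 * np.eye(n)                           # small regularisation so B is invertible
H = np.eye(n)[::4]                              # observe every 4th state component
R = 0.1**2 * np.eye(H.shape[0])                 # diagonal observation error covariance

S = np.linalg.inv(B) + H.T @ np.linalg.solve(R, H)
print("condition number, unpreconditioned:", np.linalg.cond(S))

# Preconditioning with U such that B = U U^T:  S_pc = I + U^T H^T R^{-1} H U.
U = np.linalg.cholesky(B)
S_pc = np.eye(n) + U.T @ H.T @ np.linalg.solve(R, H @ U)
print("condition number, preconditioned:  ", np.linalg.cond(S_pc))
```

In this toy setting the unpreconditioned condition number inherits the ill-conditioning of B, while the preconditioned one is bounded by one plus the largest eigenvalue of the observation term, which is why the transformed problem converges much faster.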
Abstract:
Cover title.
Abstract:
"December 1, 1983."
Abstract:
Dec. 1960 incorrectly marked v. 22; v. 22 omitted in vol. enumeration.
Abstract:
We discuss the problem of determining whether the state of several quantum mechanical subsystems is entangled. As in previous work on two subsystems, we introduce a procedure for checking separability that is based on finding state extensions with appropriate properties and that may be implemented as a semidefinite program. The main result of this work is to show that there is a series of tests of this kind such that, if a multiparty state is entangled, this will eventually be detected by one of the tests. The procedure also provides a means of constructing entanglement witnesses that could in principle be measured in order to demonstrate that the state is entangled.
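For orientation, here is a minimal sketch of the simplest check of this general kind, the positive-partial-transpose (PPT) test, applied to an assumed two-qubit Werner state; the full hierarchy of state-extension tests described above would be posed as semidefinite programs over extended density matrices, which this NumPy-only sketch does not attempt:

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Partial transpose over the second subsystem of a bipartite density matrix."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

# Two-qubit Werner state p*|psi-><psi-| + (1-p)*I/4, entangled for p > 1/3 (assumed example).
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
p = 0.5
rho = p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

# A negative eigenvalue of the partial transpose witnesses entanglement
# (here the minimum is about -0.125 for p = 0.5).
min_eig = np.linalg.eigvalsh(partial_transpose(rho)).min()
print("min eigenvalue of partial transpose:", min_eig)
print("entanglement detected:", bool(min_eig < -1e-12))
```

In larger systems a state can pass the PPT test and still be entangled, which is why a complete series of state-extension tests of the kind described in the abstract is needed.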
Abstract:
In the MPC literature, stability is usually assured under the assumption that the state is measured. Since the closed-loop system may be nonlinear because of the constraints, it is not possible to apply the separation principle to prove global stability in the output feedback case. It is well known that a nonlinear closed-loop system, with the state estimated by an exponentially converging observer combined with a state feedback controller, can be unstable even when the state feedback controller itself is stabilizing. One alternative to overcome the state estimation problem is to adopt a non-minimal state space model, in which the states are represented by measured past inputs and outputs [P.C. Young, M.A. Behzadi, C.L. Wang, A. Chotai, Direct digital and adaptive control by input-output, state variable feedback pole assignment, International Journal of Control 46 (1987) 1867-1881; C. Wang, P.C. Young, Direct digital control by input-output, state variable feedback: theoretical background, International Journal of Control 47 (1988) 97-109]. In this case, no observer is needed since the state variables are directly measured. However, an important disadvantage of this approach is that the realigned model is not of minimal order, which makes the infinite horizon approach to obtaining nominal stability difficult to apply. Here, we propose a method to properly formulate an infinite horizon MPC based on the output-realigned model, which avoids the use of an observer and guarantees closed-loop stability. The simulation results show that, besides providing closed-loop stability for systems with integrating and stable modes, the proposed controller may perform better than MPC controllers that use an observer to estimate the current states. (C) 2008 Elsevier Ltd. All rights reserved.
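A minimal sketch of the realigned (non-minimal) state-space idea for a SISO ARX model, with the state built only from measured past outputs and inputs so that no observer is needed; the coefficients below are illustrative assumptions and the paper's infinite-horizon MPC formulation is not reproduced:

```python
import numpy as np

# Realigned (non-minimal) state-space model for the SISO ARX process (assumed coefficients)
#   y[k+1] = a1*y[k] + a2*y[k-1] + b1*u[k] + b2*u[k-1],
# with state x[k] = [y[k], y[k-1], u[k-1]]^T built only from measured signals.
a1, a2, b1, b2 = 1.4, -0.45, 0.5, 0.3

A = np.array([[a1, a2, b2],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
B = np.array([[b1], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])

# Check the realigned model against the ARX recursion on a random input sequence.
rng = np.random.default_rng(0)
u = rng.standard_normal(50)
x = np.zeros((3, 1))
y_prev, u_prev = [0.0, 0.0], 0.0
max_err = 0.0
for uk in u:
    x = A @ x + B * uk                                   # realigned state update
    y_ss = (C @ x).item()
    y_arx = a1 * y_prev[0] + a2 * y_prev[1] + b1 * uk + b2 * u_prev
    max_err = max(max_err, abs(y_ss - y_arx))
    y_prev, u_prev = [y_arx, y_prev[0]], uk
print("max |realigned model - ARX| =", max_err)          # ~ machine precision
```

The check confirms that the realigned model reproduces the measured input-output behaviour exactly, which is why no observer is needed; the price, as noted above, is the non-minimal order (three states for a second-order process).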
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied in many different application areas such as global positioning systems, target tracking, navigation, brain imaging, spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in a closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter heavily depends on the chosen importance distribution. For instance, an inappropriate choice of the importance distribution can lead to the failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is Gaussian. In this kind of proposal, the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
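A minimal bootstrap particle filter sketch, using the dynamic model as the importance distribution, for a commonly used scalar nonlinear benchmark model (an illustrative assumption, not necessarily the models treated in the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative scalar nonlinear, non-Gaussian benchmark (assumed for the sketch):
#   x[k] = 0.5*x[k-1] + 25*x[k-1]/(1 + x[k-1]^2) + 8*cos(1.2*k) + process noise
#   y[k] = x[k]^2 / 20 + measurement noise
T, N = 50, 500                               # time steps, particles
q_std, r_std = 1.0, 1.0

def f(x, k):
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)

x_true, y = np.zeros(T), np.zeros(T)
for k in range(1, T):
    x_true[k] = f(x_true[k - 1], k) + q_std * rng.standard_normal()
    y[k] = x_true[k]**2 / 20 + r_std * rng.standard_normal()

# Bootstrap particle filter: the dynamic model is the importance distribution.
particles = rng.standard_normal(N)
est = np.zeros(T)
for k in range(1, T):
    particles = f(particles, k) + q_std * rng.standard_normal(N)   # propagate
    logw = -0.5 * ((y[k] - particles**2 / 20) / r_std)**2          # Gaussian likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[k] = np.sum(w * particles)                                 # filtering mean
    particles = rng.choice(particles, size=N, p=w, replace=True)   # resample

print("filtering RMSE:", np.sqrt(np.mean((est[1:] - x_true[1:])**2)))
```

The choice of importance distribution, whose effect on convergence the abstract highlights, enters only through the propagation and weighting lines; replacing the prior with a better-informed proposal changes exactly those two steps.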
Abstract:
A novel optimising controller is designed that leads a slow process from a sub-optimal operational condition to the steady-state optimum in a continuous way, based on dynamic information. Using standard results from optimisation theory and discrete optimal control, the solution of a steady-state optimisation problem is achieved by solving a receding-horizon optimal control problem which uses derivative and state information from the plant via a shadow model and a state-space identifier. The paper analyses the steady-state optimality of the procedure, develops algorithms with and without control-rate constraints, and applies the procedure to a high-fidelity simulation study of distillation column optimisation.
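A minimal receding-horizon sketch of the idea, assuming a small stable linear plant, a quadratic steady-state (economic) objective, and a generic NLP solver; the shadow model, state-space identifier and control-rate constraints of the paper are not represented:

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Assumed small stable plant x[k+1] = A x[k] + B u[k] and a quadratic steady-state objective.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([0.0, 0.5])

def stage_cost(x, u):
    return (x[0] - 1.0)**2 + 0.1 * (u - 0.5)**2          # illustrative economic objective

def horizon_cost(u_seq, x0):
    x, J = x0.copy(), 0.0
    for u in u_seq:
        x = A @ x + B * u
        J += stage_cost(x, u)
    return J

horizon, x = 15, np.zeros(2)
u_guess = np.zeros(horizon)
for k in range(30):                                      # closed loop
    res = minimize(horizon_cost, u_guess, args=(x,), method="L-BFGS-B")
    u_now = res.x[0]                                     # apply only the first move
    x = A @ x + B * u_now
    u_guess = np.r_[res.x[1:], res.x[-1]]                # warm start the next solve

# Direct steady-state optimum for comparison: x_ss(u) = (I - A)^{-1} B u.
ss_cost = lambda u: stage_cost(np.linalg.solve(np.eye(2) - A, B * u), u)
u_opt = minimize_scalar(ss_cost, bounds=(-5.0, 5.0), method="bounded").x
print("closed-loop input:", u_now, "  steady-state optimal input:", u_opt)
```

Applying only the first move of each finite-horizon solution and re-solving from the newly measured state is what lets such a scheme approach the steady-state optimum continuously while using only dynamic information.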
Abstract:
The problem of state estimation occurs in many applications of fluid flow. For example, to produce a reliable weather forecast it is essential to find the best possible estimate of the true state of the atmosphere. To find this best estimate, a nonlinear least-squares problem has to be solved subject to dynamical system constraints. Usually this is solved iteratively by an approximate Gauss–Newton method in which the underlying discrete linear system is in general unstable. In this paper we propose a new method for deriving low-order approximations to the problem, based on a recently developed model reduction method for unstable systems. To illustrate the theoretical results, numerical experiments are performed using a two-dimensional Eady model – a simple model of baroclinic instability, which is the dominant mechanism for the growth of storms at mid-latitudes. It is a suitable test model to show the benefit that may be obtained by using model reduction techniques to approximate unstable systems within the state estimation problem.
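A minimal sketch of the (approximate) Gauss–Newton iteration itself, applied to an unconstrained exponential-fit problem chosen purely for illustration; the paper's problem additionally carries dynamical-system constraints and is approximated with reduced-order models of unstable systems, none of which appears here:

```python
import numpy as np

# Gauss-Newton for the toy nonlinear least-squares fit y_i ~ a*exp(b*t_i) (assumed problem):
# at each iteration the residual is linearised and a linear least-squares problem is solved.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0, 20)
a_true, b_true = 2.0, -1.3
y = a_true * np.exp(b_true * t) + 0.01 * rng.standard_normal(t.size)

x = np.array([1.0, 0.0])                          # initial guess for (a, b)
for it in range(8):
    a, b = x
    r = y - a * np.exp(b * t)                     # residual of the nonlinear problem
    J = np.column_stack([np.exp(b * t),           # d model / d a
                         a * t * np.exp(b * t)])  # d model / d b
    dx = np.linalg.lstsq(J, r, rcond=None)[0]     # linearised inner problem
    x = x + dx

print("Gauss-Newton estimate (a, b):", x, "  truth:", (a_true, b_true))
```

Each outer iteration solves a linearised least-squares problem; in the state estimation setting this inner problem contains the dynamical model, and it is there that a reduced-order approximation can be substituted.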
Abstract:
The problem of approximately parameterizing algebraic curves and surfaces is an active research field, with many implications for practical applications. The problem can be treated locally or globally. We formally state the problem in its global version for the case of algebraic curves (planar or spatial), and we report on some algorithms that approach it, as well as on the associated error distance analysis.
Abstract:
Vol. 1. Hearings, January 3 to February 21, 1919. -- v. 2. Appendix. Hearings before the Joint Subcommittee on Interstate and Foreign Commerce, November 20, 1916 to May 9, 1917. -- v. 3. Appendix. Hearing before the Joint Subcommittee on Interstate and Foreign Commerce, November 1, 1917 to December 19, 1917.
Abstract:
Poster presented at the 28th GI/ITG International Conference on Architecture of Computing Systems (ARCS 2015), 24 to 26 March 2015, Porto, Portugal.
Abstract:
Presented at the seminar "ACTION TEMPS RÉEL: INFRASTRUCTURES ET SERVICES SYSTÈMES", 10 April 2015, Brussels, Belgium.
Abstract:
Towards an operative analysis of public policies: An approach focused on actors, resources and institutions. This article develops an analytical model centred on the individual and collective behaviour of the actors involved at the different stages of public policy. We postulate that the content and institutional characteristics of public action (the dependent variable) are the result of interactions between political-administrative authorities, on the one hand, and, on the other, the social groups that cause or suffer the negative effects of the collective problem which public action attempts to resolve (the independent variables). The 'game' of the actors depends not only on their particular interests, but also on the resources (money, time, consensus, organization, rights, infrastructure, information, personnel, strength, political support) which they are able to exploit to defend their positions, as well as on the institutional rules which frame these policy games.