989 results for stochastic control
Abstract:
The paper develops a method to solve higher-dimensional stochastic control problems in continuous time. A finite-difference-type approximation scheme is used on a coarse grid of low-discrepancy points, while the value function at intermediate points is obtained by regression. The stability properties of the method are discussed, and applications are given to test problems of up to 10 dimensions. Accurate solutions to these problems can be obtained on a personal computer.
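A minimal sketch of the regression step described above, assuming a Sobol low-discrepancy grid, placeholder grid values standing in for the finite-difference approximation, and a simple quadratic regression basis; the dimension, grid size and basis are illustrative choices, not the paper's.

```python
# Sketch: regression-based evaluation of a value function known on a coarse
# low-discrepancy grid (illustrative dimension, grid size and basis).
import numpy as np
from scipy.stats import qmc

d = 10                                   # problem dimension (paper treats up to 10)
rng = np.random.default_rng(0)

# Coarse grid of low-discrepancy (Sobol) points in [0, 1]^d.
X = qmc.Sobol(d=d, scramble=True, seed=0).random(512)

# Placeholder grid values; in the method these would come from the
# finite-difference-type approximation of the control problem on the grid.
V = np.sum(X**2, axis=1) + 0.1 * rng.standard_normal(len(X))

def features(x):
    # Simple constant + linear + quadratic basis for the regression.
    return np.hstack([np.ones((len(x), 1)), x, x**2])

coef, *_ = np.linalg.lstsq(features(X), V, rcond=None)

# Value estimates at intermediate (off-grid) states via the fitted regression.
X_new = rng.random((5, d))
print(features(X_new) @ coef)
```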
Abstract:
This paper is concerned with cost efficiency in achieving the Swedish national air quality objectives under uncertainty. To realize an ecologically sustainable society, the parliament has approved a set of interim and long-term pollution reduction targets. However, there is considerable quantification uncertainty in the effectiveness of the proposed pollution reduction measures. In this paper, we develop a multivariate stochastic control framework to deal with the cost-efficiency problem with multiple pollutants. Based on cost and technological data collected by several national authorities, we explore the implications of alternative probabilistic constraints. It is found that a composite probabilistic constraint induces a considerably lower abatement cost than separable probabilistic restrictions. This effect is reinforced by the presence of positive correlations between reductions in the multiple pollutants.
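A rough Monte Carlo illustration of the difference between a composite probabilistic constraint (all reduction targets met jointly) and separable ones (each target met on its own); the targets, means and correlation below are made-up numbers, not data from the paper.

```python
# Monte Carlo check of composite vs. separable chance constraints on the
# reductions of two pollutants (all numbers below are illustrative).
import numpy as np

rng = np.random.default_rng(1)
targets = np.array([10.0, 8.0])          # required reductions per pollutant
mean = np.array([11.0, 9.0])             # expected reductions of a measure mix
cov = np.array([[4.0, 2.4],              # positive correlation between pollutants
                [2.4, 4.0]])

reductions = rng.multivariate_normal(mean, cov, size=100_000)
met = reductions >= targets              # per-draw, per-pollutant target attainment

p_separable = met.mean(axis=0)           # P(target_i met), one value per pollutant
p_composite = met.all(axis=1).mean()     # P(all targets met jointly)
print(p_separable, p_composite)
```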
Abstract:
This paper contributes a unified formulation that merges previous analyses of the prediction of the performance (value function) of a given sequence of actions (policy) when an agent operates a Markov decision process with a large state space. When the states are represented by features and the value function is linearly approximated, our analysis reveals a new relationship between two common cost functions used to obtain the optimal approximation. In addition, this analysis allows us to propose an efficient adaptive algorithm that provides an unbiased linear estimate. The performance of the proposed algorithm is illustrated by simulation, showing competitive results when compared with state-of-the-art solutions.
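For context, a standard semi-gradient TD(0) sketch of the setting the abstract describes (predicting the value function of a fixed policy with a linear-in-features approximation); this is textbook material rather than the paper's proposed adaptive algorithm, and the MDP and features are random placeholders.

```python
# Semi-gradient TD(0) with linear features on a random MDP (textbook baseline,
# not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_features, gamma, alpha = 20, 5, 0.95, 0.01

Phi = rng.standard_normal((n_states, n_features))    # state features
P = rng.dirichlet(np.ones(n_states), size=n_states)  # fixed-policy transitions
r = rng.standard_normal(n_states)                    # expected rewards

w = np.zeros(n_features)
s = 0
for _ in range(100_000):
    s_next = rng.choice(n_states, p=P[s])
    td_error = r[s] + gamma * Phi[s_next] @ w - Phi[s] @ w
    w += alpha * td_error * Phi[s]                   # semi-gradient TD(0) update
    s = s_next

print(Phi @ w)   # approximate value of each state under the fixed policy
```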
Abstract:
We consider an inversion-based neurocontroller for solving control problems of uncertain nonlinear systems. Classical approaches do not use uncertainty information in the neural network models. In this paper we show how we can exploit knowledge of this uncertainty to our advantage by developing a novel robust inverse control method. Simulations on a nonlinear uncertain second order system illustrate the approach.
Abstract:
This work introduces a novel inversion-based neurocontroller for solving control problems involving uncertain nonlinear systems that can also compensate for multi-valued systems. The approach uses recent developments in neural networks, especially in the context of modelling statistical distributions, which are applied to forward and inverse plant models. Provided that certain conditions are met, an estimate of the intrinsic uncertainty of the neural network outputs can be obtained from the statistical properties of the networks. More generally, multicomponent distributions can be modelled by the mixture density network. Based on importance sampling from these distributions, a novel robust inverse control approach is obtained. This importance sampling provides a structured and principled way to constrain the complexity of the search space for the ideal control law. The developed methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider. Convergence of the output error for the proposed control method is verified using a Lyapunov function. Several simulation examples are provided to demonstrate the efficiency of the developed control method. The manner in which the method extends to nonlinear multivariable systems with different delays between the input-output pairs is also considered and demonstrated through simulation examples.
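A schematic sketch of sampling-based inverse control with an uncertainty-aware inverse model, in the spirit of the abstract; the Gaussian mixture standing in for the mixture density network and the forward model are placeholders, and candidates are simply ranked by predicted tracking error rather than by the paper's importance-sampling scheme.

```python
# Sketch: rank sampled control candidates from an (assumed) uncertainty-aware
# inverse model by their predicted tracking error under a forward model.
import numpy as np

rng = np.random.default_rng(0)

def forward_model(u, x):
    # Placeholder plant/forward model: predicted next output.
    return 0.8 * x + np.tanh(u)

def sample_inverse_model(y_ref, x, n):
    # Stand-in for a conditional mixture density network over controls:
    # a fixed two-component Gaussian mixture (ignores its conditioning inputs).
    means, stds, weights = np.array([-1.0, 1.0]), np.array([0.3, 0.3]), np.array([0.5, 0.5])
    comp = rng.choice(2, size=n, p=weights)
    return rng.normal(means[comp], stds[comp])

x, y_ref = 0.2, 0.9
candidates = sample_inverse_model(y_ref, x, n=200)
errors = (forward_model(candidates, x) - y_ref) ** 2
u_star = candidates[np.argmin(errors)]   # keep the candidate with the smallest
print(u_star)                            # predicted tracking error
```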
Abstract:
Following the recently developed algorithms for fully probabilistic control design for general dynamic stochastic systems (Herzallah & Kárný, 2011; Kárný, 1996), this paper presents the solution to the probabilistic dual heuristic programming (DHP) adaptive critic method (Herzallah & Kárný, 2011) and a randomized control algorithm for stochastic nonlinear dynamical systems. The purpose of the randomized control input design is to make the joint probability density function of the closed-loop system as close as possible to a predetermined ideal joint probability density function. This paper completes the previous work (Herzallah & Kárný, 2011; Kárný, 1996) by formulating and solving the fully probabilistic control design problem for the more general case of nonlinear stochastic discrete-time systems. A simulated example is used to demonstrate the algorithm, and encouraging results have been obtained.
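The design criterion stated in the abstract can be written compactly. In the fully probabilistic design literature, closeness of the closed-loop joint density f to the ideal density f^I is measured by the Kullback-Leibler divergence (the notation here is the commonly used one, not necessarily the paper's):

\[
  \mathcal{D}\!\left(f \,\middle\|\, f^{I}\right)
  = \int f(D)\,\ln\frac{f(D)}{f^{I}(D)}\,\mathrm{d}D ,
\]

where D collects the closed-loop data (states and randomized control inputs) over the control horizon, and the minimization is carried out over the randomized control law entering f.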
Abstract:
Adaptive critic methods have common roots as generalizations of dynamic programming for neural reinforcement learning approaches. Since they approximate dynamic programming solutions, they are potentially suitable for learning in noisy, nonlinear and nonstationary environments. In this study, a novel probabilistic dual heuristic programming (DHP) based adaptive critic controller is proposed. In contrast to current approaches, the proposed probabilistic DHP adaptive critic method takes the uncertainties of the forward model and inverse controller into consideration. It is therefore suitable for deterministic and stochastic control problems characterized by functional uncertainty. The theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function that satisfies the Bellman equation in a linear quadratic control problem. The target value of the critic network is then calculated and shown to be equal to the analytically derived correct value.
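The linear-quadratic validation mentioned above can be reproduced in outline: for an LQ problem the cost-to-go is J(x) = x'Px with P solving the discrete algebraic Riccati equation, so the analytic target for a DHP critic (which approximates the gradient of the cost-to-go) is 2Px. The system matrices below are illustrative, not taken from the paper.

```python
# LQ sanity check: the cost-to-go is J(x) = x' P x with P from the discrete
# Riccati equation, so a DHP critic's target is dJ/dx = 2 P x.
# System matrices are illustrative only.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.5]])

P = solve_discrete_are(A, B, Q, R)

# The Riccati (Bellman) residual should be numerically zero.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
residual = Q + A.T @ P @ A - A.T @ P @ B @ K - P
print(np.abs(residual).max())

x = np.array([1.0, -0.5])
print(2 * P @ x)   # analytic DHP critic target at state x
```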
Abstract:
2000 Mathematics Subject Classification: 49L60, 60J60, 93E20.
Abstract:
We introduce a technique for quantifying and then exploiting uncertainty in nonlinear stochastic control systems. The approach is suboptimal though robust, and relies upon the approximation of the forward and inverse plant models by neural networks, which also estimate the intrinsic uncertainty. Sampling from the resulting Gaussian distributions of the inversion-based neurocontroller allows us to introduce a control law that is demonstrably more robust than traditional adaptive controllers.
Abstract:
In this paper a new framework is applied to the design of controllers that encompasses nonlinearity, hysteresis and arbitrary density functions of forward models and inverse controllers. Using mixture density networks, probabilistic models of both the forward and inverse dynamics are estimated such that they depend on the state and the control input. The optimal control strategy is then derived which minimizes the uncertainty of the closed-loop system. In the absence of reliable plant models, the proposed control algorithm incorporates uncertainties in model parameters, observations, and latent processes. The local stability of the closed-loop system is established. The efficacy of the control algorithm is demonstrated on two nonlinear stochastic control examples with additive and multiplicative noise.
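A toy illustration of selecting a control input by trading off tracking error against predicted closed-loop uncertainty, assuming a forward model that returns a state- and input-dependent mean and variance; the model, the candidate grid and the weight lam are placeholders rather than the paper's mixture-density-network formulation.

```python
# Toy selection of a control that trades tracking error against predicted
# closed-loop uncertainty; model, candidate grid and weight lam are placeholders.
import numpy as np

def forward_model(u, x):
    # Placeholder: predicted mean and variance of the next output.
    mean = 0.7 * x + 0.5 * u
    var = 0.05 + 0.2 * u**2          # input-dependent (multiplicative-noise-like) uncertainty
    return mean, var

x, y_ref, lam = 0.4, 1.0, 1.0
candidates = np.linspace(-2.0, 2.0, 401)
mean, var = forward_model(candidates, x)
cost = (mean - y_ref) ** 2 + lam * var   # tracking error plus uncertainty penalty
u_star = candidates[np.argmin(cost)]
print(u_star)
```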