941 results for STOCHASTIC OPTIMAL CONTROL


Relevance:

100.00%

Publisher:

Abstract:

In this paper, a real-time optimal control technique for non-linear plants is proposed. The control system makes use of cell-mapping (CM) techniques, widely used for the global analysis of highly non-linear systems. The CM framework is employed for designing approximate optimal controllers via a control-variable discretization. Furthermore, CM-based designs can be improved by the use of supervised feedforward artificial neural networks (ANNs), which have proved to be universal and efficient tools for function approximation while also providing very fast responses. The quantitative nature of the approximate CM solutions fits very well with the characteristics of ANNs. Here, we propose several control architectures that combine, in different ways, supervised neural networks and CM control algorithms. On the one hand, different CM control laws computed for various target objectives can be employed for training a neural network, explicitly including the target information in the input vectors. This way, tracking problems, in addition to regulation ones, can be addressed in a fast and unified manner, obtaining smooth, averaged and global feedback control laws. On the other hand, CM and ANN controllers are also combined into a hybrid architecture to address problems where accuracy and real-time response are critical. Finally, some optimal control problems are solved with the proposed CM, neural and hybrid techniques, illustrating their good performance.
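The cell-mapping step described above can be sketched in miniature: partition the state interval of a toy one-dimensional plant into cells, discretize the control set, and run value iteration over cell-to-cell transitions. The plant, grid, and weights below are illustrative assumptions, not the paper's actual benchmarks.

```python
import numpy as np

# Toy cell-mapping design for the hypothetical plant x_{k+1} = x_k + u*dt:
# the state interval is partitioned into cells, the control set is
# discretized, and value iteration over cell transitions yields an
# approximate optimal feedback law stored per cell.
cells = np.linspace(-1.0, 1.0, 41)        # cell centers
controls = np.array([-1.0, 0.0, 1.0])     # discretized control set
dt = 0.1

def cell_index(x):
    """Nearest-cell quantization of a state value."""
    return int(np.argmin(np.abs(cells - x)))

V = np.abs(cells)                          # initial cost-to-go guess
policy = np.zeros_like(cells)
for _ in range(200):                       # value iteration over cells
    V_new = V.copy()
    for i, x in enumerate(cells):
        costs = [x**2 * dt + 0.1 * u**2 * dt + V[cell_index(x + u * dt)]
                 for u in controls]
        best = int(np.argmin(costs))
        V_new[i] = costs[best]
        policy[i] = controls[best]
    V = V_new
```

At run time a query state is simply quantized to its cell and the stored control applied; it is this tabulated, quantitative form of the CM law that makes it a natural source of training data for a supervised ANN.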


The main purpose of this work is to develop a numerical platform for the turbulence modeling and optimal control of liquid metal flows. Thanks to their interesting thermal properties, liquid metals are widely studied as coolants for heat transfer applications in the nuclear context. However, due to their low Prandtl numbers, the standard turbulence models commonly used for coolants such as air or water are inadequate. Advanced turbulence models able to capture the anisotropy in the flow and heat transfer are therefore necessary. In this thesis, a new anisotropic four-parameter turbulence model is presented and validated. The proposed model is based on explicit algebraic models and solves four additional transport equations for dynamical and thermal turbulent variables. For the validation of the model, several flow configurations are considered for different Reynolds and Prandtl numbers, namely fully developed flows in a plane channel and a cylindrical pipe, and forced and mixed convection in a backward-facing step geometry. Since buoyancy effects cannot be neglected in liquid-metal-cooled fast reactors, the second aim of this work is to provide mathematical and numerical tools for the simulation and optimization of liquid metals in mixed and natural convection. Optimal control problems for turbulent buoyant flows are studied and analyzed with the Lagrange multiplier method. Numerical algorithms for optimal control problems are integrated into the numerical platform, and several simulations are performed to show the robustness, consistency, and feasibility of the method.


This work is divided into two parts. The first presents the theory of Lyapunov exponents and the theory of Optimal Control from a geometric point of view; the main results of both theories are reported and the proofs of the most important theorems are sketched. In the second part, using these two theories, we attempt to estimate the extremal Lyapunov exponents associated with switched linear dynamical systems on the Lie group SL2(R). We consider only the case of a system generated by two matrices A, B ∈ sl2(R) that generate the whole Lie algebra. We split the problem into several possible cases according to the position, in the three-dimensional space sl2(R), of the segment with endpoints A and B relative to the cone of nilpotent matrices, and for each case we find a candidate optimal solution. The original problem of estimating the Lyapunov exponents is reformulated as an Optimal Control problem; we then apply the Pontryagin Maximum Principle and find a control, together with the corresponding trajectory, that satisfies it.


In recent years, autonomous aerial vehicles have gained great popularity in a variety of applications in the field of automation. To accomplish varied and challenging tasks, the capability of generating trajectories has assumed a key role. As higher performance is sought, traditional flatness-based trajectory generation schemes show their limitations, since these approaches neglect the highly nonlinear dynamics of the quadrotor. Strategies based on optimal control principles therefore turn out to be beneficial: in the trajectory generation process they allow the control unit to best exploit the actual dynamics, and they enable the drone to perform quite aggressive maneuvers. This dissertation is concerned with the development of an optimal control technique to generate trajectories for autonomous drones. The algorithm adopted to this end is a second-order iterative method working directly in continuous time which, under proper initialization, guarantees quadratic convergence to a locally optimal trajectory. At each iteration a quadratic approximation of the cost functional is minimized, and a descent direction is obtained as a linear-affine control law after solving a differential Riccati equation. The algorithm has been implemented and its effectiveness has been tested on the vectored-thrust dynamical model of a quadrotor in a realistic simulation setup.
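A discrete-time analogue of that inner step is easy to sketch: a finite-horizon LQ backward pass on a hypothetical double-integrator model (standing in for the linearized quadrotor dynamics), where a backward Riccati recursion produces time-varying linear feedback gains. All matrices and weights below are illustrative.

```python
import numpy as np

# Finite-horizon LQ backward pass: the Riccati recursion runs backward in
# time and yields feedback gains K[t]; a forward rollout under u = -K x
# then drives the illustrative double integrator toward the origin.
dt, N = 0.05, 100
A = np.array([[1.0, dt], [0.0, 1.0]])     # discrete double integrator
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])                   # running state weight
R = np.array([[0.01]])                    # control weight
Qf = np.diag([100.0, 10.0])               # terminal weight

P = Qf
K = [None] * N
for t in reversed(range(N)):              # backward Riccati recursion
    S = R + B.T @ P @ B
    K[t] = np.linalg.solve(S, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K[t]

x = np.array([[1.0], [0.0]])              # initial position offset of 1
for t in range(N):                        # forward rollout
    x = A @ x + B @ (-K[t] @ x)
```

In the continuous-time algorithm of the dissertation, the same role is played by the differential Riccati equation, integrated backward along the current trajectory iterate.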


In this paper we consider the existence of the maximal and mean-square stabilizing solutions for a set of generalized coupled algebraic Riccati equations (GCARE for short) associated with the infinite-horizon stochastic optimal control problem of discrete-time Markov jump linear systems with multiplicative noise. The weighting matrices of the state and control for the quadratic part are allowed to be indefinite. We present a sufficient condition, based only on some positive semi-definite and kernel restrictions on some matrices, under which the maximal solution exists, and a necessary and sufficient condition under which the mean-square stabilizing solution exists for the GCARE. We also present a solution for the discounted and long-run average cost problems when the performance criterion is assumed to be composed of a linear combination of an indefinite quadratic part and a linear part in the state and control variables. The paper concludes with a numerical example for a pension fund with regime switching.
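For intuition, the coupled structure of such equations can be sketched in the simplest possible setting: a scalar two-mode Markov jump linear system without multiplicative noise or indefinite weights, where the operator E_i(P) = Σ_j p_ij P_j couples the modes and a fixed-point iteration approximates the stabilizing solution. All numbers are illustrative.

```python
import numpy as np

# Fixed-point iteration for a pair of coupled scalar Riccati equations of a
# two-mode Markov jump linear system x_{k+1} = A_i x_k + B_i u_k; the
# coupling operator E_i(P) = sum_j p_ij P_j mixes the per-mode solutions
# through the mode transition matrix.
A = [0.9, 1.1]
B = [1.0, 1.0]
Q = [1.0, 1.0]
R = [1.0, 1.0]
Pr = np.array([[0.8, 0.2], [0.3, 0.7]])   # mode transition probabilities

P = np.array([1.0, 1.0])                   # initial guess
for _ in range(500):
    E = Pr @ P                             # coupling operator E_i(P)
    P_new = np.array([
        Q[i] + A[i]**2 * E[i]
        - (A[i] * E[i] * B[i])**2 / (R[i] + B[i]**2 * E[i])
        for i in range(2)
    ])
    if np.max(np.abs(P_new - P)) < 1e-12:  # converged to a fixed point
        P = P_new
        break
    P = P_new
```

The paper's GCARE additionally carries multiplicative-noise terms and allows indefinite weights, which this toy iteration does not capture.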


We consider an optimal control problem with a deterministic finite horizon and state-variable dynamics given by a Markov-switching jump–diffusion stochastic differential equation. Our main results extend the dynamic programming technique to this larger family of stochastic optimal control problems. More specifically, we provide a detailed proof of Bellman's optimality principle (the dynamic programming principle) and obtain the corresponding Hamilton–Jacobi–Bellman equation, which turns out to be a partial integro-differential equation due to the extra terms arising from the Lévy process and the Markov process. As an application of our results, we study a finite-horizon consumption–investment problem for a jump–diffusion financial market consisting of one risk-free asset and one risky asset whose coefficients are assumed to depend on the state of a continuous-time finite-state Markov process. We provide a detailed study of the optimal strategies for this problem for the economically relevant families of power utilities and logarithmic utilities.
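As a point of reference for the consumption–investment application: in the classical diffusion setting with a single regime and no jumps, the optimal fraction of wealth in the risky asset for power utility u(x) = x^(1-γ)/(1-γ) reduces to the well-known Merton ratio; the regime-switching model of the paper lets the coefficients μ, r and σ depend on the Markov state. The numerical values below are illustrative only.

```python
# Merton ratio for power utility in the classical (single-regime, no-jump)
# benchmark: pi* = (mu - r) / (gamma * sigma^2). Parameter values are
# illustrative, not taken from the paper.
def merton_fraction(mu, r, sigma, gamma):
    """Optimal constant fraction of wealth held in the risky asset."""
    return (mu - r) / (gamma * sigma**2)

pi_star = merton_fraction(mu=0.08, r=0.02, sigma=0.2, gamma=2.0)
```

With these numbers the investor holds 0.06 / (2 · 0.04) = 0.75 of wealth in the risky asset; higher risk aversion γ shrinks the fraction.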


A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics.


This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form; hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of the importance distribution can cause the particle filter algorithm to fail to converge. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends strongly on the chosen proposal distribution. A commonly used proposal distribution is Gaussian, in which case the covariance matrix must be well tuned; adaptive MCMC methods can be used for this. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
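The bootstrap particle filter mentioned above, which takes the transition density as the importance distribution (the simplest, not necessarily best, choice), can be sketched on an illustrative scalar linear-Gaussian model; the model and noise levels are assumptions made to keep the code short, not examples from the thesis.

```python
import numpy as np

# Minimal bootstrap particle filter for the illustrative scalar model
# x_k = 0.9 x_{k-1} + q_k,  y_k = x_k + r_k,  with Gaussian noises;
# the transition density serves as the importance distribution.
rng = np.random.default_rng(0)
T, N = 50, 2000
q_std, r_std = 0.5, 0.5

x_true = np.zeros(T)                        # simulate a state trajectory
for k in range(1, T):
    x_true[k] = 0.9 * x_true[k - 1] + q_std * rng.standard_normal()
y = x_true + r_std * rng.standard_normal(T)  # noisy measurements

particles = rng.standard_normal(N)
est = np.zeros(T)
for k in range(T):
    particles = 0.9 * particles + q_std * rng.standard_normal(N)  # propagate
    logw = -0.5 * ((y[k] - particles) / r_std) ** 2               # Gaussian likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()                            # normalized importance weights
    est[k] = np.sum(w * particles)          # posterior-mean estimate
    particles = particles[rng.choice(N, size=N, p=w)]  # multinomial resampling

rmse = np.sqrt(np.mean((est - x_true) ** 2))
```

For this linear-Gaussian toy model the exact answer is available from the Kalman filter, which is what makes it a convenient sanity check for particle-filter code.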


In this work we review some earlier distributed algorithms developed by the authors and collaborators, which are based on two different approaches, namely, distributed moment estimation and distributed stochastic approximations. We show applications of these algorithms on image compression, linear classification and stochastic optimal control. In all cases, the benefit of cooperation is clear: even when the nodes have access to small portions of the data, by exchanging their estimates, they achieve the same performance as that of a centralized architecture, which would gather all the data from all the nodes.


2000 Mathematics Subject Classification: 49L20, 60J60, 93E20


The challenge of detecting a change in the distribution of data is a sequential decision problem that is relevant to many engineering solutions, including quality control and machine and process monitoring. This dissertation develops techniques for exact solution of change-detection problems with discrete time and discrete observations. Change-detection problems are classified as Bayes or minimax based on the availability of information on the change-time distribution. A Bayes optimal solution uses prior information about the distribution of the change time to minimize the expected cost, whereas a minimax optimal solution minimizes the cost under the worst-case change-time distribution. Both types of problems are addressed. The most important result of the dissertation is the development of a polynomial-time algorithm for the solution of important classes of Markov Bayes change-detection problems. Existing techniques for epsilon-exact solution of partially observable Markov decision processes have complexity exponential in the number of observation symbols. A new algorithm, called constellation induction, exploits the concavity and Lipschitz continuity of the value function, and has complexity polynomial in the number of observation symbols. It is shown that change-detection problems with a geometric change-time distribution and identically- and independently-distributed observations before and after the change are solvable in polynomial time. Also, change-detection problems on hidden Markov models with a fixed number of recurrent states are solvable in polynomial time. A detailed implementation and analysis of the constellation-induction algorithm are provided. Exact solution methods are also established for several types of minimax change-detection problems. Finite-horizon problems with arbitrary observation distributions are modeled as extensive-form games and solved using linear programs. 
Infinite-horizon problems with linear penalty for detection delay and identically- and independently-distributed observations can be solved in polynomial time via epsilon-optimal parameterization of a cumulative-sum procedure. Finally, the properties of policies for change-detection problems are described and analyzed. Simple classes of formal languages are shown to be sufficient for epsilon-exact solution of change-detection problems, and methods for finding minimally sized policy representations are described.
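A cumulative-sum (CUSUM) procedure of the kind parameterized above can be sketched as follows: the statistic accumulates the log-likelihood ratio of the post-change versus pre-change densities, resets at zero, and raises an alarm when it crosses a threshold h. The Gaussian mean-shift model and all numerical values below are illustrative assumptions, not the dissertation's problem instances.

```python
import numpy as np

# CUSUM detector for a mean shift from mu0 to mu1 in Gaussian data:
# s_k = max(0, s_{k-1} + llr_k) is a reflected random walk driven by the
# per-sample log-likelihood ratio; an alarm fires when s_k exceeds h.
def cusum_change_point(data, mu0, mu1, sigma, h):
    """Index of the first CUSUM alarm, or None if the threshold is never hit."""
    s = 0.0
    for k, x in enumerate(data):
        llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2
        s = max(0.0, s + llr)
        if s > h:
            return k
    return None

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 100),   # pre-change samples
                       rng.normal(1.0, 1.0, 100)])  # mean shifts at index 100
alarm = cusum_change_point(data, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0)
```

On the deterministic sequence of all ones the statistic grows by 0.5 per sample, so with h = 5 the alarm fires at index 10; the threshold h trades detection delay against the false-alarm rate, which is exactly the quantity tuned by the epsilon-optimal parameterization described above.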


This work revolves around potential theory in metric spaces, focusing on applications of dyadic potential theory to problems arising in functional and harmonic analysis. In the first part, we consider the weighted dual dyadic Hardy inequality over dyadic trees; we use the Bellman function method to characterize the weights for which the inequality holds and to find the optimal constant. We also show that our Bellman function is the solution to a stochastic optimal control problem. In the second part, we consider the problem of quasi-additivity formulas for the Riesz capacity in metric spaces and prove quasi-additivity formulas in the setting of tree boundaries and in the setting of Ahlfors-regular spaces. We also consider a harmonic extension, to one additional variable, of Riesz potentials of functions on a compact Ahlfors-regular space, and use our quasi-additivity formula to prove a result of tangential convergence of the harmonic extension of the Riesz potential up to an exceptional set of null measure.


This note investigates the motion control of an autonomous underwater vehicle (AUV). The AUV is modeled as a nonholonomic system as any lateral motion of a conventional, slender AUV is quickly damped out. The problem is formulated as an optimal kinematic control problem on the Euclidean Group of Motions SE(3), where the cost function to be minimized is equal to the integral of a quadratic function of the velocity components. An application of the Maximum Principle to this optimal control problem yields the appropriate Hamiltonian and the corresponding vector fields give the necessary conditions for optimality. For a special case of the cost function, the necessary conditions for optimality can be characterized more easily and we proceed to investigate its solutions. Finally, it is shown that a particular set of optimal motions trace helical paths. Throughout this note we highlight a particular case where the quadratic cost function is weighted in such a way that it equates to the Lagrangian (kinetic energy) of the AUV. For this case, the regular extremal curves are constrained to equate to the AUV's components of momentum and the resulting vector fields are the d'Alembert-Lagrange equations in Hamiltonian form.


The relationship between minimum variance and minimum expected quadratic loss feedback controllers for linear univariate discrete-time stochastic systems is reviewed by taking the approach used by Caines. It is shown how the two methods can be regarded as providing identical control actions as long as a noise-free measurement state-space model is employed.


One of the main goals of pest control is to maintain the density of the pest population at an equilibrium level below that of economic damage. To reach this goal, the optimal pest control problem is divided into two parts. In the first part, two optimal control functions are considered; these functions move the pest-natural enemy ecosystem to an equilibrium state below the economic injury level. In the second part, a single optimal control function stabilizes the ecosystem at this level, minimizing a functional that penalizes quadratic deviations from it. The first problem is solved by applying the Pontryagin Maximum Principle; dynamic programming is used for the second optimal pest control problem.
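The second subproblem, regulating quadratic deviations around the target level, reduces for a scalar linearization of the pest-natural enemy dynamics to a textbook dynamic-programming (Riccati) recursion. The linearized coefficients and weights below are illustrative assumptions, not values from the paper.

```python
# Scalar Riccati recursion from dynamic programming for the regulation
# subproblem: x is the deviation of the pest density from the target
# equilibrium, with illustrative linearized dynamics x_{k+1} = a x_k + b u_k
# and per-stage cost q x^2 + r u^2.
a, b = 1.05, 0.8
q, r = 1.0, 0.5

p = q                                    # value function V(x) = p x^2
for _ in range(1000):
    p_next = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    if abs(p_next - p) < 1e-12:          # Riccati fixed point reached
        p = p_next
        break
    p = p_next

gain = a * b * p / (r + b * b * p)       # optimal feedback u = -gain * x
x = 1.0
for _ in range(20):
    x = (a - b * gain) * x               # closed loop contracts to equilibrium
```

The open-loop factor a = 1.05 is unstable, while the closed-loop factor a − b·gain has modulus below one, so deviations from the target level die out under the feedback law.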