6 results for stochastic PDE
in CaltechTHESIS
Abstract:
The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems with known dynamics and a given cost functional. Under the assumption of quadratic cost on the control input, it is well known that the HJB equation reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for all but systems of modest dimension.
In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
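As an illustration of the MDP analogue of this linearity, the following toy sketch (an assumed example under standard notation, not code from the thesis) shows how the minimization in the Bellman equation disappears: a "desirability" function z = exp(-V) satisfies a linear fixed-point equation at interior states.

```python
import numpy as np

# Toy sketch (assumed example): the linearly solvable MDP analogue of the
# linear HJB.  On a 1-D random walk with absorbing ends, the desirability
# z = exp(-V) satisfies the LINEAR fixed point
#   z(i) = exp(-q(i)) * sum_j P[i, j] * z(j)
# at interior states, with z fixed by the terminal cost at absorbing states.
N = 10
q = 0.1 * np.ones(N + 1)              # state cost per step at interior states

P = np.zeros((N + 1, N + 1))          # passive (uncontrolled) dynamics
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5
P[0, 0] = P[N, N] = 1.0               # absorbing boundary states

z = np.ones(N + 1)                    # z = exp(-terminal cost) = 1 at ends
for _ in range(10_000):
    z_new = np.exp(-q) * (P @ z)
    z_new[0], z_new[N] = 1.0, 1.0     # hold boundary values fixed
    if np.max(np.abs(z_new - z)) < 1e-13:
        z = z_new
        break
    z = z_new

V = -np.log(z)                        # value function from desirability
```

The optimal controlled transition probabilities are then proportional to P[i, j] * z[j], so the same linear solve serves any policy built from it.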
This is done by bringing together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with decreasing suboptimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. These results are shown to extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing such problems to be solved via parallelization and low-order polynomials.
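Schematically, with symbols assumed here for illustration rather than taken from the thesis, one level of such a hierarchy can be written as a semidefinite program over the coefficient vector c of a candidate polynomial V_c:

```latex
\begin{aligned}
\max_{c}\quad & \int_{\Omega} V_c(x)\,dx \\
\text{s.t.}\quad & \mathcal{L}[V_c](x) + \ell(x) \ \text{is SOS on } \Omega,\\
& \psi(x) - V_c(x) \ \text{is SOS on } \partial\Omega,
\end{aligned}
```

where \mathcal{L} is the generator of the dynamics, \ell the running cost, and \psi the boundary cost. Replacing pointwise nonnegativity by the SOS condition turns the constraints into semidefinite ones, and raising the degree of V_c and its multipliers produces the hierarchy; a feasible V_c is a subsolution and hence bounds the value function from one side, while reversing the inequalities bounds it from the other.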
The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. This technique allows systems of equations to be solved through a low-rank decomposition, resulting in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows previously uncomputable problems to be solved quickly, scaling to systems as complex as quadcopter and VTOL aircraft models. The technique may be combined with the SOS approach, yielding not only a numerical method but also an analytical one that allows entirely new classes of systems to be studied and stability properties to be guaranteed.
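The rank-1 building block of a separated representation can be sketched as follows (an assumed toy example, not code from the thesis): a function of two variables sampled on a grid is approximated as an outer product g(x)h(y) by alternating least squares, so storage drops from n² values to 2n, which is the source of the linear scaling in dimension.

```python
import numpy as np

# Illustrative sketch (assumed example): rank-1 separated representation
# of a 2-D grid function F(x, y) ~ g(x) * h(y) via alternating least squares.
x = np.linspace(-1.0, 1.0, 50)
y = np.linspace(-1.0, 1.0, 50)
F = np.exp(-np.add.outer(x**2, y**2))     # separable: exp(-x^2) * exp(-y^2)

g = np.ones_like(x)
h = np.ones_like(y)
for _ in range(20):
    g = (F @ h) / (h @ h)                 # fix h, least squares for g
    h = (F.T @ g) / (g @ g)               # fix g, least squares for h

rel_err = np.linalg.norm(F - np.outer(g, h)) / np.linalg.norm(F)
```

For a genuinely high-dimensional function, the same alternating sweep runs over one factor per dimension, which is what lets SR-based solvers avoid the exponential growth of grid-based storage.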
The analysis of the linear HJB is completed by a study of its implications in applications. It is shown that the HJB equation and a popular technique in robotics, the use of navigation functions, sit at opposite ends of a spectrum of optimization problems along which tradeoffs in problem complexity may be made. Analytical solutions to the HJB equation in these settings are available in simplified domains, providing guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows control policy primitives to be pre-computed and then composed, at essentially zero cost, to satisfy a complex temporal logic specification.
Abstract:
The problem of "exit against a flow" for dynamical systems subject to small Gaussian white noise excitation is studied. Here the word "flow" refers to the behavior in phase space of the unperturbed system's state variables. "Exit against a flow" occurs if a perturbation causes the phase point to leave a phase space region within which it would normally be confined. In particular, there are two components of the problem of exit against a flow:
i) the mean exit time
ii) the phase-space distribution of exit locations.
When the noise perturbing the dynamical systems is small, the solution of each component of the problem of exit against a flow is, in general, the solution of a singularly perturbed, degenerate elliptic-parabolic boundary value problem.
Singular perturbation techniques are used to express the asymptotic solution in terms of an unknown parameter. The unknown parameter is determined using the solution of the adjoint boundary value problem.
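In schematic notation (assumed here for illustration, not taken from the abstract), the mean exit time u(x) from a domain Ω under small noise of strength ε satisfies a boundary value problem of the type

```latex
\varepsilon \sum_{i,j} a_{ij}(x)\,\frac{\partial^2 u}{\partial x_i \partial x_j}
+ \sum_i b_i(x)\,\frac{\partial u}{\partial x_i} = -1 \quad \text{in } \Omega,
\qquad u = 0 \ \text{on } \partial\Omega,
```

where b is the unperturbed drift (the "flow") and εa the small diffusion. The ε multiplying the highest-order term is what makes the problem singularly perturbed, and degeneracy of the matrix a gives it the elliptic-parabolic character described above.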
The problem of exit against a flow for several dynamical systems of physical interest is considered, and the mean exit times and distributions of exit positions are calculated. The systems are then simulated numerically, using Monte Carlo techniques, in order to determine the validity of the asymptotic solutions.
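A minimal Monte Carlo sketch of such a check (an assumed one-dimensional example, not the systems studied in the thesis) simulates exit against the flow for the process dX = -X dt + sqrt(2ε) dW, whose drift confines X near 0 so that only the noise can push a path out of (-1, 1):

```python
import numpy as np

# Euler-Maruyama simulation of exit from (-1, 1) for
#   dX = -X dt + sqrt(2*eps) dW,  X(0) = 0  (assumed toy example).
rng = np.random.default_rng(0)
eps, dt, n_paths = 0.25, 1e-3, 400
x = np.zeros(n_paths)
t_exit = np.full(n_paths, np.nan)
alive = np.ones(n_paths, dtype=bool)

for step in range(1, 200_000):
    n_alive = int(alive.sum())
    if n_alive == 0:
        break
    dW = rng.standard_normal(n_alive)
    x[alive] += -x[alive] * dt + np.sqrt(2.0 * eps * dt) * dW
    exited = alive & (np.abs(x) >= 1.0)   # exit check after each step
    t_exit[exited] = step * dt
    alive &= ~exited

mean_exit = float(np.nanmean(t_exit))
```

As ε shrinks, the sample mean grows rapidly (the asymptotic theory predicts exponential growth in 1/ε), which is why direct simulation becomes expensive precisely where the singular perturbation expansion becomes accurate.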
Abstract:
A theory of two-point boundary value problems is developed, analogous to the theory of initial value problems for stochastic ordinary differential equations whose solutions form Markov processes. The theory of initial value problems consists of three main parts: the proof that the solution process is Markovian and diffusive; the construction of the Kolmogorov or Fokker-Planck equation of the process; and the proof that the transition probability density of the process is a unique solution of the Fokker-Planck equation.
It is assumed here that the stochastic differential equation under consideration has, as an initial value problem, a diffusive Markovian solution process. When a given boundary value problem for this stochastic equation almost surely has unique solutions, we show that the solution process of the boundary value problem is also a diffusive Markov process. Since a boundary value problem, unlike an initial value problem, has no preferred direction for the parameter set, we find that there are two Fokker-Planck equations, one for each direction. It is shown that the density of the solution process of the boundary value problem is the unique simultaneous solution of this pair of Fokker-Planck equations.
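In one-dimensional schematic notation (symbols assumed here for illustration, not taken from the abstract), such a pair has the shape of two forward Kolmogorov equations, one for each direction of the parameter t:

```latex
\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[a^{+}(x,t)\,p\bigr]
    + \frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\bigl[b(x,t)\,p\bigr],
\qquad
-\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[a^{-}(x,t)\,p\bigr]
    + \frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\bigl[b(x,t)\,p\bigr],
```

with drift coefficients a± associated with the two parameter directions, and the density of the boundary value solution process required to satisfy both simultaneously.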
This theory is then applied to the problem of a vibrating string with stochastic density.
Abstract:
Partial differential equations (PDEs) with multiscale coefficients are very difficult to solve due to the wide range of scales in their solutions. In this thesis, we propose efficient numerical methods for both deterministic and stochastic PDEs based on model reduction techniques.
For the deterministic PDEs, the main purpose of our method is to derive an effective equation for the multiscale problem. An essential ingredient is to decompose the harmonic coordinate into a smooth part and a highly oscillatory part whose magnitude is small. Such a decomposition plays a key role in our construction of the effective equation. We show that the solution to the effective equation is smooth and can be resolved on a regular coarse mesh grid. Furthermore, we provide error analysis and show that the solution to the effective equation, plus a correction term, is close to the original multiscale solution.
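As a standard model of this setting (symbols assumed here for illustration), one may think of an elliptic problem with a rapidly oscillating coefficient,

```latex
-\nabla\cdot\bigl(a(x/\epsilon)\,\nabla u^{\epsilon}\bigr) = f \quad \text{in } D,
\qquad u^{\epsilon} = 0 \ \text{on } \partial D,
```

where the small parameter ε sets the fine scale. The goal of an effective equation is a coefficient free of ε, so that its smooth solution, corrected as described above, approximates u^ε while being resolvable on a coarse mesh.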
For the stochastic PDEs, we propose a model-reduction-based data-driven stochastic method and a multilevel Monte Carlo method. In the multi-query setting, and under the assumption that the ratio of the smallest scale to the largest scale is not too small, we propose the multiscale data-driven stochastic method. We construct a data-driven stochastic basis and solve the coupled deterministic PDEs to obtain the solutions. For more challenging problems, we propose the multiscale multilevel Monte Carlo method. We apply the multilevel scheme to the effective equations and assemble the stiffness matrices efficiently on each coarse mesh grid. In both methods, the Karhunen-Loève (KL) expansion plays an important role in extracting the main parts of some stochastic quantities.
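The role of the KL expansion can be sketched with a small assumed example (not code from the thesis): for a Gaussian field on [0, 1] with exponential covariance, a handful of eigenmodes of the covariance matrix captures most of the variance, so the field is well represented by a short truncated sum.

```python
import numpy as np

# Illustrative sketch (assumed example): discrete Karhunen-Loeve expansion
# for a field with covariance C(s, t) = exp(-|s - t| / ell) on [0, 1].
n, ell = 200, 0.5
t = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(np.subtract.outer(t, t)) / ell)

evals, evecs = np.linalg.eigh(C)            # symmetric eigendecomposition
evals, evecs = evals[::-1], evecs[:, ::-1]  # sort modes by decreasing variance

frac = np.cumsum(evals) / np.sum(evals)     # captured-variance fraction
m = int(np.searchsorted(frac, 0.95)) + 1    # fewest modes reaching 95%

rng = np.random.default_rng(0)
xi = rng.standard_normal(m)                 # independent standard normals
sample = evecs[:, :m] @ (np.sqrt(evals[:m]) * xi)   # truncated field sample
```

In a multiscale solver, the few retained modes become the stochastic coordinates, which is what keeps the coupled deterministic problems small.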
For both the deterministic and stochastic PDEs, numerical results are presented to demonstrate the accuracy and robustness of the methods. We also show the reduction in computational cost in the numerical examples.
Abstract:
A general review of stochastic processes is given in the introduction; definitions, properties and a rough classification are presented together with the position and scope of the author's work as it fits into the general scheme.
The first section presents a brief summary of the pertinent analytical properties of continuous stochastic processes and their probability-theoretic foundations which are used in the sequel.
The remaining two sections (II and III), comprising the body of the work, are the author's contribution to the theory. It turns out that a very inclusive class of continuous stochastic processes is characterized by a fundamental partial differential equation and its adjoint (the Fokker-Planck equations). The coefficients appearing in these equations assimilate, in a most concise way, all the salient properties of the process, freed from boundary value considerations. The writer's work consists in characterizing the processes through these coefficients without recourse to solving the partial differential equations.
First, a class of coefficients leading to a unique, continuous process is presented, and several facts are proven to show why this class is restricted. Then, in terms of the coefficients, the unconditional statistics are deduced, these being the mean, variance and covariance. The most general class of coefficients leading to the Gaussian distribution is deduced, and a complete characterization of these processes is presented. By specializing the coefficients, all the known stochastic processes may be readily studied, and some examples of these are presented; viz. the Einstein process, Bachelier process, Ornstein-Uhlenbeck process, etc. The calculations are effectively reduced to ordinary first-order differential equations, and in addition to giving a comprehensive characterization, the derivations are materially simpler than solving the original partial differential equations.
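The reduction to first-order ordinary differential equations can be sketched with an assumed Ornstein-Uhlenbeck example (not taken from the thesis): the Fokker-Planck coefficients a(x) = -θx and b(x) = σ² turn the mean m(t) and variance v(t) into a pair of ODEs that are trivially integrated, bypassing the PDE entirely.

```python
import numpy as np

# Illustrative sketch (assumed example): for dX = -theta*X dt + sigma dW,
# the moment equations implied by the Fokker-Planck coefficients are
#   dm/dt = -theta * m,        dv/dt = -2 * theta * v + sigma**2.
theta, sigma = 1.0, 0.5
m, v = 2.0, 0.0                      # initial mean and variance
dt, T = 1e-4, 3.0
for _ in range(int(round(T / dt))):
    m += -theta * m * dt             # forward Euler on the moment ODEs
    v += (-2.0 * theta * v + sigma**2) * dt

m_exact = 2.0 * np.exp(-theta * T)                         # closed forms
v_exact = sigma**2 / (2.0 * theta) * (1.0 - np.exp(-2.0 * theta * T))
```

The numerically integrated moments track the closed-form solutions, and the stationary variance σ²/(2θ) is approached from below as T grows.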
In the last section the properties of the integral process are presented. After an expository section on the definition, meaning, and importance of the integral process, a particular example is carried through starting from basic definition. This illustrates the fundamental properties, and an inherent paradox. Next the basic coefficients of the integral process are studied in terms of the original coefficients, and the integral process is uniquely characterized. It is shown that the integral process, with a slight modification, is a continuous Markov process.
The elementary statistics of the integral process are deduced: means, variances, and covariances, in terms of the original coefficients. It is shown that the integral process of a non-degenerate process is never temporally homogeneous.
Finally, in terms of the original class of admissible coefficients, the statistics of the integral process are explicitly presented, and the integral process of all known continuous processes are specified.
Abstract:
H. J. Kushner has obtained the differential equation satisfied by the optimal feedback control law for a stochastic control system in which the plant dynamics and observations are perturbed by independent additive Gaussian white noise processes. However, the equation involves first and second functional derivatives and, except for a restricted class of systems, is too complex to solve with present techniques.
This investigation studies the optimal control law for the open-loop system and incorporates it in a suboptimal feedback control law. This suboptimal control law's performance is at least as good as that of the optimal open-loop control function, and it satisfies a differential equation involving only the first functional derivative. Solving this equation is equivalent to solving two two-point boundary-value integro-partial differential equations. An approximate solution has advantages over the conventional approximate solution of Kushner's equation.
As a result of this study, well-known results of deterministic optimal control are deduced from the analysis of optimal open-loop control.