46 results for Controlled stochastic differential equation, Infinite-dimensional stochastic differential equation, Quadratic optimal control
Abstract:
The inhomogeneous Poisson process is a point process that has varying intensity across its domain (usually time or space). For nonparametric Bayesian modeling, the Gaussian process is a useful way to place a prior distribution on this intensity. The combination of a Poisson process and GP is known as a Gaussian Cox process, or doubly-stochastic Poisson process. Likelihood-based inference in these models requires an intractable integral over an infinite-dimensional random function. In this paper we present the first approach to Gaussian Cox processes in which it is possible to perform inference without introducing approximations or finite-dimensional proxy distributions. We call our method the Sigmoidal Gaussian Cox Process, which uses a generative model for Poisson data to enable tractable inference via Markov chain Monte Carlo. We compare our method to competing methods on synthetic data and apply it to several real-world data sets. Copyright 2009.
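A minimal sketch of the generative model behind this kind of method may help: the intensity is a sigmoid-transformed Gaussian process scaled by an upper bound, and events are obtained by thinning a homogeneous Poisson process. The kernel, the bound lambda_max, and all numerical values below are illustrative assumptions, and the paper's actual MCMC inference scheme (which augments the model with the thinned points) is not shown.

```python
# Sketch of a sigmoidal Gaussian Cox process generative model (assumed values):
# intensity lambda(s) = lambda_max * sigmoid(g(s)), g ~ GP, events by thinning.
import numpy as np

rng = np.random.default_rng(0)
T, lambda_max = 10.0, 5.0              # observation window [0, T] and intensity bound

# 1. Candidate events from a homogeneous Poisson process with rate lambda_max.
n_cand = rng.poisson(lambda_max * T)
cand = np.sort(rng.uniform(0.0, T, n_cand))

# 2. Draw GP values g at the candidate locations (squared-exponential kernel).
def se_kernel(a, b, ell=1.0, var=4.0):
    return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

K = se_kernel(cand, cand) + 1e-8 * np.eye(n_cand)
g = rng.multivariate_normal(np.zeros(n_cand), K)

# 3. Thin: keep each candidate with probability sigmoid(g), so the retained
#    points form an inhomogeneous Poisson process with the intensity above.
keep = rng.uniform(size=n_cand) < 1.0 / (1.0 + np.exp(-g))
events = cand[keep]
print(f"{events.size} events retained out of {n_cand} candidates")
```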
Abstract:
We consider the general problem of constructing nonparametric Bayesian models on infinite-dimensional random objects, such as functions, infinite graphs or infinite permutations. The problem has generated much interest in machine learning, where it is treated heuristically, but has not been studied in full generality in non-parametric Bayesian statistics, which tends to focus on models over probability distributions. Our approach applies a standard tool of stochastic process theory, the construction of stochastic processes from their finite-dimensional marginal distributions. The main contribution of the paper is a generalization of the classic Kolmogorov extension theorem to conditional probabilities. This extension allows a rigorous construction of nonparametric Bayesian models from systems of finite-dimensional, parametric Bayes equations. Using this approach, we show (i) how existence of a conjugate posterior for the nonparametric model can be guaranteed by choosing conjugate finite-dimensional models in the construction, (ii) how the mapping to the posterior parameters of the nonparametric model can be explicitly determined, and (iii) that the construction of conjugate models in essence requires the finite-dimensional models to be in the exponential family. As an application of our constructive framework, we derive a model on infinite permutations, the nonparametric Bayesian analogue of a model recently proposed for the analysis of rank data.
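For reference, the classical consistency (projectivity) conditions behind the Kolmogorov extension theorem, which the paper generalizes to conditional probabilities, can be stated as follows; the notation (indices t_i, measurable sets A_i, domain V) is generic rather than the paper's.

```latex
% Classical consistency (projectivity) conditions; generic notation, not the paper's.
\begin{align*}
\mu_{t_{\pi(1)},\dots,t_{\pi(n)}}\bigl(A_{\pi(1)}\times\cdots\times A_{\pi(n)}\bigr)
  &= \mu_{t_1,\dots,t_n}\bigl(A_1\times\cdots\times A_n\bigr)
  && \text{(permutation consistency)} \\
\mu_{t_1,\dots,t_n,t_{n+1}}\bigl(A_1\times\cdots\times A_n\times V\bigr)
  &= \mu_{t_1,\dots,t_n}\bigl(A_1\times\cdots\times A_n\bigr)
  && \text{(marginalization consistency)}
\end{align*}
```

When these hold for every finite collection of indices, the classical theorem yields a unique stochastic process with the given finite-dimensional marginals; roughly, the paper asks for the analogous consistency of finite-dimensional Bayes equations so that the construction carries over to posteriors.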
Abstract:
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood estimates of model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
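As context for the particle approximation studied here, the sketch below implements a plain bootstrap particle filter for a standard stochastic volatility model (the class of model used in the paper's final example). The score/filter-derivative recursion of Poyiadjis et al. builds on such a filter but is not reproduced; the parameter values and model parameterization are illustrative assumptions.

```python
# Bootstrap particle filter for an assumed stochastic volatility model:
#   x_t = phi * x_{t-1} + sigma * v_t,   y_t = beta * exp(x_t / 2) * w_t.
import numpy as np

rng = np.random.default_rng(1)
phi, sigma, beta = 0.95, 0.3, 0.7      # illustrative parameter values
T_steps, N = 200, 500                  # time steps and particle count

# Simulate synthetic data from the model.
x = np.zeros(T_steps)
x[0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2))
for t in range(1, T_steps):
    x[t] = phi * x[t - 1] + sigma * rng.normal()
y = beta * np.exp(x / 2) * rng.normal(size=T_steps)

# Bootstrap filter: propagate with the transition, weight by the likelihood, resample.
particles = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), N)
filter_mean = np.zeros(T_steps)
for t in range(T_steps):
    if t > 0:
        particles = phi * particles + sigma * rng.normal(size=N)
    var_y = beta**2 * np.exp(particles)                    # observation variance
    logw = -0.5 * (np.log(2 * np.pi * var_y) + y[t] ** 2 / var_y)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filter_mean[t] = np.dot(w, particles)
    particles = particles[rng.choice(N, size=N, p=w)]      # multinomial resampling
print("filtered mean of the final log-volatility:", round(filter_mean[-1], 3))
```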
Abstract:
A pivotal problem in Bayesian nonparametrics is the construction of prior distributions on the space M(V) of probability measures on a given domain V. In principle, such distributions on the infinite-dimensional space M(V) can be constructed from their finite-dimensional marginals---the most prominent example being the construction of the Dirichlet process from finite-dimensional Dirichlet distributions. This approach is both intuitive and applicable to the construction of arbitrary distributions on M(V), but also hamstrung by a number of technical difficulties. We show how these difficulties can be resolved if the domain V is a Polish topological space, and give a representation theorem directly applicable to the construction of any probability distribution on M(V) whose first moment measure is well-defined. The proof draws on a projective limit theorem of Bochner, and on properties of set functions on Polish spaces to establish countable additivity of the resulting random probabilities.
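A small simulation can illustrate the finite-dimensional-marginal idea the construction rests on: for a Dirichlet process DP(alpha, H), the vector of masses assigned to a finite partition is Dirichlet distributed with parameters alpha*H(A_j). The sketch below checks the first moments using a truncated stick-breaking representation; the truncation level, base measure, and partition are illustrative assumptions, and this is a sanity check rather than the paper's projective-limit construction.

```python
# Check that DP(alpha, H) masses on a finite partition have the Dirichlet mean
# alpha*H(A_j)/alpha = H(A_j), using a truncated stick-breaking representation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
alpha, K_trunc, n_draws = 2.0, 500, 2000
edges = np.array([-np.inf, -1.0, 0.5, np.inf])     # partition of the real line
H_probs = np.diff(norm.cdf(edges))                 # base measure H = N(0, 1)

masses = np.zeros((n_draws, len(H_probs)))
for i in range(n_draws):
    v = rng.beta(1.0, alpha, K_trunc)                              # stick-breaking
    w = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))
    atoms = rng.normal(0.0, 1.0, K_trunc)                          # atoms ~ H
    cells = np.digitize(atoms, edges[1:-1])
    for c in range(len(H_probs)):
        masses[i, c] = w[cells == c].sum()

print("empirical mean of (P(A_1), P(A_2), P(A_3)):", masses.mean(axis=0).round(3))
print("base-measure probabilities H(A_j)         :", H_probs.round(3))
```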
Abstract:
A type of adaptive, closed-loop controllers known as self-tuning regulators present a robust method of eliminating thermoacoustic oscillations in modern gas turbines. These controllers are able to adapt to changes in operating conditions, and require very little pre-characterisation of the system. One piece of information that is required, however, is the sign of the system's high frequency gain (or its 'instantaneous gain'). This poses a problem: combustion systems are infinite-dimensional, and so this information is never known a priori. A possible solution is to use a Nussbaum gain, which guarantees closed-loop stability without knowledge of the sign of the high frequency gain. Despite the theory for such a controller having been developed in the 1980s, it has never, to the authors' knowledge, been demonstrated experimentally. In this paper, a Nussbaum gain is used to stabilise thermoacoustic instability in a Rijke tube. The sign of the high frequency gain of the system is not required, and the controller is robust to large changes in operating conditions - demonstrated by varying the length of the Rijke tube with time. Copyright © 2008 by Simon J. Illingworth & Aimee S. Morgans.
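A minimal sketch of the Nussbaum-gain idea on a scalar toy plant (not the paper's thermoacoustic model) may help: the adaptation law needs no knowledge of the sign of the control gain, because the Nussbaum function N(k) = k^2 cos(k) sweeps through both positive and negative effective gains until the loop stabilizes. The plant values and integration scheme below are illustrative assumptions.

```python
# Nussbaum-gain adaptation on a scalar toy plant y' = a*y + b*u with the sign of
# b unknown to the controller: u = N(k)*y, k' = y^2, N(k) = k^2*cos(k).
import numpy as np

a, b = 1.0, -2.0            # unstable plant; b's sign is not used by the controller
dt, T = 1e-3, 20.0
y, k, peak = 1.0, 0.0, 0.0
for _ in range(int(T / dt)):
    u = (k**2 * np.cos(k)) * y       # Nussbaum gain sweeps both effective signs
    y += dt * (a * y + b * u)        # forward-Euler plant update
    k += dt * y**2                   # adaptation law
    peak = max(peak, abs(y))
print(f"peak |y| = {peak:.2f}, final |y| = {abs(y):.2e}, final k = {k:.2f}")
```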
Abstract:
The ability of hydrodynamically self-excited jets to lock into strong external forcing is well known. Their dynamics before lock-in and the specific bifurcations through which they lock in, however, are less well known. In this experimental study, we acoustically force a low-density jet around its natural global frequency. We examine its response leading up to lock-in and compare this to that of a forced van der Pol oscillator. We find that, when forced at increasing amplitudes, the jet undergoes a sequence of two nonlinear transitions: (i) from periodicity to 𝕋² quasiperiodicity via a torus-birth bifurcation; and then (ii) from 𝕋² quasiperiodicity to 1:1 lock-in via either a saddle-node bifurcation with frequency pulling, if the forcing and natural frequencies are close together, or a torus-death bifurcation without frequency pulling, but with a gradual suppression of the natural mode, if the two frequencies are far apart. We also find that the jet locks in most readily when forced close to its natural frequency, but that the details contain two asymmetries: the jet (i) locks in more readily and (ii) oscillates more strongly when it is forced below its natural frequency than when it is forced above it. Except for the second asymmetry, all of these transitions, bifurcations and dynamics are accurately reproduced by the forced van der Pol oscillator. This shows that this complex (infinite-dimensional) forced self-excited jet can be modelled reasonably well as a simple (three-dimensional) forced self-excited oscillator. This result adds to the growing evidence that open self-excited flows behave essentially like low-dimensional nonlinear dynamical systems. It also strengthens the universality of such flows, raising the possibility that more of them, including some industrially relevant flames, can be similarly modelled. © 2013 Cambridge University Press.
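For readers who want to reproduce the qualitative picture, the sketch below integrates the forced van der Pol oscillator and reads off the dominant response frequency. The parameter values, forcing settings, and the crude FFT-based lock-in check are illustrative assumptions, not values fitted to the jet.

```python
# Forced van der Pol oscillator: x'' - eps*(1 - x^2)*x' + x = A*sin(omega_f*t).
import numpy as np
from scipy.integrate import solve_ivp

eps, A, omega_f = 1.0, 0.5, 1.1      # illustrative nonlinearity, forcing amplitude/frequency

def vdp(t, s):
    x, v = s
    return [v, eps * (1 - x**2) * v - x + A * np.sin(omega_f * t)]

sol = solve_ivp(vdp, (0.0, 200.0), [0.1, 0.0], max_step=0.01, dense_output=True)

# Crude check for lock-in: dominant frequency of the late-time response.
t = np.linspace(100.0, 200.0, 8192)
x = sol.sol(t)[0]
freqs = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])     # rad per unit time
peak = freqs[np.argmax(np.abs(np.fft.rfft(x - x.mean())))]
print(f"dominant response frequency {peak:.3f} vs forcing {omega_f}")
```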
Abstract:
This paper explores the use of Monte Carlo techniques in deterministic nonlinear optimal control. Inter-dimensional population Markov Chain Monte Carlo (MCMC) techniques are proposed to solve the nonlinear optimal control problem. The linear quadratic and Acrobot problems are studied to demonstrate the successful application of the relevant techniques.
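A minimal sketch of sampling-based optimal control in this spirit: Metropolis-Hastings over an open-loop control sequence, targeting a Boltzmann-like density exp(-J(u)/tau) for a small linear-quadratic problem (a double integrator driven to the origin). This is a generic illustration, not the inter-dimensional population MCMC scheme of the paper, and the problem data, temperature tau, and proposal scale are illustrative assumptions.

```python
# Metropolis-Hastings over an open-loop control sequence for a double integrator,
# targeting exp(-J(u)/tau). Generic illustration of sampling-based optimal control.
import numpy as np

rng = np.random.default_rng(3)
H, dt = 50, 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])          # position-velocity dynamics
B = np.array([0.0, dt])
x0 = np.array([1.0, 0.0])
Q, R, Qf = np.diag([1.0, 0.1]), 0.01, np.diag([100.0, 10.0])

def cost(u):
    x, J = x0.copy(), 0.0
    for k in range(H):
        J += x @ Q @ x + R * u[k] ** 2
        x = A @ x + B * u[k]
    return J + x @ Qf @ x

tau, step = 0.5, 0.1
u, J = np.zeros(H), cost(np.zeros(H))
best_u, best_J = u.copy(), J
for _ in range(20000):
    prop = u + step * rng.normal(size=H)       # random-walk proposal
    Jp = cost(prop)
    if Jp < J or rng.uniform() < np.exp(-(Jp - J) / tau):   # MH accept/reject
        u, J = prop, Jp
        if J < best_J:
            best_u, best_J = u.copy(), J
print("initial cost:", round(cost(np.zeros(H)), 2), " best sampled cost:", round(best_J, 2))
```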
Abstract:
On a daily basis, humans interact with a vast range of objects and tools. A class of tasks that can pose a serious challenge to our motor skills is those that involve manipulating objects with internal degrees of freedom, such as when folding laundry or using a lasso. Here, we use the framework of optimal feedback control to make predictions of how humans should interact with such objects. We confirm the predictions experimentally in a two-dimensional object manipulation task, in which subjects learned to control six different objects with complex dynamics. We show that the non-intuitive behavior observed when controlling objects with internal degrees of freedom can be accounted for by a simple cost function representing a trade-off between effort and accuracy. In addition to using a simple linear, point-mass optimal control model, we also used an optimal control model that considers the non-linear dynamics of the human arm. We find that the more realistic optimal control model captures aspects of the data that cannot be accounted for by the linear model or other previous theories of motor control. The results suggest that our everyday interactions with objects can be understood by optimality principles and advocate the use of more realistic optimal control models for the study of human motor neuroscience.
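As a concrete instance of the simple linear model mentioned here, the sketch below solves a finite-horizon discrete-time LQR for a one-dimensional hand coupled to an object through a spring-damper (one internal degree of freedom), with a cost that trades terminal accuracy against integrated effort. The masses, coupling, weights, and horizon are illustrative assumptions, not the task or model parameters used in the study.

```python
# Finite-horizon LQR for a 1-D hand (actuated point mass) coupled to an object
# through a spring-damper; cost trades terminal accuracy against control effort.
import numpy as np

dt, H = 0.01, 200
m_h, m_o, k_s, c_s = 1.0, 0.5, 20.0, 1.0       # illustrative masses and coupling
# State x = [p_hand, v_hand, p_obj, v_obj]; control u = force on the hand.
Ac = np.array([[0.0, 1.0, 0.0, 0.0],
               [-k_s / m_h, -c_s / m_h, k_s / m_h, c_s / m_h],
               [0.0, 0.0, 0.0, 1.0],
               [k_s / m_o, c_s / m_o, -k_s / m_o, -c_s / m_o]])
Bc = np.array([[0.0], [1.0 / m_h], [0.0], [0.0]])
A, B = np.eye(4) + dt * Ac, dt * Bc            # Euler discretization

Q = np.zeros((4, 4))                           # no running state cost
R = np.array([[1e-4]])                         # effort penalty
Qf = np.diag([0.0, 1.0, 100.0, 1.0])           # accuracy: object at target, at rest

# Backward Riccati recursion for time-varying feedback gains u_k = -K_k x_k.
P, gains = Qf.copy(), [None] * H
for k in reversed(range(H)):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains[k] = K

x = np.array([0.2, 0.0, 0.2, 0.0])             # start 0.2 m from the target (origin)
for k in range(H):
    u = -(gains[k] @ x)
    x = A @ x + (B @ u).ravel()
print("final object position error:", round(float(x[2]), 4))
```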
Abstract:
Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models. © 2010 Nagengast et al.
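The risk-sensitive criterion referred to here is commonly written as J_theta = (1/theta) * log E[exp(theta*C)], which for small theta behaves like E[C] + (theta/2)*Var[C]; theta > 0 is risk-averse and theta < 0 risk-seeking. The toy Monte Carlo sketch below shows how the ranking of two strategies with equal mean cost but different variance flips with theta; the distributions and theta values are illustrative assumptions, not the experimental quantities.

```python
# Risk-sensitive criterion J_theta = (1/theta) * log E[exp(theta * C)] compared
# for two toy strategies with equal mean cost but different cost variance.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
cost_a = rng.normal(10.0, 1.0, n)      # low-variance strategy
cost_b = rng.normal(10.0, 4.0, n)      # high-variance strategy, same mean

def j_theta(c, theta):
    if theta == 0.0:
        return c.mean()                                     # risk-neutral limit
    mu = c.mean()
    return mu + np.log(np.mean(np.exp(theta * (c - mu)))) / theta

for theta in (-0.1, 0.0, 0.1):                              # seeking / neutral / averse
    print(f"theta={theta:+.1f}  J_A={j_theta(cost_a, theta):6.2f}  "
          f"J_B={j_theta(cost_b, theta):6.2f}")
```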
Abstract:
A new experimental articulated vehicle with computer-controlled suspensions is used to investigate the benefits of active roll control for heavy vehicles. The mechanical hardware, the instrumentation, and the distributed control architecture are detailed. A simple roll-plane model is developed and validated against experimental data, and used to design a controller based on lateral acceleration feedback. The controller is implemented and tested on the experimental vehicle. By tilting both the tractor drive axle and the trailer inwards, substantial reductions in normalized lateral load transfer are obtained, both in steady state and transient conditions. Power requirements are also considered. © IMechE 2005.
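A heavily simplified, purely illustrative sketch of lateral-acceleration feedback in a one-degree-of-freedom roll-plane model is given below: the active anti-roll moment is proportional to measured lateral acceleration, and leaning the body into the turn reduces the quasi-static normalized lateral load transfer. The model structure, every parameter, and the load-transfer formula are toy assumptions, not the validated model or controller of the paper.

```python
# Toy single-DOF roll-plane model with an active anti-roll moment proportional to
# lateral acceleration; all values and the load-transfer formula are toy assumptions.
import numpy as np

m, h, track, g = 20000.0, 1.8, 2.0, 9.81       # sprung mass, CG height, track width
I, k_phi, c_phi = 40000.0, 8.0e5, 6.0e4        # roll inertia, stiffness, damping
K_a = 6.0e4                                    # active gain: N*m per (m/s^2) of a_y

def simulate(a_y, active, dt=1e-3, t_end=10.0):
    phi = phid = 0.0                           # roll angle (positive = lean outward)
    for _ in range(int(t_end / dt)):
        u = -K_a * a_y if active else 0.0      # anti-roll moment from a_y feedback
        phidd = (m * a_y * h - k_phi * phi - c_phi * phid + u) / I
        phid += dt * phidd
        phi += dt * phid
    # Quasi-static normalized lateral load transfer with the CG offset h*sin(phi).
    ltr = 2 * h * (a_y * np.cos(phi) + g * np.sin(phi)) / (g * track)
    return phi, ltr

for active in (False, True):
    phi, ltr = simulate(a_y=3.0, active=active)
    print(f"active={active}:  roll {np.degrees(phi):+.2f} deg,  load transfer {ltr:.2f}")
```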
Abstract:
The purpose of this paper is to highlight the central role that the time asymmetry of stability plays in feedback control. We show that this provides a new perspective on the use of doubly-infinite or semi-infinite time axes for signal spaces in control theory. We then focus on the implication of this time asymmetry in modeling uncertainty, regulation and robust control. We point out that modeling uncertainty and the ease of control depend critically on the direction of time. We finally discuss the relationship of this control-based time arrow with the well-known arrows of time in physics. © 2008 IEEE.
Abstract:
Zeno behavior is a dynamic phenomenon unique to hybrid systems in which an infinite number of discrete transitions occurs in a finite amount of time. This behavior commonly arises in mechanical systems undergoing impacts and optimal control problems, but its characterization for general hybrid systems is not completely understood. The goal of this paper is to develop a stability theory for Zeno hybrid systems that parallels classical Lyapunov theory; that is, we present Lyapunov-like sufficient conditions for Zeno behavior obtained by mapping solutions of complex hybrid systems to solutions of simpler Zeno hybrid systems defined on the first quadrant of the plane. These conditions are applied to Lagrangian hybrid systems, which model mechanical systems undergoing impacts, yielding simple sufficient conditions for Zeno behavior. Finally, the results are applied to robotic bipedal walking. © 2012 IEEE.
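The standard concrete example of Zeno behavior in a Lagrangian hybrid system is the bouncing ball, sketched below: with a restitution coefficient between 0 and 1, infinitely many impacts accumulate at a finite Zeno time given by a geometric series. The drop height and restitution value are illustrative.

```python
# Bouncing ball: impact times accumulate at a finite Zeno time for 0 < e < 1.
import math

g, e, h0 = 9.81, 0.8, 1.0                 # gravity, restitution, drop height
t = math.sqrt(2 * h0 / g)                 # first impact time
v = math.sqrt(2 * g * h0)                 # speed just before the first impact
impacts = [t]
for _ in range(30):                       # successive flight phases
    v *= e                                # impact map: speed scaled by e
    t += 2 * v / g                        # duration of the next parabolic flight
    impacts.append(t)

zeno = impacts[0] + 2 * e * math.sqrt(2 * g * h0) / (g * (1 - e))   # geometric series
print(f"impact #{len(impacts)} at t = {impacts[-1]:.4f} s; Zeno time = {zeno:.4f} s")
```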