97 results for Subspace Filter Diagonalization
in the Cambridge University Engineering Department Publications Database
Abstract:
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood estimates of the model parameters, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed, and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. $\mathbb{L}_p$ bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these $\mathbb{L}_p$ bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
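To make the objects in this abstract concrete, the following is a minimal illustrative sketch, not the algorithm of Poyiadjis et al. [2011]: a bootstrap particle filter for a simple stochastic volatility model that also carries a crude path-space estimate of the filter derivative (score) with respect to the persistence parameter, the kind of naive estimate whose degradation over time motivated the improved algorithm. The model, parameter values, and function names below are assumptions made for illustration only.

import numpy as np

def sv_particle_filter_score(y, phi, sigma, beta, n_particles=500, seed=0):
    # Bootstrap particle filter for x_t = phi*x_{t-1} + sigma*v_t,
    # y_t = beta*exp(x_t/2)*w_t, carrying a naive path-space (Fisher identity)
    # estimate of d/dphi log p(y_{1:t}).
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2), n_particles)  # stationary initialisation
    alpha = np.zeros(n_particles)   # accumulated d/dphi log transition density along each path
    log_lik, scores = 0.0, []
    for t in range(len(y)):
        x_prev = x
        x = phi * x_prev + sigma * rng.normal(size=n_particles)      # propagate particles
        alpha = alpha + (x - phi * x_prev) * x_prev / sigma**2       # derivative of log N(x; phi*x_prev, sigma^2) w.r.t. phi
        obs_var = beta**2 * np.exp(x)                                 # variance of y_t given x_t
        logw = -0.5 * (np.log(2.0 * np.pi * obs_var) + y[t]**2 / obs_var)
        m = logw.max()
        w = np.exp(logw - m)
        log_lik += m + np.log(w.mean())                               # log-likelihood increment
        w /= w.sum()
        scores.append(np.sum(w * alpha))                              # score estimate at time t
        idx = rng.choice(n_particles, size=n_particles, p=w)          # multinomial resampling
        x, alpha = x[idx], alpha[idx]                                 # resample states and path derivatives
    return log_lik, np.array(scores)

# Usage: simulate synthetic data and inspect the running score estimate.
rng = np.random.default_rng(1)
phi_true, sigma, beta, T = 0.95, 0.3, 0.7, 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + sigma * rng.normal()
y = beta * np.exp(x / 2.0) * rng.normal(size=T)
log_lik, scores = sv_particle_filter_score(y, phi=0.9, sigma=sigma, beta=beta)
print(log_lik, scores[-1])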
Abstract:
Optimal Bayesian multi-target filtering is in general computationally impractical owing to the high dimensionality of the multi-target state. The Probability Hypothesis Density (PHD) filter propagates the first moment of the multi-target posterior distribution. While this reduces the dimensionality of the problem, the PHD filter still involves intractable integrals in many cases of interest. Several authors have proposed Sequential Monte Carlo (SMC) implementations of the PHD filter. However, these implementations are the equivalent of the Bootstrap Particle Filter, and the latter is well known to be inefficient. Drawing on ideas from the Auxiliary Particle Filter (APF), an SMC implementation of the PHD filter which employs auxiliary variables to enhance its efficiency was proposed by Whiteley et al. Numerical examples were presented for two scenarios, including a challenging nonlinear observation model, to support the claim of improved efficiency. This paper studies the theoretical properties of this auxiliary particle implementation. $\mathbb{L}_p$ error bounds are established from which almost sure convergence follows.
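As a rough guide to what such results look like, an $\mathbb{L}_p$ error bound for a particle approximation typically takes the following schematic form (the notation, constants, and conditions shown are illustrative, not those established in the paper):
\[
  \mathbb{E}\Big[\big|\widehat{\Phi}^{N}_{n}(\varphi) - \Phi_{n}(\varphi)\big|^{p}\Big]^{1/p}
  \;\le\; \frac{c_{p,n}\,\|\varphi\|_{\infty}}{\sqrt{N}},
\]
where $N$ is the number of particles, $\varphi$ a bounded test function, $\Phi_{n}$ the exact quantity of interest at time $n$, and $\widehat{\Phi}^{N}_{n}$ its particle estimate. Almost sure convergence then follows, for instance, by combining such a bound for $p > 2$ with Markov's inequality and the Borel--Cantelli lemma.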
Abstract:
Optimal Bayesian multi-target filtering is, in general, computationally impractical owing to the high dimensionality of the multi-target state. The Probability Hypothesis Density (PHD) filter propagates the first moment of the multi-target posterior distribution. While this reduces the dimensionality of the problem, the PHD filter still involves intractable integrals in many cases of interest. Several authors have proposed Sequential Monte Carlo (SMC) implementations of the PHD filter. However, these implementations are the equivalent of the Bootstrap Particle Filter, and the latter is well known to be inefficient. Drawing on ideas from the Auxiliary Particle Filter (APF), we present an SMC implementation of the PHD filter which employs auxiliary variables to enhance its efficiency. Numerical examples are presented for two scenarios, including a challenging nonlinear observation model.
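To make the baseline concrete, here is a minimal sketch of the bootstrap-style SMC implementation of the PHD filter that the auxiliary-variable scheme is designed to improve upon, for a one-dimensional constant-position target model with Gaussian noise. All model parameters (survival and detection probabilities, clutter intensity, birth intensity) and function names are illustrative assumptions, not taken from the paper.

import numpy as np

def gauss(z, mean, var):
    return np.exp(-0.5 * (z - mean)**2 / var) / np.sqrt(2.0 * np.pi * var)

def phd_step(particles, weights, measurements, rng,
             p_survive=0.99, p_detect=0.9, clutter_intensity=1e-3,
             birth_rate=0.2, n_birth=100, region=(-50.0, 50.0),
             motion_std=1.0, obs_var=1.0):
    # --- prediction: surviving particles plus uniformly drawn birth particles ---
    pred = particles + motion_std * rng.normal(size=particles.size)
    pred_w = p_survive * weights
    births = rng.uniform(*region, size=n_birth)
    birth_w = np.full(n_birth, birth_rate / n_birth)
    x = np.concatenate([pred, births])
    w = np.concatenate([pred_w, birth_w])
    # --- update: PHD weight correction over the measurement set ---
    new_w = (1.0 - p_detect) * w
    for z in measurements:
        g = p_detect * gauss(z, x, obs_var)
        new_w += w * g / (clutter_intensity + np.sum(g * w))
    # --- resample back to a fixed number of particles, preserving total mass ---
    mass = new_w.sum()                       # estimated expected number of targets
    n_out = particles.size
    idx = rng.choice(x.size, size=n_out, p=new_w / mass)
    return x[idx], np.full(n_out, mass / n_out), mass

# Usage: one target near x = 5 plus a spurious (clutter) measurement.
rng = np.random.default_rng(0)
particles = rng.uniform(-50, 50, 1000)
weights = np.full(1000, 1.0 / 1000)          # initial intensity with total mass 1
particles, weights, n_est = phd_step(particles, weights, [5.2, -17.0], rng)
print("estimated expected number of targets:", n_est)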