144 results for Classification, Markov chain Monte Carlo, k-nearest neighbours


Relevance:

100.00%

Publisher:

Abstract:

This paper discusses the problem of restoring a digital input signal that has been degraded by an unknown FIR filter in noise, using the Gibbs sampler. A method for drawing a random sample of a sequence of bits is presented; this is shown to have faster convergence than a scheme by Chen and Li, which draws bits independently. ©1998 IEEE.
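The core idea can be sketched as a bitwise Gibbs sweep: each bit is drawn from its full conditional given the other bits and the observations. The sketch below is a minimal illustration with hypothetical numbers, and it fixes the FIR filter as known (the paper treats it as unknown), so it is not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: binary input s, short FIR filter h, additive Gaussian noise.
# The paper treats h as unknown; here it is fixed so the sketch stays short.
h = np.array([1.0, 0.5])                       # assumed filter taps
s_true = rng.integers(0, 2, 20).astype(float)  # hidden bit sequence
sigma = 0.3
y = np.convolve(s_true, h)[:20] + sigma * rng.normal(size=20)

def gibbs_bits(y, h, sigma, n_sweeps=200):
    """Sweep the bits, drawing each from its full conditional given the rest."""
    n = len(y)
    s = rng.integers(0, 2, n).astype(float)
    for _ in range(n_sweeps):
        for i in range(n):
            ll = np.empty(2)
            for b in (0, 1):
                s[i] = b
                resid = y - np.convolve(s, h)[:n]
                ll[b] = -0.5 * np.sum(resid ** 2) / sigma ** 2
            p1 = 1.0 / (1.0 + np.exp(np.clip(ll[0] - ll[1], -700.0, 700.0)))
            s[i] = float(rng.random() < p1)    # sample the bit
    return s

s_hat = gibbs_bits(y, h, sigma)
```

Sampling whole blocks of bits jointly, as in the paper, mixes faster than this one-bit-at-a-time sweep, which in turn beats drawing bits independently.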

Relevance:

100.00%

Publisher:

Abstract:

This paper explores the use of Monte Carlo techniques in deterministic nonlinear optimal control. Inter-dimensional population Markov Chain Monte Carlo (MCMC) techniques are proposed to solve the nonlinear optimal control problem. The linear quadratic and Acrobot problems are studied to demonstrate the successful application of the relevant techniques.
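The underlying trick is to treat exp(-J(u)/temp) as an unnormalised density over control sequences, so sampling concentrates on low-cost controls. Below is a minimal single-chain random-walk Metropolis sketch on a hypothetical scalar linear-quadratic problem; the paper's inter-dimensional population samplers are far richer than this.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar LQ problem: x_{t+1} = a*x_t + b*u_t with cost
# J(u) = sum_t (q*x_t^2 + r*u_t^2) + q*x_T^2.
a, b, q, r, horizon = 0.9, 1.0, 1.0, 0.1, 10

def cost(u, x0=1.0):
    x, J = x0, 0.0
    for ut in u:
        J += q * x ** 2 + r * ut ** 2
        x = a * x + b * ut
    return J + q * x ** 2

def metropolis_control(n_iter=5000, temp=0.01, step=0.1):
    """Random-walk Metropolis over the control sequence, targeting exp(-J/temp)."""
    u = np.zeros(horizon)
    J = cost(u)
    best_u, best_J = u.copy(), J
    for _ in range(n_iter):
        u_prop = u + step * rng.normal(size=horizon)
        J_prop = cost(u_prop)
        if np.log(rng.random()) < (J - J_prop) / temp:   # accept/reject
            u, J = u_prop, J_prop
            if J < best_J:
                best_u, best_J = u.copy(), J
    return best_u, best_J

u_star, J_star = metropolis_control()
```

Lowering the temperature sharpens the target around the optimal control, at the price of slower mixing; population methods mitigate exactly that trade-off.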

Relevance:

100.00%

Publisher:

Abstract:

We present algorithms for tracking and reasoning about local traits at the subsystem level, based on the observed emergent behavior of multiple coordinated groups in potentially cluttered environments. Our proposed Bayesian inference schemes, which are primarily based on sequential (Markov chain) Monte Carlo methods, include: 1) an evolving network-based multiple object tracking algorithm that is capable of categorizing objects into groups; 2) a multiple cluster tracking algorithm for dealing with a prohibitively large number of objects; and 3) a causality inference framework for identifying dominant agents based exclusively on their observed trajectories. We use these as building blocks for developing a unified tracking and behavioral reasoning paradigm. Both synthetic and realistic examples are provided to demonstrate the derived concepts. © 2013 Springer-Verlag Berlin Heidelberg.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we study parameter estimation for time series with asymmetric α-stable innovations. The proposed methods use a Poisson sum series representation (PSSR) for the asymmetric α-stable noise to express the process in a conditionally Gaussian framework. That allows us to implement Bayesian parameter estimation using Markov chain Monte Carlo (MCMC) methods. We further enhance the series representation by introducing a novel approximation of the series residual terms in which we are able to characterise the mean and variance of the approximation. Simulations illustrate the proposed framework applied to linear time series, estimating the model parameter values and model order P for an autoregressive (AR(P)) model driven by asymmetric α-stable innovations. © 2012 IEEE.
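The conditionally Gaussian structure comes from the series form eps = sum_j Gamma_j^(-1/alpha) * W_j: given the Poisson arrival times Gamma_j, eps is a Gaussian sum. The sketch below shows only this truncated symmetric series with illustrative constants; the paper's asymmetric form, scaling constants, and residual approximation are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def pssr_stable(alpha, n_terms=100, size=1000):
    """Truncated Poisson sum series sketch of symmetric alpha-stable noise:
    eps = sum_j Gamma_j**(-1/alpha) * W_j, where Gamma_j are arrival times of
    a unit-rate Poisson process and W_j are i.i.d. Gaussian.  Conditional on
    the Gamma_j, eps is Gaussian -- the property that enables conditionally
    Gaussian MCMC inference."""
    eps = np.empty(size)
    for k in range(size):
        gammas = np.cumsum(rng.exponential(size=n_terms))  # Poisson arrivals
        w = rng.normal(size=n_terms)
        eps[k] = np.sum(gammas ** (-1.0 / alpha) * w)
    return eps

samples = pssr_stable(alpha=1.5)
```

Conditioning on the latent Gamma_j turns an intractable stable likelihood into a Gaussian one, which is why standard Gaussian MCMC updates become available.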

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present an expectation-maximisation (EM) algorithm for maximum likelihood estimation in multiple target tracking (MTT) models with Gaussian linear state-space dynamics. We show that estimation of sufficient statistics for EM in a single Gaussian linear state-space model can be extended to the MTT case along with a Monte Carlo approximation for inference of unknown associations of targets. The stochastic approximation EM algorithm that we present here can be used along with any Monte Carlo method which has been developed for tracking in MTT models, such as Markov chain Monte Carlo and sequential Monte Carlo methods. We demonstrate the performance of the algorithm with a simulation. © 2012 ISIF (International Society of Information Fusion).
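The single-target building block can be sketched as follows: the E-step runs a Kalman filter and RTS smoother, and the M-step updates a parameter in closed form from the smoothed statistics. The model and all numbers below are hypothetical, and the Monte Carlo association step that extends this to multiple targets is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scalar model: x_t = a*x_{t-1} + w_t,  y_t = x_t + v_t, v_t ~ N(0, R).
# EM estimates the observation noise variance R.
a, Q, R_true, T = 0.95, 0.1, 0.5, 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(Q) * rng.normal()
y = x + np.sqrt(R_true) * rng.normal(size=T)

def em_obs_noise(y, n_iter=30, R=2.0):
    T = len(y)
    for _ in range(n_iter):
        # E-step, forward pass: Kalman filter
        mf, Pf = np.zeros(T), np.zeros(T)
        m, P = 0.0, 1.0
        for t in range(T):
            if t > 0:
                m, P = a * m, a * a * P + Q            # predict
            K = P / (P + R)                            # Kalman gain
            m, P = m + K * (y[t] - m), (1 - K) * P     # update
            mf[t], Pf[t] = m, P
        # E-step, backward pass: RTS smoother
        ms, Ps = mf.copy(), Pf.copy()
        for t in range(T - 2, -1, -1):
            Pp = a * a * Pf[t] + Q
            G = a * Pf[t] / Pp
            ms[t] = mf[t] + G * (ms[t + 1] - a * mf[t])
            Ps[t] = Pf[t] + G * G * (Ps[t + 1] - Pp)
        # M-step: closed-form maximiser of the expected complete-data log-likelihood
        R = np.mean((y - ms) ** 2 + Ps)
    return R

R_hat = em_obs_noise(y)
```

In the MTT setting the smoothed statistics are additionally averaged over sampled target-measurement associations, which is where the stochastic approximation enters.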

Relevance:

100.00%

Publisher:

Abstract:

This work addresses the problem of estimating the optimal value function in a Markov decision process from observed state-action pairs. We adopt a Bayesian approach to inference, which allows both the model to be estimated and predictions about actions to be made in a unified framework, providing a principled approach to mimicry of a controller on the basis of observed data. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution over the optimal value function. The sampler includes a parameter expansion step, which is shown to be essential for good convergence properties of the MCMC sampler. As an illustration, the method is applied to learning a human controller.
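The object being inferred is the optimal value function V*, the fixed point of the Bellman equation V*(s) = max_a [r(s,a) + gamma * sum_s' P(s'|s,a) V*(s')]. A tiny value-iteration example (all numbers hypothetical, and unrelated to the paper's sampler) makes that object concrete:

```python
import numpy as np

# Tiny two-state, two-action MDP with made-up dynamics and rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[s, a, s']
              [[0.8, 0.2], [0.1, 0.9]]])
r = np.array([[1.0, 0.0],                  # r[s, a]
              [0.0, 2.0]])
gamma = 0.9

V = np.zeros(2)
for _ in range(500):
    V = np.max(r + gamma * P @ V, axis=1)  # Bellman backup, iterated to a fixed point
```

Here V* is computed exactly from a known model; the paper's contribution is a posterior distribution over V* when only state-action pairs, not the model, are observed.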

Relevance:

100.00%

Publisher:

Abstract:

We consider the inverse reinforcement learning problem, that is, the problem of learning from, and then predicting or mimicking, a controller on the basis of state/action data. We propose a statistical model for such data, derived from the structure of a Markov decision process. Adopting a Bayesian approach to inference, we show how latent variables of the model can be estimated, and how predictions about actions can be made, in a unified framework. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution. The sampler includes a parameter expansion step, which is shown to be essential for good convergence properties of the MCMC sampler. As an illustration, the method is applied to learning a human controller.
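A stripped-down version of this inference problem: observe actions drawn from a softmax policy and sample the posterior over the reward weight. The model, features, and generic random-walk Metropolis sampler below are all illustrative stand-ins for the paper's purpose-built sampler with parameter expansion.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical one-step problem: action a in {0, 1} chosen with probability
# softmax(theta * phi(a)), features phi = [0, 1], unknown weight theta.
theta_true = 1.5
phi = np.array([0.0, 1.0])

def log_softmax(theta):
    z = theta * phi
    return z - np.log(np.sum(np.exp(z)))

actions = (rng.random(300) < np.exp(log_softmax(theta_true))[1]).astype(int)

def log_post(theta):
    prior = -0.5 * theta ** 2 / 10.0              # broad Gaussian prior
    return prior + np.sum(log_softmax(theta)[actions])

def metropolis(n_iter=4000, step=0.3):
    """Generic random-walk Metropolis over theta given the observed actions."""
    theta, lp = 0.0, log_post(0.0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain[1000:]                            # discard burn-in

chain = metropolis()
```

The posterior concentrates near the weight that generated the actions, which is the sense in which the controller can then be predicted or mimicked.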

Relevance:

100.00%

Publisher:

Abstract:

We present the Gaussian process density sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a distribution defined by a density that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We describe two such MCMC methods. Both methods also allow inference of the hyperparameters of the Gaussian process.

Relevance:

100.00%

Publisher:

Abstract:

Many probabilistic models introduce strong dependencies between variables using a latent multivariate Gaussian distribution or a Gaussian process. We present a new Markov chain Monte Carlo algorithm for performing inference in models with multivariate Gaussian priors. Its key properties are: 1) it has simple, generic code applicable to many models, 2) it has no free parameters, 3) it works well for a variety of Gaussian process based models. These properties make our method ideal for use while model building, removing the need to spend time deriving and tuning updates for more complex algorithms.
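The description (parameter-free, generic for models with multivariate Gaussian priors) matches elliptical slice sampling; assuming that is the algorithm meant, here is a minimal scalar sketch: prior f ~ N(0, 1), one Gaussian observation y = 1.0 with unit noise, so the exact posterior is N(0.5, 0.5).

```python
import numpy as np

rng = np.random.default_rng(6)

y = 1.0

def log_lik(f):
    return -0.5 * (y - f) ** 2

def ess_step(f):
    """One elliptical slice sampling update: propose on the ellipse through f
    and a fresh prior draw nu, shrinking the angle bracket until accepted."""
    nu = rng.normal()                          # draw from the N(0, 1) prior
    log_u = log_lik(f) + np.log(rng.random())  # slice level under the likelihood
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_u:
            return f_new
        if theta < 0.0:                        # shrink the bracket toward 0
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

f = 0.0
chain = np.empty(20000)
for i in range(20000):
    f = ess_step(f)
    chain[i] = f
chain = chain[2000:]                           # discard burn-in
```

Note that the update has no step size to tune: the bracket-shrinking loop always terminates (at theta = 0 the proposal returns to f), which is what makes the method usable as a drop-in while model building.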

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we describe models and algorithms for detection and tracking of group and individual targets. We develop two novel group dynamical models, within a continuous time setting, that aim to mimic behavioural properties of groups. We also describe two possible ways of modeling interactions between closely spaced targets, using Markov random fields (MRF) and repulsive forces. These can be combined with a group structure transition model to create realistic evolving group models. We use a Markov chain Monte Carlo (MCMC)-Particles Algorithm to perform sequential inference. Computer simulations demonstrate the ability of the algorithm to detect and track targets within groups, as well as infer the correct group structure over time. ©2008 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

We consider the problem of blind multiuser detection. We adopt a Bayesian approach in which unknown parameters are considered random and integrated out. Computing the maximum a posteriori estimate of the input data sequence then requires solving a combinatorial optimization problem. We propose to apply the Cross-Entropy method recently introduced by Rubinstein, and compare its performance with Markov chain Monte Carlo. For similar bit error rate performance, we demonstrate that the Cross-Entropy method outperforms a generic Markov chain Monte Carlo method in terms of computation time.
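The Cross-Entropy method can be sketched on a stand-in combinatorial problem: recover a hidden bit vector b* by maximising S(b) = -||y - H b||^2, loosely mirroring the MAP sequence-detection objective. The mixing matrix, population sizes, and smoothing constant below are all illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical noiseless detection problem with a random mixing matrix H.
n = 12
b_star = rng.integers(0, 2, n).astype(float)
H = rng.normal(size=(n, n))
y = H @ b_star

def score(b):
    resid = y - H @ b
    return -resid @ resid

p = np.full(n, 0.5)                       # Bernoulli parameters over the bits
for _ in range(40):
    pop = (rng.random((300, n)) < p).astype(float)   # sample a population
    scores = np.array([score(b) for b in pop])
    elite = pop[np.argsort(scores)[-30:]]            # keep the top 10%
    p = 0.9 * elite.mean(axis=0) + 0.1 * p           # smoothed CE update

b_hat = (p > 0.5).astype(float)           # round the final sampling distribution
```

Each iteration refits the Bernoulli sampling distribution to the elite samples, steering the population toward high-scoring bit vectors without any Markov chain at all, which is the source of the running-time advantage the paper reports.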