41 results for decision making under urgency

in Cambridge University Engineering Department Publications Database


Relevance:

100.00%

Publisher:

Abstract:

Motor behavior may be viewed as a problem of maximizing the utility of movement outcome in the face of sensory, motor and task uncertainty. Viewed in this way, and allowing for the availability of prior knowledge in the form of a probability distribution over possible states of the world, the choice of a movement plan and strategy for motor control becomes an application of statistical decision theory. This point of view has proven successful in recent years in accounting for movement under risk, inferring the loss function used in motor tasks, and explaining motor behavior in a wide variety of circumstances.
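The framing above can be made concrete with a small numerical sketch: choose an aim point for a reaching movement so as to maximise expected gain under Gaussian execution noise. The gain landscape (a reward circle overlapped by a penalty circle), the noise level and all constants below are illustrative assumptions, not values taken from the cited work.

```python
# Minimal sketch of movement planning as expected-utility maximisation.
# The gain landscape and motor-noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: hitting inside the reward circle earns +100,
# landing inside the overlapping penalty circle costs -500.
REWARD_CENTER = np.array([0.0, 0.0])
PENALTY_CENTER = np.array([-1.5, 0.0])
RADIUS = 1.0
MOTOR_NOISE_SD = 0.6   # isotropic execution noise (assumed)

def gain(endpoints):
    """Utility of each movement endpoint under the assumed gain landscape."""
    in_reward = np.linalg.norm(endpoints - REWARD_CENTER, axis=1) < RADIUS
    in_penalty = np.linalg.norm(endpoints - PENALTY_CENTER, axis=1) < RADIUS
    return 100.0 * in_reward - 500.0 * in_penalty

def expected_gain(aim_point, n_samples=20000):
    """Monte Carlo estimate of the expected utility of a planned aim point."""
    endpoints = aim_point + MOTOR_NOISE_SD * rng.standard_normal((n_samples, 2))
    return gain(endpoints).mean()

# Evaluate candidate aim points along the x-axis: the utility-maximising plan
# shifts away from the penalty region, trading hit probability for safety.
candidates = np.linspace(-0.5, 1.5, 21)
scores = [expected_gain(np.array([x, 0.0])) for x in candidates]
print(f"best aim offset from target centre: {candidates[int(np.argmax(scores))]:.2f}")
```

With the assumed penalty region to the left of the target, the maximising aim point shifts rightwards of the target centre, accepting a lower hit probability in exchange for a lower chance of incurring the penalty.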

Relevance:

100.00%

Publisher:

Abstract:

© 2012 Elsevier Ltd. Motor behavior may be viewed as a problem of maximizing the utility of movement outcome in the face of sensory, motor and task uncertainty. Viewed in this way, and allowing for the availability of prior knowledge in the form of a probability distribution over possible states of the world, the choice of a movement plan and strategy for motor control becomes an application of statistical decision theory. This point of view has proven successful in recent years in accounting for movement under risk, inferring the loss function used in motor tasks, and explaining motor behavior in a wide variety of circumstances.

Relevance:

100.00%

Publisher:

Abstract:

Many problems in control and signal processing can be formulated as sequential decision problems for general state space models. However, except for some simple models, analytical solutions are unavailable and one has to resort to approximation. In this thesis, we investigate problems in which Sequential Monte Carlo (SMC) methods can be combined with a gradient-based search to provide solutions to online optimisation problems. We summarise the main contributions of the thesis as follows. Chapter 4 focuses on solving the sensor scheduling problem when cast as a controlled Hidden Markov Model. We consider the case in which the state, observation and action spaces are continuous. This general case is important as it is the natural framework for many applications. In sensor scheduling, our aim is to minimise the variance of the estimation error of the hidden state with respect to the action sequence. We present a novel SMC method that uses a stochastic gradient algorithm to find optimal actions. This is in contrast to existing works in the literature that only solve approximations to the original problem. In Chapter 5 we present how SMC can be used to solve a risk-sensitive control problem. We adopt the Feynman-Kac representation of a controlled Markov chain flow and exploit the properties of the logarithmic Lyapunov exponent, which leads to a policy gradient solution for the parameterised problem. The resulting SMC algorithm follows a structure similar to that of the Recursive Maximum Likelihood (RML) algorithm for online parameter estimation. In Chapters 6, 7 and 8, dynamic graphical models are combined with state space models for the purpose of online decentralised inference. We concentrate on the distributed parameter estimation problem, using two Maximum Likelihood techniques, namely Recursive Maximum Likelihood (RML) and Expectation Maximization (EM). The resulting algorithms can be interpreted as an extension of the Belief Propagation (BP) algorithm to compute likelihood gradients. In order to design an SMC algorithm, Chapter 8 uses a nonparametric approximation of Belief Propagation. The algorithms were successfully applied to solve the sensor localisation problem for sensor networks of small and medium size.
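As a rough illustration of the kind of machinery this summary describes (SMC filtering combined with an online gradient-based search), the sketch below runs a bootstrap particle filter on a scalar linear-Gaussian model and updates one model parameter with a Robbins-Monro step driven by a crude finite-difference approximation of the incremental log-likelihood. The model, constants and finite-difference scheme are assumptions for illustration only; this is not the RML, policy-gradient or sensor-scheduling algorithm developed in the thesis.

```python
# Illustrative sketch: bootstrap particle filter + online stochastic-gradient
# update of one model parameter. All model choices and constants are assumed.
import numpy as np

rng = np.random.default_rng(0)
N, T = 1000, 2000
true_phi, sigma_x, sigma_y = 0.9, 0.5, 1.0
phi_hat, eps = 0.5, 0.1          # initial estimate, finite-difference perturbation

def systematic_resample(w, u):
    """Systematic resampling driven by a shared uniform (variance reduction)."""
    idx = np.searchsorted(np.cumsum(w), (u + np.arange(len(w))) / len(w))
    return np.minimum(idx, len(w) - 1)

def filter_step(particles, y, phi, noise, u):
    """One bootstrap step: propagate, weight, estimate the incremental
    log-likelihood up to an additive constant, then resample."""
    prop = phi * particles + sigma_x * noise
    logw = -0.5 * ((y - prop) / sigma_y) ** 2
    w = np.exp(logw - logw.max())
    loglik = np.log(w.mean()) + logw.max()
    return loglik, prop[systematic_resample(w / w.sum(), u)]

x = 0.0
particles_p = rng.standard_normal(N)   # filter run at phi_hat + eps
particles_m = particles_p.copy()       # filter run at phi_hat - eps

for t in range(T):
    # simulate one step of the true system
    x = true_phi * x + sigma_x * rng.standard_normal()
    y = x + sigma_y * rng.standard_normal()

    # common random numbers for the two perturbed filters
    noise, u = rng.standard_normal(N), rng.random()
    ll_p, particles_p = filter_step(particles_p, y, phi_hat + eps, noise, u)
    ll_m, particles_m = filter_step(particles_m, y, phi_hat - eps, noise, u)

    # Robbins-Monro stochastic gradient step on the incremental log-likelihood
    step = 0.5 / (t + 10) ** 0.7
    phi_hat = float(np.clip(phi_hat + step * (ll_p - ll_m) / (2 * eps), -0.99, 0.99))

print(f"true phi: {true_phi}, online estimate: {phi_hat:.3f}")
```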
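The decentralised-inference chapters build on the sum-product Belief Propagation algorithm. The short sketch below shows plain discrete BP on a chain, checked against brute-force marginalisation; the potentials are arbitrary illustrative values, and the thesis's extension to likelihood gradients and its nonparametric SMC variant are not reproduced here.

```python
# Plain sum-product Belief Propagation on a chain, verified by enumeration.
# Potentials are arbitrary; this is only the base algorithm, not the thesis's.
import itertools
import numpy as np

rng = np.random.default_rng(2)
K, n = 3, 4                                               # states per node, nodes
unary = rng.random((n, K)) + 0.1                          # node potentials
pair = [rng.random((K, K)) + 0.1 for _ in range(n - 1)]   # edge potentials

# forward and backward messages along the chain
fwd = [np.ones(K) for _ in range(n)]
bwd = [np.ones(K) for _ in range(n)]
for i in range(1, n):
    fwd[i] = pair[i - 1].T @ (unary[i - 1] * fwd[i - 1])
for i in range(n - 2, -1, -1):
    bwd[i] = pair[i] @ (unary[i + 1] * bwd[i + 1])

bp_marginals = np.array([unary[i] * fwd[i] * bwd[i] for i in range(n)])
bp_marginals /= bp_marginals.sum(axis=1, keepdims=True)

# brute-force check: enumerate every joint configuration
brute = np.zeros((n, K))
for cfg in itertools.product(range(K), repeat=n):
    p = np.prod([unary[i, cfg[i]] for i in range(n)])
    p *= np.prod([pair[i][cfg[i], cfg[i + 1]] for i in range(n - 1)])
    for i in range(n):
        brute[i, cfg[i]] += p
brute /= brute.sum(axis=1, keepdims=True)

print("max marginal error:", np.abs(bp_marginals - brute).max())
```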