78 results for State Space Analysis


Relevance:

80.00%

Publisher:

Abstract:

Standard algorithms in tracking and other state-space models assume identical and synchronous sampling rates for the state and measurement processes. However, real trajectories of objects are typically characterized by prolonged smooth sections, with sharp, but infrequent, changes. Thus, a more parsimonious representation of a target trajectory may be obtained by direct modeling of maneuver times in the state process, independently of the observation times. This is achieved by assuming the state arrival times to follow a random process, typically specified as Markovian, so that state points may be allocated along the trajectory according to the degree of variation observed. The resulting variable-dimension state inference problem is solved by developing an efficient variable-rate particle filtering algorithm to recursively update the posterior distribution of the state sequence as new data becomes available. The methodology is quite general and can be applied across many models where dynamic model uncertainty occurs on-line. Specific models are proposed for the dynamics of a moving object under internal forcing, expressed in terms of the intrinsic dynamics of the object. The performance of the algorithms with these dynamical models is demonstrated on several challenging maneuvering target tracking problems in clutter. © 2006 IEEE.
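
For readers unfamiliar with the particle filtering machinery this work builds on, a minimal bootstrap particle filter for a fixed-rate, nearly-constant-velocity model is sketched below in Python. It shows only the standard propagate/weight/resample recursion that the variable-rate algorithm generalises; the scalar constant-velocity model, noise levels and particle count are illustrative assumptions, not the paper's models.

# A minimal bootstrap particle filter for a fixed-rate, nearly-constant-velocity
# tracking model. This is only the standard building block that the variable-rate
# algorithm extends; the model and parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=100, dt=1.0, q=0.05, r=0.5):
    """Simulate a 1-D nearly-constant-velocity target observed in noise."""
    x = np.zeros((T, 2))                      # state: [position, velocity]
    x[0] = [0.0, 1.0]
    for t in range(1, T):
        x[t, 0] = x[t - 1, 0] + dt * x[t - 1, 1]
        x[t, 1] = x[t - 1, 1] + np.sqrt(q) * rng.normal()
    y = x[:, 0] + np.sqrt(r) * rng.normal(size=T)
    return x, y

def bootstrap_pf(y, N=500, dt=1.0, q=0.05, r=0.5):
    T = len(y)
    particles = np.zeros((N, 2))
    particles[:, 1] = rng.normal(1.0, 1.0, N)
    est = np.zeros((T, 2))
    for t in range(T):
        if t > 0:
            # Propagate through the (fixed-rate) dynamics.
            particles[:, 0] += dt * particles[:, 1]
            particles[:, 1] += np.sqrt(q) * rng.normal(size=N)
        # Weight by the measurement likelihood, estimate, then resample.
        logw = -0.5 * (y[t] - particles[:, 0]) ** 2 / r
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est[t] = w @ particles
        particles = particles[rng.choice(N, size=N, p=w)]
    return est

x_true, y_obs = simulate()
x_est = bootstrap_pf(y_obs)
print("RMS position error:", np.sqrt(np.mean((x_est[:, 0] - x_true[:, 0]) ** 2)))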

Relevance:

80.00%

Publisher:

Abstract:

We present methods for fixed-lag smoothing using sequential importance sampling (SIS) on a discrete non-linear, non-Gaussian state-space system with unknown parameters. Our particular application is in the field of digital communication systems. Each input data point is taken from a finite set of symbols. We represent transmission media as a fixed filter with a finite impulse response (FIR), hence a discrete state-space system is formed. Conventional Markov chain Monte Carlo (MCMC) techniques such as the Gibbs sampler are unsuitable for this task because they can only perform processing on a batch of data. Data arrives sequentially, so it would seem sensible to process it in this way. In addition, many communication systems are interactive, so there is a maximum level of latency that can be tolerated before a symbol is decoded. We will demonstrate this method by simulation and compare its performance to existing techniques.
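
The fixed-lag idea can be made concrete with a toy Python sketch: binary symbols are passed through a known FIR channel, particles carry recent symbol hypotheses, and the symbol transmitted a fixed number of steps earlier is decided from the current weighted particle set. The channel taps, noise level and lag are assumed values, and the known-channel setting below side-steps the unknown parameters treated in the paper.

# Toy fixed-lag particle detection of ±1 symbols sent through a known FIR channel.
import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 0.5, 0.2])       # assumed FIR channel impulse response
sigma = 0.4                         # measurement noise standard deviation
T, N, lag = 200, 300, 3             # symbols, particles, fixed lag

# Simulate transmitted symbols and received samples.
s = rng.choice([-1.0, 1.0], size=T)
s_pad = np.concatenate([np.zeros(len(h) - 1), s])
clean = np.array([h @ s_pad[t:t + len(h)][::-1] for t in range(T)])
y = clean + sigma * rng.normal(size=T)

# Each particle stores the last len(h)+lag symbol hypotheses.
mem = len(h) + lag
particles = np.zeros((N, mem))
weights = np.ones(N) / N
decisions = np.zeros(T)

for t in range(T):
    # Propose a new symbol for every particle from the uniform symbol prior.
    new = rng.choice([-1.0, 1.0], size=N)
    particles = np.column_stack([particles[:, 1:], new])
    # Weight by the Gaussian likelihood of the received sample.
    pred = particles[:, -len(h):][:, ::-1] @ h
    weights *= np.exp(-0.5 * (y[t] - pred) ** 2 / sigma ** 2)
    weights /= weights.sum()
    # Fixed-lag decision on the symbol transmitted `lag` steps ago.
    if t >= lag:
        decisions[t - lag] = np.sign(weights @ particles[:, -1 - lag])
    # Resample when the effective sample size gets low.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.ones(N) / N

errors = np.sum(decisions[:T - lag] != s[:T - lag])
print(f"symbol errors: {errors} / {T - lag}")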

Relevance:

80.00%

Publisher:

Abstract:

We develop methods for performing filtering and smoothing in non-linear non-Gaussian dynamical models. The methods rely on a particle cloud representation of the filtering distribution which evolves through time using importance sampling and resampling ideas. In particular, novel techniques are presented for generation of random realisations from the joint smoothing distribution and for MAP estimation of the state sequence. Realisations of the smoothing distribution are generated in a forward-backward procedure, while the MAP estimation procedure can be performed in a single forward pass of the Viterbi algorithm applied to a discretised version of the state space. An application to spectral estimation for time-varying autoregressions is described.
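
A hedged sketch of one of the two ideas, generating a realisation from the joint smoothing distribution by backward simulation through stored filter particles, is given below for a scalar AR(1) model; the model, its parameters and the bootstrap proposal are illustrative assumptions rather than the paper's setup, and the Viterbi-based MAP procedure is not shown.

# Backward simulation of one trajectory from the joint smoothing distribution,
# using stored particles/weights from a bootstrap filter.
import numpy as np

rng = np.random.default_rng(2)
a, q, r, T, N = 0.9, 0.3, 0.5, 100, 300

# Simulate data from x_t = a x_{t-1} + N(0,q), y_t = x_t + N(0,r).
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.normal()
y = x + np.sqrt(r) * rng.normal(size=T)

# Forward pass: store particles and normalised weights at every time step.
P = np.zeros((T, N)); W = np.zeros((T, N))
part = rng.normal(0, 1, N)
for t in range(T):
    if t > 0:
        part = a * part + np.sqrt(q) * rng.normal(size=N)
    logw = -0.5 * (y[t] - part) ** 2 / r
    w = np.exp(logw - logw.max()); w /= w.sum()
    P[t], W[t] = part, w
    part = part[rng.choice(N, size=N, p=w)]   # resample for the next step

# Backward pass: sample x_{T-1} from the filter, then x_t | x_{t+1} with weights
# proportional to W_t^i * f(x_{t+1} | x_t^i).
traj = np.zeros(T)
traj[T - 1] = P[T - 1, rng.choice(N, p=W[T - 1])]
for t in range(T - 2, -1, -1):
    bw = W[t] * np.exp(-0.5 * (traj[t + 1] - a * P[t]) ** 2 / q)
    bw /= bw.sum()
    traj[t] = P[t, rng.choice(N, p=bw)]

print("RMSE of one smoothed draw:", np.sqrt(np.mean((traj - x) ** 2)))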

Relevance:

80.00%

Publisher:

Abstract:

Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
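
The flavour of score-driven online estimation can be sketched as follows; note that this toy uses the simpler path-space score recursion (whose growing variance is exactly what the marginal algorithm of Poyiadjis et al. improves upon), a linear-Gaussian model rather than stochastic volatility, and an ad hoc step-size schedule, all of which are assumptions for illustration only.

# Recursive maximum-likelihood sketch driven by a naive path-space particle score.
import numpy as np

rng = np.random.default_rng(3)
a_true, q, r, T, N = 0.8, 0.2, 0.5, 2000, 500

# Simulate x_t = a x_{t-1} + N(0, q), y_t = x_t + N(0, r); estimate a online.
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + np.sqrt(q) * rng.normal()
y = x + np.sqrt(r) * rng.normal(size=T)

a_hat = 0.3                          # initial parameter guess
part = rng.normal(0.0, 1.0, N)
alpha = np.zeros(N)                  # per-particle cumulative score contributions
prev_score = 0.0
for t in range(1, T):
    prev = part
    part = a_hat * prev + np.sqrt(q) * rng.normal(size=N)
    # Fisher-identity increment: d/da log f(x_t | x_{t-1}; a).
    alpha = alpha + (part - a_hat * prev) * prev / q
    logw = -0.5 * (y[t] - part) ** 2 / r
    w = np.exp(logw - logw.max()); w /= w.sum()
    score = w @ alpha                # estimate of d/da log p(y_{1:t}; a)
    # Recursive ML: step in the direction of d/da log p(y_t | y_{1:t-1}; a).
    a_hat = np.clip(a_hat + 0.2 / (t + 1) ** 0.6 * (score - prev_score), -0.99, 0.99)
    prev_score = score
    idx = rng.choice(N, size=N, p=w)
    part, alpha = part[idx], alpha[idx]

print(f"true a = {a_true:.2f}, online estimate ≈ {a_hat:.2f}")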

Relevance:

80.00%

Publisher:

Abstract:

Sequential Monte Carlo (SMC) methods are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. We propose a new SMC algorithm to compute the expectation of additive functionals recursively. Essentially, it is an on-line or "forward only" implementation of a forward filtering backward smoothing SMC algorithm proposed by Doucet, Godsill and Andrieu (2000). Compared to the standard path-space SMC estimator whose asymptotic variance increases quadratically with time even under favorable mixing assumptions, the non-asymptotic variance of the proposed SMC estimator only increases linearly with time. We show how this allows us to perform recursive parameter estimation using an SMC implementation of an on-line version of the Expectation-Maximization algorithm which does not suffer from the particle path degeneracy problem.
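
A rough Python sketch of the forward-only idea for a scalar model is given below: each particle carries a statistic updated through an approximate backward kernel (an O(N²) step), and the smoothed additive functional is read off from the current weighted particles; the naive path-space estimate is computed alongside for comparison. The AR(1) model and the chosen functional (the smoothed sum of states) are illustrative assumptions.

# Forward-only computation of a smoothed additive functional S = sum_t x_t.
import numpy as np

rng = np.random.default_rng(4)
a, q, r, T, N = 0.9, 0.3, 0.5, 200, 200

x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.normal()
y = x + np.sqrt(r) * rng.normal(size=T)

part = rng.normal(0, 1, N)
logw = -0.5 * (y[0] - part) ** 2 / r
w = np.exp(logw - logw.max()); w /= w.sum()
T_stat = part.copy()            # forward-only per-particle statistics T_0^i = x_0^i
path_stat = part.copy()         # naive path-space running sums

for t in range(1, T):
    idx = rng.choice(N, size=N, p=w)
    prev, prev_T, prev_w = part[idx], T_stat[idx], np.ones(N) / N
    path_stat = path_stat[idx]
    part = a * prev + np.sqrt(q) * rng.normal(size=N)
    # Backward-kernel update: T_t^i = E[T_{t-1} + x_t | x_t^i] under the particle
    # approximation of p(x_{t-1} | y_{1:t-1}).
    trans = np.exp(-0.5 * (part[:, None] - a * prev[None, :]) ** 2 / q)
    bw = trans * prev_w[None, :]
    bw /= bw.sum(axis=1, keepdims=True)
    T_stat = bw @ prev_T + part
    path_stat = path_stat + part
    logw = -0.5 * (y[t] - part) ** 2 / r
    w = np.exp(logw - logw.max()); w /= w.sum()

print("forward-only estimate of E[sum_t x_t | y]:", w @ T_stat)
print("path-space estimate:                      ", w @ path_stat)
print("sum of observations (rough reference only):", y.sum())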

Relevance:

80.00%

Publisher:

Abstract:

We design a particle interpretation of Feynman-Kac measures on path spaces based on a backward Markovian representation combined with a traditional mean field particle interpretation of the flow of their final time marginals. In contrast to traditional genealogical tree-based models, these new particle algorithms can be used to compute normalized additive functionals "on-the-fly" as well as their limiting occupation measures to a given degree of precision that does not depend on the final time horizon. We provide uniform convergence results with respect to the time horizon parameter as well as functional central limit theorems and exponential concentration estimates. Our results have important consequences for online parameter estimation for non-linear non-Gaussian state-space models. We show how the forward filtering backward smoothing estimates of additive functionals can be computed using a forward-only recursion.
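
In generic smoothing notation (rather than the paper's Feynman-Kac notation), the backward Markovian decomposition and the induced forward-only recursion for an additive functional $S_n(x_{0:n}) = \sum_{k=1}^{n} s_k(x_{k-1}, x_k)$ can be summarised, roughly, as:

\[
  p(x_{0:n} \mid y_{0:n})
  = p(x_n \mid y_{0:n}) \prod_{k=0}^{n-1} p(x_k \mid x_{k+1}, y_{0:k}),
  \qquad
  p(x_k \mid x_{k+1}, y_{0:k}) \propto f(x_{k+1} \mid x_k)\, p(x_k \mid y_{0:k}),
\]
and, writing $T_n(x_n) = \mathbb{E}\big[\, S_n(X_{0:n}) \mid X_n = x_n,\, y_{0:n-1} \,\big]$,
\[
  T_n(x_n) = \int \big( T_{n-1}(x_{n-1}) + s_n(x_{n-1}, x_n) \big)\,
             p(x_{n-1} \mid x_n, y_{0:n-1})\, \mathrm{d}x_{n-1},
  \qquad
  \mathbb{E}\big[ S_n \mid y_{0:n} \big] = \int T_n(x_n)\, p(x_n \mid y_{0:n})\, \mathrm{d}x_n .
\]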

Relevance:

80.00%

Publisher:

Abstract:

In this paper we present a new, compact derivation of state-space formulae for the so-called discretisation-based solution of the H∞ sampled-data control problem. Our approach is based on the established technique of continuous time-lifting, which is used to isometrically map the continuous-time, linear, periodically time-varying, sampled-data problem to a discrete-time, linear, time-invariant problem. State-space formulae are derived for the equivalent, discrete-time problem by solving a set of two-point boundary-value problems. The formulae accommodate a direct feed-through term from the disturbance inputs to the controlled outputs of the original plant and are simple, requiring the computation of only a single matrix exponential. It is also shown that the resultant formulae can be easily re-structured to give a numerically robust algorithm for computing the state-space matrices. © 1997 Elsevier Science Ltd. All rights reserved.
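
The "single matrix exponential" device can be illustrated with the standard zero-order-hold discretisation trick, in which the discrete-time pair (Ad, Bd) is read off from one block-matrix exponential; the sketch below is not the paper's H∞ formulae, and the example plant and sampling period are assumed values.

# Zero-order-hold discretisation of dx/dt = A x + B u from a single block-matrix
# exponential. The paper's sampled-data formulae are richer than this, but rest
# on the same kind of computation.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])        # assumed continuous-time plant
B = np.array([[0.0],
              [1.0]])
h = 0.1                             # sampling period

n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B
E = expm(M * h)                     # one matrix exponential
Ad, Bd = E[:n, :n], E[:n, n:]       # x[k+1] = Ad x[k] + Bd u[k]
print("Ad =\n", Ad)
print("Bd =\n", Bd)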

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we derive an EM algorithm for nonlinear state-space models. We use it to estimate jointly the neural network weights, the model uncertainty and the noise in the data. In the E-step we apply a forward-backward Rauch-Tung-Striebel smoother to compute the network weights. For the M-step, we derive expressions to compute the model uncertainty and the measurement noise. We find that the method is intrinsically very powerful, simple and stable.
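
The E-step/M-step structure can be illustrated on a scalar linear-Gaussian random-walk model, where the Rauch-Tung-Striebel smoother is exact and the noise updates are closed-form; the sketch below is this linear toy only, not the paper's neural-network model, and the initial values are assumptions.

# EM for x_t = x_{t-1} + N(0,Q), y_t = x_t + N(0,R):
# E-step = Kalman filter + RTS smoother, M-step = closed-form updates of Q and R.
import numpy as np

rng = np.random.default_rng(5)
T, Q_true, R_true = 400, 0.1, 0.6
x = np.cumsum(np.sqrt(Q_true) * rng.normal(size=T))
y = x + np.sqrt(R_true) * rng.normal(size=T)

Q, R = 1.0, 1.0                                # initial guesses
for it in range(60):
    # E-step: Kalman filter ...
    mf = np.zeros(T); Pf = np.zeros(T)         # filtered means / variances
    mp = np.zeros(T); Pp = np.zeros(T)         # one-step predictions
    for t in range(T):
        if t == 0:
            mp[t], Pp[t] = 0.0, 10.0           # broad prior on x_0
        else:
            mp[t], Pp[t] = mf[t - 1], Pf[t - 1] + Q
        K = Pp[t] / (Pp[t] + R)
        mf[t] = mp[t] + K * (y[t] - mp[t])
        Pf[t] = (1.0 - K) * Pp[t]
    # ... followed by the Rauch-Tung-Striebel smoother.
    ms = mf.copy(); Ps = Pf.copy()
    J = np.zeros(T)
    for t in range(T - 2, -1, -1):
        J[t] = Pf[t] / Pp[t + 1]
        ms[t] = mf[t] + J[t] * (ms[t + 1] - mp[t + 1])
        Ps[t] = Pf[t] + J[t] ** 2 * (Ps[t + 1] - Pp[t + 1])
    C = J[:-1] * Ps[1:]                        # lag-one covariances Cov(x_t, x_{t-1} | y)
    # M-step: closed-form noise updates.
    R = np.mean((y - ms) ** 2 + Ps)
    Q = np.mean((ms[1:] - ms[:-1]) ** 2 + Ps[1:] + Ps[:-1] - 2.0 * C)

print(f"estimated Q = {Q:.3f} (true {Q_true}), R = {R:.3f} (true {R_true})")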

Relevance:

80.00%

Publisher:

Abstract:

Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics, however, the representations can be engaged based on visual context, and are updated by a single-rate process.
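
A minimal sketch of such a multiple-context, single-rate learner is given below: one adaptation state per object orientation, updated by trial error weighted by a Gaussian generalization function over orientation. The retention and learning rates, the tuning width and the trial schedule are illustrative assumptions, not the values fitted in the study; the washout phase loosely illustrates the predicted slow de-adaptation of untrained contexts.

# Single-rate, multiple-context state-space learner with orientation-tuned
# generalization. All constants are illustrative assumptions.
import numpy as np

contexts = np.array([-60.0, -30.0, 0.0, 30.0, 60.0])   # object orientations (deg)
A, B, width = 0.99, 0.3, 20.0        # retention, learning rate, tuning width (deg)

def generalization(theta):
    """Gaussian tuning of the error-driven update around the experienced orientation."""
    return np.exp(-0.5 * ((contexts - theta) / width) ** 2)

def trial(states, theta, perturbation):
    """One trial at orientation theta: readout, error, single-rate update."""
    g = generalization(theta)
    output = states @ (g / g.sum())            # one of several possible readout choices
    error = perturbation - output
    return A * states + B * error * g, error

states = np.zeros(len(contexts))
for _ in range(60):                            # adapt at 0 deg only
    states, err = trial(states, 0.0, perturbation=1.0)
print("per-orientation states after training at 0 deg:", np.round(states, 2))
for _ in range(30):                            # washout trials at +60 deg
    states, err = trial(states, 60.0, perturbation=0.0)
print("states after washout at +60 deg:               ", np.round(states, 2))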

Relevance:

80.00%

Publisher:

Abstract:

Motor task variation has been shown to be a key ingredient in skill transfer, retention, and structural learning. However, many studies only compare training of randomly varying tasks to either blocked or null training, and it is not clear how experiencing different nonrandom temporal orderings of tasks might affect the learning process. Here we study learning in human subjects who experience the same set of visuomotor rotations, evenly spaced between -60° and +60°, either in a random order or in an order in which the rotation angle changed gradually. We compared subsequent learning of three test blocks of +30°→-30°→+30° rotations. The groups that underwent either random or gradual training showed significant (P < 0.01) facilitation of learning in the test blocks compared with a control group who had not experienced any visuomotor rotations before. We also found that movement initiation times in the random group during the test blocks were significantly (P < 0.05) lower than for the gradual or the control group. When we fit a state-space model with fast and slow learning processes to our data, we found that the differences in performance in the test block were consistent with the gradual or random task variation changing the learning and retention rates of only the fast learning process. Such adaptation of learning rates may be a key feature of ongoing meta-learning processes. Our results therefore suggest that both gradual and random task variation can induce meta-learning and that random learning has an advantage in terms of shorter initiation times, suggesting less reliance on cognitive processes.
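
The referenced two-state (fast/slow) learner takes the standard trial-by-trial form sketched below; the rate constants, the block lengths of the +30/-30/+30 test sequence and the way facilitation is modelled (as an increased fast learning rate) are illustrative assumptions rather than the study's fitted values.

# Standard two-state (fast/slow) trial-by-trial learner, output = x_fast + x_slow.
import numpy as np

Af, Bf = 0.60, 0.20        # fast process: forgets quickly, learns quickly
As, Bs = 0.995, 0.05       # slow process: retains well, learns slowly

def run(schedule, Bf=Bf, Bs=Bs):
    xf = xs = 0.0
    out = []
    for target in schedule:
        error = target - (xf + xs)      # rotation still to be compensated
        xf = Af * xf + Bf * error
        xs = As * xs + Bs * error
        out.append(xf + xs)
    return np.array(out)

# The +30 / -30 / +30 test sequence (block lengths assumed here).
schedule = np.concatenate([np.full(40, 30.0), np.full(40, -30.0), np.full(40, 30.0)])
naive = run(schedule)
pretrained = run(schedule, Bf=0.45)     # facilitation modelled as a faster fast process
print("adaptation at end of first block: naive %.1f, pre-trained %.1f"
      % (naive[39], pretrained[39]))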

Relevance:

80.00%

Publisher:

Abstract:

Several authors have proposed algorithms for approximate explicit MPC [1],[2],[3]. These algorithms have in common that they develop a stability criterion for approximate explicit MPC that requires the approximate cost function to be within a certain distance from the optimal cost function. In this paper, stability is instead ascertained by considering only the cost function of the approximate MPC. If a region of the state space is found where the cost function is not decreasing, this indicates that an improved approximation (to the optimal control) is required in that region. If the approximate cost function is decreasing everywhere, no further refinement of the approximate MPC is necessary, since stability is guaranteed. ©2009 IEEE.
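
The flavour of the test can be shown with a small sampling-based sketch: states where the approximate cost fails to decrease along the closed loop are flagged as needing a refined approximation. The double-integrator plant, the quadratic cost surrogate and the saturated linear control law are assumed stand-ins, and the paper performs the check region-by-region over the explicit-MPC partition rather than by sampling.

# Flag sampled states where the approximate cost is not decreasing in closed loop.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # discrete double integrator (assumed)
B = np.array([[0.005], [0.1]])
P = np.array([[3.0, 1.0], [1.0, 2.0]])     # approximate cost V(x) = x' P x (assumed)
K = np.array([[1.2, 1.5]])                 # approximate explicit-MPC-like feedback
u_max = 1.0

def V(x):
    return float(x @ P @ x)

def closed_loop(x):
    u = np.clip(-K @ x, -u_max, u_max)     # saturated approximate control law
    return A @ x + (B @ u).ravel()

rng = np.random.default_rng(6)
samples = rng.uniform(-2.0, 2.0, size=(2000, 2))
not_decreasing = [x for x in samples if V(x) > 1e-9 and V(closed_loop(x)) >= V(x)]
print(f"{len(not_decreasing)} of {len(samples)} sampled states would need a "
      f"refined approximation (cost not decreasing there).")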

Relevance:

80.00%

Publisher:

Abstract:

Humans skillfully manipulate objects and tools despite the inherent instability. In order to succeed at these tasks, the sensorimotor control system must build an internal representation of both the force and mechanical impedance. As it is not practical to either learn or store motor commands for every possible future action, the sensorimotor control system generalizes a control strategy for a range of movements based on learning performed over a set of movements. Here, we introduce a computational model for this learning and generalization, which specifies how to learn feedforward muscle activity as a function of the state space. Specifically, by incorporating co-activation as a function of error into the feedback command, we are able to derive an algorithm from a gradient descent minimization of motion error and effort, subject to maintaining a stability margin. This algorithm can be used to learn to coordinate any of a variety of motor primitives such as force fields, muscle synergies, physical models or artificial neural networks. This model for human learning and generalization is able to adapt to both stable and unstable dynamics, and provides a controller for generating efficient adaptive motor behavior in robots. Simulation results exhibit predictions consistent with all experiments on learning of novel dynamics requiring adaptation of force and impedance, and enable us to re-examine some of the previous interpretations of experiments on generalization. © 2012 Kadiallah et al.
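
A very reduced sketch of the idea is given below: a feedforward force profile is corrected by the signed trial error, while a co-activation (stiffness) term grows with the error magnitude and otherwise decays, reflecting the error-versus-effort trade-off. The 1-D unstable load and every gain are invented for illustration and do not reproduce the model of Kadiallah et al.

# Trial-by-trial learning of feedforward force plus impedance on a toy unstable load.
import numpy as np

rng = np.random.default_rng(7)
trials, steps, dt = 30, 100, 0.01
k_unstable, f_bias = 30.0, 3.0       # destabilising spring plus constant bias force
u_ff = np.zeros(steps)               # learned feedforward force profile
k_co = 20.0                          # learned co-activation (stiffness)

for trial in range(trials):
    pos, vel, err = 0.0, 0.0, np.zeros(steps)
    for t in range(steps):
        load = k_unstable * pos + f_bias + 0.3 * rng.normal()
        acc = u_ff[t] - k_co * pos - (2.0 + 0.2 * k_co) * vel + load   # unit mass
        vel += dt * acc
        pos += dt * vel
        err[t] = pos                 # desired position is 0 throughout
    # Error-driven updates: feedforward follows the signed error, co-activation
    # grows with error magnitude and otherwise decays (the effort cost).
    u_ff -= 0.5 * err
    k_co = max(0.0, k_co + 30.0 * np.mean(np.abs(err)) - 0.05 * k_co)
    if trial % 5 == 0:
        print(f"trial {trial:2d}: RMS error {np.sqrt(np.mean(err ** 2)):.3f}, "
              f"stiffness {k_co:5.1f}")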

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we present an expectation-maximisation (EM) algorithm for maximum likelihood estimation in multiple target tracking (MTT) models with Gaussian linear state-space dynamics. We show that estimation of sufficient statistics for EM in a single Gaussian linear state-space model can be extended to the MTT case along with a Monte Carlo approximation for inference of unknown associations of targets. The stochastic approximation EM algorithm that we present here can be used along with any Monte Carlo method which has been developed for tracking in MTT models, such as Markov chain Monte Carlo and sequential Monte Carlo methods. We demonstrate the performance of the algorithm with a simulation. © 2012 ISIF (International Society of Information Fusion).
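
The stochastic-approximation ingredient can be illustrated on a much simpler latent-variable problem; in the toy below a two-component Gaussian mixture label stands in for the unknown target-measurement associations, the labels are simulated in the E-step, and the complete-data sufficient statistics are averaged with a Robbins-Monro schedule before a closed-form M-step. The data, schedule and model are assumptions and bear no relation to the MTT implementation.

# Toy stochastic-approximation EM with simulated latent labels.
import numpy as np

rng = np.random.default_rng(8)
n = 500
true_mu = np.array([-2.0, 2.0])
z_true = rng.integers(0, 2, n)
y = true_mu[z_true] + rng.normal(size=n)

mu = np.array([-0.5, 0.5])                     # initial component means
S_count = np.array([0.5 * n, 0.5 * n])         # running sufficient statistics
S_sum = np.zeros(2)
for k in range(1, 200):
    # Simulated E-step: draw the latent labels from their current posterior.
    logp = -0.5 * (y[:, None] - mu[None, :]) ** 2
    p1 = 1.0 / (1.0 + np.exp(logp[:, 0] - logp[:, 1]))   # P(z = 1 | y, mu)
    z = (rng.uniform(size=n) < p1).astype(int)
    s_count = np.array([(z == 0).sum(), (z == 1).sum()])
    s_sum = np.array([y[z == 0].sum(), y[z == 1].sum()])
    # Stochastic-approximation averaging, then the closed-form M-step.
    gamma = 1.0 / k ** 0.7
    S_count = S_count + gamma * (s_count - S_count)
    S_sum = S_sum + gamma * (s_sum - S_sum)
    mu = S_sum / np.maximum(S_count, 1e-9)

print("estimated component means:", np.round(mu, 2), "(true:", true_mu, ")")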

Relevance:

80.00%

Publisher:

Abstract:

Numerically well-conditioned state-space realisations for all-pass systems, such as Padé approximations to exp(-s), are derived that can be computed using exact integer arithmetic. This is then applied to a series of functions of exp(-s). It is also shown that the H-infinity norm of the transfer function from the input to the state of a balanced realisation of the Padé approximation of exp(-s) is unity. © 2012 IEEE.
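
The exact-arithmetic part can be illustrated as below: the diagonal [m/m] Padé coefficients of exp(-s) are computed as exact rational numbers, checked for the all-pass property, and placed in a naive companion-form realisation, which is precisely the kind of realisation that typically becomes ill-conditioned as the order grows and that the paper's construction is designed to avoid. The order m = 3 and the companion structure are illustrative choices.

# Exact-rational Pade coefficients of exp(-s), an all-pass check, and a naive
# controllable-canonical realisation (not the paper's well-conditioned one).
from fractions import Fraction
from math import factorial
import numpy as np

def pade_exp_neg_s(m):
    """[m/m] Pade approximant of exp(-s): numerator/denominator coefficient lists
    in ascending powers of s, as exact Fractions."""
    c = [Fraction(factorial(2 * m - j) * factorial(m),
                  factorial(2 * m) * factorial(j) * factorial(m - j))
         for j in range(m + 1)]
    num = [ck * (-1) ** k for k, ck in enumerate(c)]
    return num, c

m = 3
num, den = pade_exp_neg_s(m)
print("numerator coefficients:  ", [str(c) for c in num])
print("denominator coefficients:", [str(c) for c in den])

# All-pass sanity check: |H(jw)| should equal 1 (checked here at w = 1).
H = (sum(float(c) * 1j ** k for k, c in enumerate(num))
     / sum(float(c) * 1j ** k for k, c in enumerate(den)))
print("|H(j)| =", abs(H))

# Naive companion-form realisation with a monic denominator.
a = np.array([float(c / den[m]) for c in den[:m]])    # a_0 .. a_{m-1}
b = np.array([float(c / den[m]) for c in num])        # b_0 .. b_m
A = np.zeros((m, m)); A[:-1, 1:] = np.eye(m - 1); A[-1, :] = -a
B = np.zeros((m, 1)); B[-1, 0] = 1.0
D = b[m]
C = (b[:m] - D * a).reshape(1, m)
print("companion-form A:\n", A)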

Relevance:

80.00%

Publisher:

Abstract:

Gaussian processes are gaining increasing popularity among the control community, in particular for the modelling of discrete-time state-space systems. However, it has not been clear how to incorporate model information, in the form of known state relationships, when using a Gaussian process as a predictive model. An obvious example of known prior information is position- and velocity-related states. Incorporation of such information would be beneficial both computationally and for faster dynamics learning. This paper introduces a method of achieving this, yielding faster dynamics learning and a reduction in computational effort from O(Dn²) to O((D - F)n²) in the prediction stage for a system with D states, F known state relationships and n observations. The effectiveness of the method is demonstrated through its inclusion in the PILCO learning algorithm with application to the swing-up and balance of a torque-limited pendulum and the balancing of a robotic unicycle in simulation. © 2012 IEEE.
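
The idea can be sketched for a two-state toy system in which position obeys the known relation p' = p + dt·v exactly, so a Gaussian process is trained only on the velocity transition (one GP instead of two, mirroring the claimed reduction from D to D - F learned dimensions). The toy dynamics, squared-exponential kernel and fixed hyperparameters below are assumptions, and this is not the PILCO implementation.

# GP dynamics model that learns only the velocity transition; position is
# propagated through the known kinematic relation.
import numpy as np

rng = np.random.default_rng(9)
dt = 0.1

def true_step(state, u):
    """Toy dynamics, unknown to the learner: damped, driven point mass."""
    p, v = state
    v_next = v + dt * (-0.5 * v - 2.0 * np.sin(p) + u)
    return np.array([p + dt * v, v_next])

# Collect training transitions (p, v, u) -> v_next.
X = rng.uniform(-2, 2, size=(80, 3))
Y = np.array([true_step(x[:2], x[2])[1] for x in X])

def sq_exp(A, B, ell=1.0, sf=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) / ell) ** 2
    return sf ** 2 * np.exp(-0.5 * d2.sum(-1))

sn = 1e-3                                     # assumed observation noise level
K = sq_exp(X, X) + sn ** 2 * np.eye(len(X))
alpha = np.linalg.solve(K, Y)

def predict_step(state, u):
    """Position via the known relation, velocity via the GP posterior mean."""
    p, v = state
    k_star = sq_exp(np.array([[p, v, u]]), X)
    v_next = float(k_star @ alpha)
    return np.array([p + dt * v, v_next])     # no GP needed for the position

s = np.array([0.5, 0.0])
for _ in range(5):
    print("model:", np.round(predict_step(s, 0.0), 3),
          " true:", np.round(true_step(s, 0.0), 3))
    s = true_step(s, 0.0)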