1000 results for Heading Estimation


Abstract:

Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear) that directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data) that only partially reveal a failure mechanism. While direct indicators give more precise information about asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are inconsistent with the nonlinear and irreversible degradation processes of most engineering assets. This paper proposes a state space model free of these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life, and their performance is evaluated using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated gearbox test. In this application, the new state space model fits the data better than a state space model with linear and Gaussian assumptions.
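
The abstract does not give the algorithms themselves, but a bootstrap particle filter is a standard Monte Carlo approach to state estimation in a continuous-state, nonlinear, non-Gaussian state space model of this kind. The sketch below is illustrative only; the transition function, measurement likelihood and noise levels are assumptions, not the paper's model.

```python
# Minimal bootstrap particle filter sketch for a nonlinear, non-Gaussian state space
# model: a hidden degradation state x_t (e.g. crack depth) observed through an indirect
# indicator y_t (e.g. a vibration feature). The functions and noise levels below are
# illustrative assumptions, not the model proposed in the paper.
import numpy as np

def particle_filter(y, f, likelihood, x0_sampler, n_particles=1000, seed=0):
    rng = np.random.default_rng(seed)
    particles = x0_sampler(rng, n_particles)          # initial degradation states
    estimates = []
    for y_t in y:
        particles = f(particles, rng)                 # propagate through the degradation model
        w = likelihood(y_t, particles)                # weight by the measurement model
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]                    # resample (bootstrap filter)
        estimates.append(particles.mean())            # posterior mean of the direct indicator
    return np.array(estimates)

# Purely illustrative choices: monotone degradation with gamma-distributed increments,
# and an indirect indicator that is a noisy nonlinear function of the true state.
f = lambda x, rng: x + rng.gamma(shape=2.0, scale=0.05, size=x.shape)
likelihood = lambda y, x: np.exp(-0.5 * ((y - np.sqrt(x)) / 0.1) ** 2)
x0_sampler = lambda rng, n: rng.uniform(0.0, 0.1, size=n)
```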

Abstract:

Many traffic situations require drivers to cross or merge into a higher-priority traffic stream. Gap acceptance theory enables such processes to be modelled in order to analyse traffic operation. This discussion demonstrated that a numerical search, fine-tuned by statistical analysis, can be used to determine the most likely critical gap for a sample of drivers, based on each driver's largest rejected gap and accepted gap. The method shares some features with the Maximum Likelihood Estimation technique (Troutbeck 1992) but lends itself well to contemporary analysis tools such as spreadsheets and is particularly transparent analytically. It is considered not to bias the critical gap estimate as a result of very small or very large rejected gaps. However, it requires a sample large enough to give reasonable representation of largest rejected gap/accepted gap pairs within a fairly narrow highest-likelihood search band.
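
As a rough illustration of this kind of numerical search (not the authors' exact spreadsheet procedure, but in the spirit of the Maximum Likelihood formulation it is compared with), the sketch below grid-searches candidate critical-gap distribution parameters, scoring each by the probability that every driver's critical gap lies between their largest rejected gap and their accepted gap, assuming a log-normal critical gap distribution.

```python
# Illustrative grid search for the most likely critical gap from (largest rejected gap,
# accepted gap) pairs. Assumes a log-normal critical gap distribution; this is a sketch,
# not the authors' exact procedure.
import numpy as np
from scipy.stats import lognorm

def critical_gap_search(rejected, accepted, mu_grid, sigma_grid):
    """rejected[i], accepted[i]: largest rejected and accepted gap (s) for driver i."""
    rejected, accepted = np.asarray(rejected), np.asarray(accepted)
    best = (None, -np.inf)
    for mu in mu_grid:              # mean of log(critical gap)
        for sigma in sigma_grid:    # standard deviation of log(critical gap)
            dist = lognorm(s=sigma, scale=np.exp(mu))
            # Likelihood that each driver's critical gap lies between their largest
            # rejected gap and their accepted gap.
            loglik = np.log(np.clip(dist.cdf(accepted) - dist.cdf(rejected), 1e-12, None)).sum()
            if loglik > best[1]:
                best = ((np.exp(mu + 0.5 * sigma**2), mu, sigma), loglik)
    return best  # ((mean critical gap, mu, sigma), log-likelihood)

# Example with hypothetical gap data (seconds):
# critical_gap_search([3.1, 2.8, 4.0], [4.5, 3.9, 5.2],
#                     mu_grid=np.linspace(0.8, 1.8, 51), sigma_grid=np.linspace(0.05, 0.6, 23))
```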

Abstract:

Markov chain Monte Carlo (MCMC) estimation provides a solution to the complex integration problems that arise in the Bayesian analysis of statistical problems. The implementation of MCMC algorithms is, however, code intensive and time consuming. We have developed a Python package called PyMCMC that aids in the construction of MCMC samplers, substantially reduces the likelihood of coding errors, and minimises repetitive code. PyMCMC contains classes for Gibbs, Metropolis-Hastings, independent Metropolis-Hastings, random walk Metropolis-Hastings, orientational bias Monte Carlo and slice samplers, as well as specific modules for common models, such as a module for Bayesian regression analysis. PyMCMC is straightforward to optimise, taking advantage of the Python libraries NumPy and SciPy, and is readily extensible with C or Fortran.
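
To avoid guessing at PyMCMC's actual API, the sketch below shows a generic random walk Metropolis-Hastings sampler of the kind such packages wrap, written directly against NumPy; the function names, proposal scale and example target are illustrative assumptions.

```python
# Generic random walk Metropolis-Hastings sampler (illustrative; not PyMCMC's API).
import numpy as np

def rw_metropolis_hastings(log_post, theta0, n_samples=5000, step=0.5, seed=0):
    """Sample from a target whose log-posterior is `log_post`, using Gaussian proposals."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    current_lp = log_post(theta)
    samples = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)   # random walk proposal
        proposal_lp = log_post(proposal)
        # Accept with probability min(1, posterior ratio); work in logs for stability.
        if np.log(rng.random()) < proposal_lp - current_lp:
            theta, current_lp = proposal, proposal_lp
        samples[i] = theta
    return samples

# Example: posterior of a normal mean with a flat prior and unit variance, given data y.
y = np.array([1.2, 0.7, 1.9, 1.1])
draws = rw_metropolis_hastings(lambda m: -0.5 * np.sum((y - m) ** 2), theta0=[0.0])
```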

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
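
For concreteness, with the training sample of size $n$ split into two halves, a function class $\mathcal{F}$ and loss $\ell$, the maximal discrepancy referred to above can be written (in notation assumed here, not taken from the paper) as

\[
\hat{D}_n(\mathcal{F}) \;=\; \max_{f \in \mathcal{F}} \left( \frac{2}{n} \sum_{i=1}^{n/2} \ell\big(f(x_i), y_i\big) \;-\; \frac{2}{n} \sum_{i=n/2+1}^{n} \ell\big(f(x_i), y_i\big) \right).
\]

For the 0-1 loss, maximising this difference is the same as minimising the empirical risk over the training data with the labels of the first half flipped, which is the computational equivalence noted above.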

Abstract:

Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0,1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward.
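
The core iteration described above (an eligibility trace plus a running gradient estimate, so only twice the number of policy parameters is stored) can be sketched as follows; the environment and policy interfaces, the value of β and the horizon are illustrative assumptions rather than details from the paper.

```python
# Sketch of a GPOMDP-style estimator of the average-reward gradient: an eligibility
# trace z discounted by beta, plus a running average delta of reward-weighted traces.
# The environment and policy interfaces used here are assumptions for illustration.
import numpy as np

def gpomdp_gradient(env, policy, theta, beta=0.9, T=100_000, seed=0):
    """env.reset() -> obs; env.step(action) -> (obs, reward);
    policy.sample(theta, obs, rng) -> action;
    policy.grad_log(theta, obs, action) -> gradient of log Pr(action | theta, obs)."""
    rng = np.random.default_rng(seed)
    z = np.zeros_like(theta)        # eligibility trace
    delta = np.zeros_like(theta)    # running (biased) gradient estimate
    obs = env.reset()
    for t in range(T):
        action = policy.sample(theta, obs, rng)
        next_obs, reward = env.step(action)
        z = beta * z + policy.grad_log(theta, obs, action)   # discounted eligibility trace
        delta += (reward * z - delta) / (t + 1)              # running average of reward * z
        obs = next_obs
    return delta                    # estimate of the gradient of the average reward
```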

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
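
In symbols (notation assumed here), with nested models $F_1 \subseteq F_2 \subseteq \dots$, empirical risk $\hat{R}_n$, true risk $R$, empirical risk minimisers $\hat{f}_k \in F_k$ and penalties $\mathrm{pen}_n(k)$, the procedure selects

\[
\hat{f} = \hat{f}_{\hat{k}}, \qquad \hat{k} = \arg\min_{k} \Big( \hat{R}_n(\hat{f}_k) + \mathrm{pen}_n(k) \Big),
\]

and the oracle inequality described above is roughly of the form

\[
R(\hat{f}) \;\le\; \min_{k} \Big( \inf_{f \in F_k} R(f) + c \, \mathrm{pen}_n(k) \Big)
\]

for a constant $c$: the risk of the chosen function is at most the approximation error plus a constant times the complexity penalty.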

Abstract:

We present a technique for estimating the 6DOF pose of a PTZ camera by tracking a single moving target with a known 3D position in the image. This is useful in situations where it is not practical to measure the camera pose directly. Our application domain is estimating the pose of a PTZ camera so that it can be used for automated GPS-based tracking and filming of UAV flight trials. We present results showing that the technique is able to localise a PTZ camera after a short vision-tracked flight, and that the estimated pose is sufficiently accurate for the PTZ camera to then actively track a UAV based on GPS position data.
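
The abstract does not detail the estimator, but the underlying building block, recovering a camera pose from accumulated 2D image observations of a target with known 3D (e.g. GPS-derived) positions, can be sketched with a standard PnP solve. Treating the pan-tilt state as fixed during collection is a simplifying assumption made only for this illustration, not the paper's method.

```python
# Illustrative pose-from-correspondences sketch: accumulate (3D target position, 2D image
# observation) pairs while tracking the moving target, then solve a PnP problem for the
# camera pose. Assumes known intrinsics and a fixed pan-tilt state during collection.
import numpy as np
import cv2

def estimate_camera_pose(world_points, image_points, camera_matrix, dist_coeffs=None):
    """world_points: Nx3 GPS-derived target positions; image_points: Nx2 tracked pixels."""
    world_points = np.asarray(world_points, dtype=np.float64)
    image_points = np.asarray(image_points, dtype=np.float64)
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(world_points, image_points, camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP solve failed")
    R, _ = cv2.Rodrigues(rvec)                 # rotation from world to camera frame
    camera_position = (-R.T @ tvec).ravel()    # camera centre in world coordinates
    return R, camera_position
```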

Abstract:

In the ocean science community, researchers have begun employing novel sensor platforms as integral pieces of oceanographic data collection; these platforms have significantly advanced the study and prediction of complex and dynamic ocean phenomena, providing scientists with data at unprecedented spatiotemporal resolutions. This paper focuses on the newly developed Wave Glider platform from Liquid Robotics, a vehicle that produces forward motion by harvesting energy from ocean waves and so provides a persistent presence for detailed ocean observation. This study aims to determine a kinematic model for offline planning that accurately estimates the vehicle's speed for a desired heading and a given set of environmental conditions. Given the significant wave height, ocean surface and subsurface currents, and wind speed and direction, we formulate a system identification that predicts the vehicle's speed over the range of possible headings.
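
One simple way to realise such a speed model (not necessarily the authors' formulation) is a least-squares fit from environmental inputs and commanded heading to observed speed over ground; the feature set below is an assumption made only for illustration.

```python
# Illustrative least-squares speed model for a wave-propelled vehicle: predict speed over
# ground from significant wave height, current, wind and heading. The feature choices are
# assumptions for illustration, not the paper's system identification.
import numpy as np

def fit_speed_model(Hs, current_u, current_v, wind_speed, wind_dir, heading, speed):
    """All arguments are 1-D arrays of synchronised field measurements (angles in radians)."""
    X = np.column_stack([
        np.ones_like(Hs),                                            # bias
        Hs,                                                          # significant wave height
        current_u * np.cos(heading) + current_v * np.sin(heading),   # along-track current
        wind_speed * np.cos(wind_dir - heading),                     # along-track wind
    ])
    coef, *_ = np.linalg.lstsq(X, speed, rcond=None)
    return coef

def predict_speed(coef, Hs, current_u, current_v, wind_speed, wind_dir, headings):
    """Evaluate the fitted model over a range of candidate headings for offline planning."""
    X = np.column_stack([
        np.ones_like(headings),
        np.full_like(headings, Hs),
        current_u * np.cos(headings) + current_v * np.sin(headings),
        wind_speed * np.cos(wind_dir - headings),
    ])
    return X @ coef
```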

Abstract:

Estimates of the half-life to convergence of prices across a panel of cities are subject to bias from three potential sources: inappropriate cross-sectional aggregation of heterogeneous coefficients, presence of lagged dependent variables in a model with individual fixed effects, and time aggregation of commodity prices. This paper finds no evidence of heterogeneity bias in annual CPI data for 17 U.S. cities from 1918 to 2006, but correcting for the “Nickell bias” and time aggregation bias produces a half-life of 7.5 years, shorter than estimates from previous studies.
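
For reference (a standard convention rather than anything specific to this paper), when price level deviations follow an AR(1) process with annual autoregressive coefficient $\rho$, the reported half-life is

\[
h \;=\; \frac{\ln(0.5)}{\ln(\rho)},
\]

so a half-life of 7.5 years corresponds to $\rho \approx 0.5^{1/7.5} \approx 0.91$ at an annual frequency.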

Abstract:

Inverse problems, in which experimental data are used to estimate the unknown parameters of a system, often arise in biological and chaotic systems. In this paper, we consider parameter estimation in systems biology for linear and non-linear complex dynamical models, including the Michaelis–Menten enzyme kinetic system, a dynamical model of competence induction in Bacillus subtilis bacteria and a model of feedback bypass in B. subtilis bacteria. We propose several novel techniques for such inverse problems. First, we establish an approximation of the non-linear differential algebraic equations that correspond to the given biological systems. Second, we use the Picard contraction mapping, collage methods and numerical integration techniques to convert parameter estimation into a minimization problem over the parameters. We propose two optimization techniques for non-linear parameter estimation: a grid approximation method and a modified hybrid Nelder–Mead simplex search and particle swarm optimization (MH-NMSS-PSO). Both techniques are applied to parameter estimation in a model of competence induction in B. subtilis bacteria with noisy data, and the MH-NMSS-PSO scheme is further applied to a dynamical model of competence induction in B. subtilis based on experimental data and to the feedback bypass model. Numerical results demonstrate the effectiveness of our approach.
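
As a hedged illustration of converting parameter estimation into a minimization problem, the sketch below fits the two Michaelis–Menten rate constants by minimising the squared error between simulated and noisy observed substrate concentrations with a Nelder–Mead search; the objective, initial guess and data are assumptions, not the paper's collage-based formulation.

```python
# Illustrative fit of Michaelis-Menten parameters (Vmax, Km) by simulating the substrate
# ODE and minimising squared error against noisy observations with Nelder-Mead.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def simulate_substrate(vmax, km, s0, t_obs):
    """dS/dt = -Vmax * S / (Km + S), solved at the observation times."""
    sol = solve_ivp(lambda t, s: -vmax * s / (km + s), (t_obs[0], t_obs[-1]),
                    [s0], t_eval=t_obs)
    return sol.y[0]

def fit_michaelis_menten(t_obs, s_obs, s0, theta0=(1.0, 1.0)):
    def objective(theta):
        vmax, km = np.abs(theta)        # keep parameters positive
        return np.sum((simulate_substrate(vmax, km, s0, t_obs) - s_obs) ** 2)
    result = minimize(objective, theta0, method="Nelder-Mead")
    return np.abs(result.x), result.fun

# Synthetic example: generate noisy data from known parameters and recover them.
t = np.linspace(0.0, 10.0, 25)
s_true = simulate_substrate(vmax=2.0, km=5.0, s0=10.0, t_obs=t)
rng = np.random.default_rng(1)
theta_hat, sse = fit_michaelis_menten(t, s_true + 0.05 * rng.standard_normal(t.size), s0=10.0)
```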

Abstract:

The action potential (AP) of a cardiac cell arises from a complex balance of ionic currents that flow across the cell membrane in response to electrical excitation of the cell. Biophysically detailed mathematical models of the AP have grown larger in terms of the variables and parameters required to capture new findings about subcellular ionic mechanisms, and the fitting of parameters to such models has involved a large degree of parameter and module re-use from earlier models. An alternative approach to modelling electrically excitable cardiac tissue is a phenomenological model, which reconstructs tissue-level AP wave behaviour without subcellular detail. A new parameter estimation technique for fitting the morphology of the AP in a four-variable phenomenological model is presented. An approximation of a nonlinear ordinary differential equation model is established that corresponds to the given phenomenological model of the cardiac AP, and the parameter estimation problem is converted into a minimisation problem for the unknown parameters. A modified hybrid Nelder–Mead simplex search and particle swarm optimization is then used to solve the minimisation problem. The successful fitting of data generated from a well-known biophysically detailed model is demonstrated, and a successful fit to an experimental AP recording that contains both noise and experimental artefacts is also produced. The parameter estimation method's ability to fit a complex morphology using a model with substantially more parameters than previously used is thereby established.
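
A hedged sketch of the kind of hybrid optimiser named above follows: a particle swarm search whose best point is periodically refined with a Nelder–Mead simplex step. The swarm settings, refinement schedule and interfaces are assumptions, not the modified MH-NMSS-PSO scheme used in the paper.

```python
# Illustrative hybrid PSO / Nelder-Mead parameter search for a generic objective
# `loss(theta)`; this is a sketch, not the authors' exact MH-NMSS-PSO scheme.
import numpy as np
from scipy.optimize import minimize

def hybrid_pso_nm(loss, bounds, n_particles=20, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T          # bounds: list of (low, high) pairs
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))    # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest = x.copy()
    pbest_val = np.array([loss(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    gbest_val = pbest_val.min()

    for it in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([loss(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if pbest_val.min() < gbest_val:
            gbest, gbest_val = pbest[pbest_val.argmin()].copy(), pbest_val.min()
        # Periodically refine the swarm's best point with a Nelder-Mead simplex search.
        if (it + 1) % 10 == 0:
            res = minimize(loss, gbest, method="Nelder-Mead")
            if res.fun < gbest_val:
                gbest, gbest_val = res.x, res.fun
    return gbest, gbest_val
```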