195 results for Statistical distributions
Abstract:
Condition-based maintenance is concerned with the collection and interpretation of data to support maintenance decisions. The non-intrusive nature of vibration data enables the monitoring of enclosed systems such as gearboxes. It remains a significant challenge to analyze vibration data generated under fluctuating operating conditions, especially when relatively little prior knowledge of the specific gearbox is available. It is therefore investigated how an adaptive time-series model, based on Bayesian model selection, may be used to remove the non-fault-related components in the structural response of a gear assembly, yielding a residual signal that is robust to fluctuating operating conditions. A statistical framework is subsequently proposed for interpreting the structure of the residual signal, in order to facilitate an intuitive understanding of the condition of the gear system. The proposed methodology is investigated on both simulated and experimental data from a single-stage gearbox. © 2011 Elsevier Ltd. All rights reserved.
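The residual-signal idea from this abstract can be sketched with a plain fixed-order autoregressive model: the AR fit absorbs the predictable (non-fault) structure of the vibration signal, and what remains is the residual. This is a minimal illustration, not the paper's adaptive Bayesian model selection; the AR order, simulated signal, and function name below are assumptions for the example.

```python
import numpy as np

def ar_residual(x, order=10):
    """Fit a fixed-order autoregressive model by least squares and
    return the one-step-ahead prediction residual."""
    # Lagged design matrix: column k holds the signal delayed by (k + 1) samples.
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coeffs

# Simulated "healthy" vibration: a deterministic mesh component plus noise.
rng = np.random.default_rng(0)
t = np.arange(2000)
healthy = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.standard_normal(t.size)

res = ar_residual(healthy)
# The AR model absorbs the sinusoidal component, so the residual is
# much smaller than the raw signal.
print(res.std(), healthy.std())
```

In the paper's setting the model order and parameters would adapt to the fluctuating operating conditions; deviations of the residual's statistics from their healthy baseline would then indicate a developing fault.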
Abstract:
A novel method is defined for modelling the statistics of 2D photographic images, useful in image restoration. The new method is based on the Dual Tree Complex Wavelet Transform (DT-CWT), but a phase rotation is applied to the coefficients to create complex coefficients whose phase is shift-invariant at multiscale edge and ridge features. This is in addition to the magnitude shift-invariance achieved by the DT-CWT. The increased correlation between coefficients adjacent in space and scale provides an improved mechanism for signal estimation. © 2006 IEEE.
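The magnitude shift-invariance that motivates complex wavelets can be illustrated without the DT-CWT itself, using the analytic signal (a numpy-only Hilbert transform) as a stand-in for a complex subband: the complex magnitude of a ridge feature barely changes under a small shift, while the raw real-valued samples change substantially. The signal and thresholds below are illustrative assumptions.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero the negative frequencies,
    double the positive ones (a numpy-only Hilbert transform)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1 : len(x) // 2] = 2.0
    h[len(x) // 2] = 1.0
    return np.fft.ifft(X * h)

# A localised oscillatory "ridge" feature and a slightly shifted copy.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
ridge = np.exp(-((t - 0.5) ** 2) / 0.001) * np.cos(2 * np.pi * 60.0 * t)
shifted = np.roll(ridge, 3)

# The complex magnitude (envelope) is nearly unchanged by the shift,
# whereas the real-valued samples change by roughly their full amplitude.
mag_change = np.abs(np.abs(analytic_signal(ridge)) - np.abs(analytic_signal(shifted))).max()
raw_change = np.abs(ridge - shifted).max()
print(mag_change, raw_change)
```

The DT-CWT achieves this property per subband and per scale; the paper's additional phase rotation then makes the *phase* of the coefficients shift-invariant at edges and ridges as well.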
Abstract:
Reinforcement learning techniques have been successfully used to maximise the expected cumulative reward of statistical dialogue systems. Typically, reinforcement learning is used to estimate the parameters of a dialogue policy, which selects the system's responses based on the inferred dialogue state. However, the inference of the dialogue state itself depends on a dialogue model which describes the expected behaviour of a user when interacting with the system. Ideally, the parameters of this dialogue model should also be optimised to maximise the expected cumulative reward. This article presents two novel reinforcement learning algorithms for learning the parameters of a dialogue model. First, the Natural Belief Critic algorithm is designed to optimise the model parameters while the policy is kept fixed. This algorithm is suitable, for example, in systems using a handcrafted policy, perhaps prescribed by other design considerations. Second, the Natural Actor and Belief Critic algorithm jointly optimises both the model and the policy parameters. The algorithms are evaluated on a statistical dialogue system modelled as a Partially Observable Markov Decision Process in a tourist information domain. The evaluation is performed with a user simulator and with real users. The experiments indicate that model parameters estimated to maximise the expected reward provide improved performance compared to the baseline handcrafted parameters. © 2011 Elsevier Ltd. All rights reserved.
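The core idea of estimating parameters to maximise expected reward can be sketched with plain REINFORCE on a two-reply toy problem (this is not the paper's Natural Belief Critic or natural-gradient machinery): a softmax policy is nudged so the reply the simulated user rewards more often gains probability. The reward probabilities, learning rate, and step count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.zeros(2)                    # policy parameters, one per candidate reply
true_reward = np.array([0.2, 0.8])     # hypothetical per-reply success rate of a user simulator

for _ in range(2000):
    probs = np.exp(theta) / np.exp(theta).sum()   # softmax policy
    a = rng.choice(2, p=probs)                    # sample a reply
    r = float(rng.random() < true_reward[a])      # stochastic binary reward
    grad = -probs
    grad[a] += 1.0                                # d log pi(a) / d theta
    theta += 0.1 * r * grad                       # REINFORCE update

probs = np.exp(theta) / np.exp(theta).sum()
# Probability mass should concentrate on the higher-reward reply.
print(probs)
```

In the paper's setting, the same reward-driven gradient idea is applied (via natural gradients and a critic) to the parameters of the dialogue model that drives belief-state inference, not just to the response-selection policy.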