6 results for Training method

in Cambridge University Engineering Department Publications Database


Relevance: 60.00%

Publisher:

Abstract:

The nonlinear modelling ability of neural networks is widely recognised as an effective tool for identifying and controlling dynamic systems, with applications including the nonlinear vehicle dynamics that this paper addresses using multi-layer perceptron networks. The existing neural network literature does not detail some of the factors which affect a network's nonlinear modelling ability. This paper investigates and draws conclusions on the required network size, structure and initial weights, considering results for networks with converged weights. The paper also presents an online training method and an error measure representing the network's parallel modelling ability over a range of operating conditions. Copyright © 2010 Inderscience Enterprises Ltd.
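A minimal sketch of what such an online training scheme can look like, assuming a one-hidden-layer perceptron, a toy nonlinear plant and an illustrative learning rate (network size, plant and hyperparameters are not taken from the paper): each incoming sample triggers a single backpropagation update, rather than batch training on a pre-collected data set.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy nonlinear plant (illustrative assumption): y[k] depends on the
    # previous output and the current input.
    def plant(y_prev, u):
        return 0.6 * np.tanh(y_prev) + 0.4 * u + 0.1 * u ** 2

    # One-hidden-layer MLP with tanh units.
    n_in, n_hidden = 2, 8
    W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=n_hidden)
    b2 = 0.0
    lr = 0.05

    y_prev = 0.0
    for k in range(5000):                      # stream of samples = online training
        u = rng.uniform(-1.0, 1.0)
        y = plant(y_prev, u)                   # target produced by the plant
        x = np.array([y_prev, u])              # network input: past output + input

        h = np.tanh(W1 @ x + b1)               # forward pass
        y_hat = W2 @ h + b2
        e = y_hat - y                          # prediction error

        # Backpropagation for this single sample, then an immediate update.
        grad_W2 = e * h
        grad_b2 = e
        delta = e * W2 * (1.0 - h ** 2)
        grad_W1 = np.outer(delta, x)
        grad_b1 = delta

        W2 -= lr * grad_W2; b2 -= lr * grad_b2
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        y_prev = y

    print("squared error on last sample:", e ** 2)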

Relevance: 30.00%

Publisher:

Abstract:

The use of hidden Markov models is placed in a connectionist framework, and an alternative approach to improving their ability to discriminate between classes is described. Using a network style of training, a measure of discrimination based on the a posteriori probability of state occupation is proposed, and the theory for its optimization using error back-propagation and gradient ascent is presented. The method is shown to be numerically well behaved, and results are presented which demonstrate that when using a simple threshold test on the probability of state occupation, the proposed optimization scheme leads to improved recognition performance.
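The discrimination measure described above is built on the a posteriori probability of state occupation. A minimal sketch of that quantity alone, for a small discrete-observation HMM with arbitrary illustrative parameters (the network-style training and gradient ascent of the paper are not reproduced here):

    import numpy as np

    A = np.array([[0.7, 0.3],          # state transition probabilities (assumed)
                  [0.4, 0.6]])
    B = np.array([[0.5, 0.4, 0.1],     # emission probabilities per state (assumed)
                  [0.1, 0.3, 0.6]])
    pi = np.array([0.6, 0.4])          # initial state distribution
    obs = [0, 1, 2, 2, 1]              # observed symbol sequence

    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))

    alpha[0] = pi * B[:, obs[0]]                       # forward pass
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

    beta[T - 1] = 1.0                                  # backward pass
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    # Posterior state occupation probabilities gamma_t(j) = P(q_t = j | obs).
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    print(gamma)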

Relevance: 30.00%

Publisher:

Abstract:

In standard Gaussian Process regression, input locations are assumed to be noise-free. We present a simple yet effective GP model for training on input points corrupted by i.i.d. Gaussian noise. To make computations tractable we use a local linear expansion about each input point. This allows the input noise to be recast as output noise proportional to the squared gradient of the GP posterior mean. The input noise variances are inferred from the data as extra hyperparameters and are trained alongside the other hyperparameters by the usual method of maximising the marginal likelihood. Training uses an iterative scheme which alternates between optimising the hyperparameters and calculating the posterior gradient. Analytic predictive moments can then be found for Gaussian distributed test points. We compare our model to others over a range of different regression problems and show that it improves over current methods.
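A minimal sketch of the central idea, assuming a 1-D squared-exponential kernel and fixed illustrative hyperparameters, and showing a single correction pass rather than the full iterative scheme; the input noise variance, which the paper infers from the data, is assumed known here:

    import numpy as np

    def k_se(a, b, ell, sf2):
        """Squared-exponential kernel for 1-D inputs."""
        return sf2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

    rng = np.random.default_rng(1)
    n = 40
    x_true = np.linspace(-3, 3, n)
    var_x, var_y = 0.05, 0.01              # assumed input / output noise variances
    x = x_true + rng.normal(scale=np.sqrt(var_x), size=n)   # noisy observed inputs
    y = np.sin(x_true) + rng.normal(scale=np.sqrt(var_y), size=n)

    ell, sf2 = 1.0, 1.0                    # fixed illustrative hyperparameters
    K = k_se(x, x, ell, sf2)

    # Step 1: standard GP fit with the input noise ignored.
    alpha = np.linalg.solve(K + var_y * np.eye(n), y)

    # Gradient of the posterior mean at each training input (analytic for this kernel).
    dK = -(x[:, None] - x[None, :]) / ell ** 2 * K        # d k(x_i, x_j) / d x_i
    grad_mean = dK @ alpha

    # Step 2: refit with the input noise recast as extra, point-wise output noise
    # proportional to the squared gradient of the posterior mean.
    extra_noise = grad_mean ** 2 * var_x
    alpha = np.linalg.solve(K + np.diag(var_y + extra_noise), y)

    x_star = np.array([0.5])
    mean_star = k_se(x_star, x, ell, sf2) @ alpha
    print("predictive mean at 0.5:", mean_star[0])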

Relevance: 30.00%

Publisher:

Abstract:

A recent trend in spoken dialogue research is the use of reinforcement learning to train dialogue systems in a simulated environment. Past researchers have shown that the types of errors that are simulated can have a significant effect on simulated dialogue performance. Since modern systems typically receive an N-best list of possible user utterances, it is important to be able to simulate a full N-best list of hypotheses. This paper presents a new method for simulating such errors based on logistic regression, as well as a new method, based on the Dirichlet distribution, for simulating the structure of N-best lists of semantics and their probabilities. Off-line evaluations show that the new Dirichlet model gives a much closer match to the receiver operating characteristics (ROC) of the live data. Experiments also show that the logistic model produces confusions closer to those observed in live situations. The hope is that these new error models will improve the resulting performance of trained dialogue systems. © 2012 IEEE.
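A minimal sketch of the two simulation ingredients, with made-up feature, coefficients, dialogue acts and Dirichlet concentration (none of these values or structural choices come from the paper):

    import numpy as np

    rng = np.random.default_rng(7)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def simulate_nbest(true_semantics, confusion_pool, noise_feature, n_best=5):
        # Logistic regression on a single (assumed) feature: probability that
        # the recogniser confuses the true semantics under these conditions.
        w0, w1 = -1.0, -0.8                       # illustrative coefficients
        p_confuse = sigmoid(w0 + w1 * noise_feature)
        top_is_error = rng.random() < p_confuse

        # Build the hypothesis list: the true item plus sampled confusions.
        hyps = [true_semantics] + list(
            rng.choice(confusion_pool, size=n_best - 1, replace=False))
        if top_is_error:
            hyps[0], hyps[1] = hyps[1], hyps[0]   # an error pushes truth off the top

        # A Dirichlet draw over the list gives its (sorted) probability structure.
        probs = np.sort(rng.dirichlet(alpha=np.full(n_best, 0.3)))[::-1]
        return list(zip(hyps, probs))

    pool = ["inform(food=chinese)", "inform(food=thai)",
            "request(phone)", "confirm(area=centre)"]
    for hyp, p in simulate_nbest("inform(food=indian)", pool, noise_feature=1.5):
        print(f"{p:.3f}  {hyp}")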

Relevance: 30.00%

Publisher:

Abstract:

This paper introduces a novel method for training an acoustic model that is complementary to a set of given acoustic models. The method is based on an extension of the Minimum Phone Error (MPE) criterion and aims to produce a model whose phone errors complement those of the models already trained; the technique is therefore called Complementary Phone Error (CPE) training. The method is evaluated on an Arabic large vocabulary continuous speech recognition task. After combination with a CPE-trained system, reductions in word error rate (WER) of up to 0.7% absolute were obtained for a system trained on 172 hours of acoustic data, and of up to 0.2% absolute for the final system trained on nearly 2000 hours of Arabic data.
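The abstract names but does not state the criterion itself. For orientation, a sketch of the standard MPE objective that CPE extends (the CPE-specific treatment of the accuracy term is not given in the abstract and is not reproduced here):

\[
\mathcal{F}_{\mathrm{MPE}}(\lambda) \;=\; \sum_{r=1}^{R} \frac{\sum_{W} p_{\lambda}(\mathcal{O}_r \mid W)^{\kappa}\, P(W)\, A(W, W_r)}{\sum_{W'} p_{\lambda}(\mathcal{O}_r \mid W')^{\kappa}\, P(W')}
\]

where \(\mathcal{O}_r\) is the r-th training utterance, \(A(W, W_r)\) is the raw phone accuracy of hypothesis \(W\) against the reference transcription \(W_r\), \(\kappa\) is an acoustic scaling factor and \(\lambda\) are the acoustic model parameters being trained.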