957 results for Training Plan
Abstract:
This report compiles presentations from representatives of 12 countries, along with key outcomes and recommendations for the future.
Abstract:
Research included: population structure of Indian mackerel (Rastrelliger kanagurta); a National Plan of Action for the conservation and management of sharks; levels of heavy metals in shark products; and a database on rays.
Abstract:
Recurrence is a key characteristic in the development of epilepsy. It remains unclear whether seizure recurrence is sensitive to postseizure stress. Here, tonic-clonic seizures were induced with a convulsive dose of pentylenetetrazole (PTZ), and acute seizure recurrence was evoked with a subconvulsive dose of the drug. We found that stress inhibited seizure recurrence when applied 30 minutes or 2 hours, but not 4 hours, after the tonic-clonic seizure. The time-dependent anti-recurrence effect of stress was mimicked by the stress hormone corticosterone and blocked by co-administration of mineralocorticoid and glucocorticoid receptor antagonists. Furthermore, in a PTZ-induced epileptic kindling model, corticosterone administered 30 minutes after each seizure decreased the extent of seizures both during the kindling establishment and in the following challenge test. These results provide novel insights into both the mechanisms of and therapeutic strategies for epilepsy. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Trawling experiments carried out by the United Nations Development Programme Project and the Uganda Department of Fisheries, strongly suggest that the trawling method of fishing, if introduced on Lake Victoria, would bring about a tremendous increase in fish production from the lake. It is recognised, however, that before trawling is introduced, its economic, social, technical, biological and manpower implications must be carefully analysed. I now propose to discuss the training aspects of a trawl fishery on Lake Victoria.
Abstract:
This paper describes a structured SVM framework suitable for noise-robust medium/large vocabulary speech recognition. Several theoretical and practical extensions to previous work on small vocabulary tasks are detailed. The joint feature space based on word models is extended to allow context-dependent triphone models to be used. By interpreting the structured SVM as a large margin log-linear model, it is shown that there is an implicit assumption that the prior of the discriminative parameters is a zero-mean Gaussian. However, depending on the definition of the likelihood feature space, a non-zero prior may be more appropriate. A general Gaussian prior is incorporated into the large margin training criterion in a form that allows the cutting plane algorithm to be directly applied. To further speed up the training process, a 1-slack algorithm, caching of competing hypotheses, and parallelization strategies are also proposed. The performance of structured SVMs is evaluated on a noise-corrupted medium vocabulary speech recognition task: AURORA 4. © 2011 IEEE.
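The non-zero-mean Gaussian prior described in the abstract amounts to replacing the usual ||w||² regulariser in a large-margin criterion with a penalty on the distance from a prior mean. A minimal sketch of such an objective (illustrative only; the feature and loss quantities here are toy assumptions, not the paper's joint feature space):

```python
# Hedged sketch, not the paper's system: a large-margin objective with a
# general Gaussian prior N(mu, I) on the discriminative parameters, so the
# regulariser penalises distance from a non-zero mean mu rather than from 0.
import numpy as np

def large_margin_objective(w, mu, C, phi_ref, phi_comp, losses):
    """0.5*||w - mu||^2 + C * sum_i max(0, loss_i - w.(phi_ref_i - phi_comp_i)).

    phi_ref / phi_comp: joint feature vectors of the reference and a
    competing hypothesis for each training utterance (toy stand-ins).
    """
    prior = 0.5 * np.sum((w - mu) ** 2)          # Gaussian prior term
    margins = (phi_ref - phi_comp) @ w           # score gap per utterance
    hinge = np.maximum(0.0, losses - margins)    # margin violations
    return prior + C * np.sum(hinge)
```

With mu = 0 this reduces to the standard zero-mean (SVM-style) regulariser, which is the implicit assumption the paper points out.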
Abstract:
In standard Gaussian Process regression, input locations are assumed to be noise-free. We present a simple yet effective GP model for training on input points corrupted by i.i.d. Gaussian noise. To make computations tractable, we use a local linear expansion about each input point. This allows the input noise to be recast as output noise proportional to the squared gradient of the GP posterior mean. The input noise variances are inferred from the data as extra hyperparameters and are trained alongside the other hyperparameters by the usual method of maximisation of the marginal likelihood. Training uses an iterative scheme which alternates between optimising the hyperparameters and calculating the posterior gradient. Analytic predictive moments can then be found for Gaussian-distributed test points. We compare our model to others over a range of regression problems and show that it improves over current methods.
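The local linear expansion described above turns input noise into extra, input-dependent output noise proportional to the squared posterior-mean gradient. A minimal sketch under a squared-exponential kernel (all function names and parameter values are illustrative assumptions, not the authors' code):

```python
# Hedged sketch of the input-noise correction: effective noise at a point
# becomes sigma_y^2 + grad(f_mean)^T Sigma_x grad(f_mean) under a local
# linearisation of the posterior mean. Toy hyperparameters throughout.
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, signal_var=1.0):
    """Squared-exponential (RBF) kernel matrix."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior_mean_and_grad(X, y, Xs, length_scale, signal_var, noise_var):
    """Posterior mean at Xs and its gradient w.r.t. the test inputs."""
    K = rbf_kernel(X, X, length_scale, signal_var) + noise_var * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    Ks = rbf_kernel(Xs, X, length_scale, signal_var)
    mean = Ks @ alpha
    # d/dx* k(x*, x_i) = -(x* - x_i)/l^2 * k(x*, x_i) for the RBF kernel
    diff = Xs[:, None, :] - X[None, :, :]            # (m, n, d)
    dKs = -diff / length_scale**2 * Ks[:, :, None]   # (m, n, d)
    grad = np.einsum('mnd,n->md', dKs, alpha)        # (m, d)
    return mean, grad

def corrected_noise(grad, output_noise_var, input_noise_var):
    """Recast diagonal input noise as extra heteroscedastic output noise."""
    return output_noise_var + np.sum(grad**2 * input_noise_var, axis=-1)
```

In the iterative scheme the abstract describes, the gradient would be recomputed after each round of hyperparameter optimisation and fed back into the corrected noise term.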
Abstract:
Vector Taylor Series (VTS) model-based compensation is a powerful approach for noise-robust speech recognition. An important extension to this approach is VTS adaptive training (VAT), which allows canonical models to be estimated on diverse noise-degraded training data. These canonical models can be estimated using EM-based approaches, allowing simple extensions to discriminative VAT (DVAT). However, to ensure a diagonal corrupted-speech covariance matrix, the Jacobian (loading matrix) relating the noise and clean speech is diagonalised. In this work, an approach for yielding optimal diagonal loading matrices based on minimising the expected KL-divergence between the diagonal loading matrix and "correct" distributions is proposed. The performance of DVAT using the standard and optimal diagonalisation was evaluated on both in-car collected data and the Aurora4 task. © 2012 IEEE.
Abstract:
A recent trend in spoken dialogue research is the use of reinforcement learning to train dialogue systems in a simulated environment. Past researchers have shown that the types of errors that are simulated can have a significant effect on simulated dialogue performance. Since modern systems typically receive an N-best list of possible user utterances, it is important to be able to simulate a full N-best list of hypotheses. This paper presents a new method for simulating such errors based on logistic regression, as well as a new method for simulating the structure of N-best lists of semantics and their probabilities, based on the Dirichlet distribution. Off-line evaluations show that the new Dirichlet model results in a much closer match to the receiver operating characteristics (ROC) of the live data. Experiments also show that the logistic model gives confusions closer to those observed in live situations. The hope is that these new error models will be able to improve the resulting performance of trained dialogue systems. © 2012 IEEE.
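The two models the abstract names can be sketched roughly as follows: a Dirichlet draw gives the probability structure of a simulated N-best list, and a logistic model decides whether the top hypothesis is correct. This is a hedged toy illustration; the concentration parameter and logistic weights are assumed values, not those fitted in the paper:

```python
# Hedged sketch, not the paper's implementation: simulating N-best list
# probability structure (Dirichlet) and top-hypothesis correctness (logistic).
import numpy as np

def simulate_nbest_probs(n_best, concentration, rng):
    """Sample a probability vector for an N-best list, sorted so the
    highest-confidence hypothesis comes first."""
    probs = rng.dirichlet(np.full(n_best, concentration))
    return np.sort(probs)[::-1]

def simulate_top_correct(confidence, rng, w0=-3.0, w1=6.0):
    """Logistic error model: P(top hypothesis correct | its confidence).
    Weights w0, w1 are illustrative, not fitted values."""
    p = 1.0 / (1.0 + np.exp(-(w0 + w1 * confidence)))
    return rng.random() < p
```

In an actual simulated-dialogue setup, samples like these would stand in for the recogniser's N-best output when training the dialogue policy.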
Abstract:
This paper introduces a novel method for the training of an acoustic model that is complementary with respect to a set of given acoustic models. The method is based upon an extension of the Minimum Phone Error (MPE) criterion and aims at producing a model that makes phone errors complementary to those of the models already trained. The technique is therefore called Complementary Phone Error (CPE) training. The method is evaluated using an Arabic large vocabulary continuous speech recognition task. Reductions in word error rate (WER) after combination with a CPE-trained system were obtained: up to 0.7% absolute for a system trained on 172 hours of acoustic data, and up to 0.2% absolute for the final system trained on nearly 2000 hours of Arabic data.