862 results for Error correction model
Abstract:
The majority of computational studies of confined explosion hazards apply simple and inaccurate combustion models, requiring ad hoc corrections to obtain realistic flame shapes and often predicting overpressures that are in error by an order of magnitude. This work describes the application of a laminar flamelet model to a series of two-dimensional test cases. The model is computationally efficient, applying an algebraic expression to calculate the flame surface area, an empirical correlation for the laminar flame speed, and a novel unstructured, solution-adaptive numerical grid system which allows important features of the solution to be resolved close to the flame. Accurate flame shapes are predicted, the correct burning rate is predicted near the walls, and an improvement in the predicted overpressures is obtained. However, in these fully turbulent calculations the overpressures are still too high and the flame arrival times too low, indicating the need for a model of the early laminar burning phase. Due to the computational expense, it is unrealistic to model a laminar flame in the complex geometries involved, and therefore a pragmatic approach is employed which constrains the flame to propagate at the laminar flame speed. Transition to turbulent burning occurs at a specified turbulent Reynolds number. With the laminar phase model included, the predicted flame arrival times increase significantly but are still too low. However, this has no significant effect on the overpressures, which are predicted accurately for a baffled channel test case where rapid transition occurs once the flame reaches the first pair of baffles. In a channel with obstacles on the centreline, transition is more gradual and the accuracy of the predicted overpressures is reduced. Nevertheless, although the accuracy is still less than desirable in some cases, it is much better than the order of magnitude error previously expected.
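The laminar-phase constraint described above lends itself to a very small sketch. The following is illustrative only, not the authors' code: the function names, the transition threshold, and the flame speed values are all assumptions; only the overall idea (propagate at the laminar flame speed until a specified turbulent Reynolds number is exceeded) follows the abstract.

```python
# Minimal sketch of the laminar-phase constraint (not the authors' code):
# the flame is forced to propagate at the laminar flame speed S_L until a
# specified turbulent Reynolds number is exceeded.

def turbulent_reynolds(u_prime: float, length_scale: float, nu: float) -> float:
    """Re_t = u' * l / nu, the usual turbulent Reynolds number."""
    return u_prime * length_scale / nu

def flame_speed(s_laminar: float, s_turbulent: float,
                u_prime: float, length_scale: float, nu: float,
                re_transition: float = 100.0) -> float:
    """Return the burning velocity, switching from laminar to turbulent
    burning once Re_t passes the (assumed) transition threshold."""
    if turbulent_reynolds(u_prime, length_scale, nu) < re_transition:
        return s_laminar          # early laminar phase
    return s_turbulent            # fully turbulent flamelet regime

# Example: a low-turbulence cell stays laminar, a high-turbulence cell does not.
print(flame_speed(0.4, 5.0, u_prime=0.1, length_scale=0.01, nu=1.5e-5))
print(flame_speed(0.4, 5.0, u_prime=2.0, length_scale=0.01, nu=1.5e-5))
```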
Abstract:
A parallel processing network derived from Kanerva's associative memory theory (Kanerva, 1984) is shown to be able to train rapidly on connected speech data and recognize further speech data with a label error rate of 0.68%. This modified Kanerva model can be trained substantially faster than other networks with comparable pattern discrimination properties. Kanerva presented his theory of a self-propagating search in 1984 and showed theoretically that large-scale versions of his model would have powerful pattern matching properties. This paper describes how the design of the modified Kanerva model is derived from Kanerva's original theory. Several designs are tested to discover which form may be implemented fastest while still maintaining versatile recognition performance. A method is developed to deal with the time-varying nature of the speech signal by recognizing static patterns together with a fixed quantity of contextual information. In order to recognize speech features in different contexts, a network must be able to model disjoint pattern classes. This type of modelling cannot be performed by a single layer of links. Network research was once held back by the inability of single-layer networks to solve this sort of problem and by the lack of a training algorithm for multi-layer networks. Rumelhart, Hinton & Williams (1985) provided one solution by demonstrating the "back propagation" training algorithm for multi-layer networks. A second alternative is used in the modified Kanerva model: a non-linear fixed transformation maps the pattern space into a space of higher dimensionality in which the speech features are linearly separable, and a single-layer network may then be used to perform the recognition. The advantage of this solution over multi-layer networks lies in the greater power and speed of the single-layer network training algorithm. © 1989.
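A minimal sketch of that architectural idea, assuming a random fixed expansion with a nearest-location activation rule (a crude stand-in for Kanerva's addressing) and a closed-form least-squares readout; the dimensions, activation count, and data below are placeholders, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, untrained non-linear expansion: random "address" vectors; a hidden
# unit fires when the input is among the k closest addresses (illustrative
# stand-in for Kanerva's location-activation rule).
n_in, n_hidden, n_classes = 39, 2000, 10
addresses = rng.standard_normal((n_hidden, n_in))

def expand(x):
    # Activate the 50 hidden locations nearest to the input pattern.
    d = np.linalg.norm(addresses - x, axis=1)
    h = np.zeros(n_hidden)
    h[np.argsort(d)[:50]] = 1.0
    return h

# Single-layer readout trained in closed form (ridge-regularised least
# squares) -- the fast training the abstract attributes to this design.
X = rng.standard_normal((500, n_in))                     # toy "speech frames"
Y = np.eye(n_classes)[rng.integers(0, n_classes, 500)]   # one-hot labels
H = np.stack([expand(x) for x in X])
W = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ Y)

pred = np.argmax(expand(X[0]) @ W, axis=-1)   # predicted class for one frame
```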
Abstract:
This study compared adaptation in novel force fields where trajectories were initially either stable or unstable to elucidate the processes of learning novel skills and adapting to new environments. Subjects learned to move in a null force field (NF), which was unexpectedly changed either to a velocity-dependent force field (VF), which resulted in perturbed but stable hand trajectories, or a position-dependent divergent force field (DF), which resulted in unstable trajectories. With practice, subjects learned to compensate for the perturbations produced by both force fields. Adaptation was characterized by an initial increase in the activation of all muscles followed by a gradual reduction. The time course of the increase in activation was correlated with a reduction in hand-path error for the DF but not for the VF. Adaptation to the VF could have been achieved solely by formation of an inverse dynamics model and adaptation to the DF solely by impedance control. However, indices of learning, such as hand-path error, joint torque, and electromyographic activation and deactivation suggest that the CNS combined these processes during adaptation to both force fields. Our results suggest that during the early phase of learning there is an increase in endpoint stiffness that serves to reduce hand-path error and provides additional stability, regardless of whether the dynamics are stable or unstable. We suggest that the motor control system utilizes an inverse dynamics model to learn the mean dynamics and an impedance controller to assist in the formation of the inverse dynamics model and to generate needed stability.
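The paper's conclusion, that the CNS combines an inverse dynamics model with an impedance controller, is often written as a composite control law. A hedged sketch follows; the gains, dimensions, and argument names are illustrative, not values from the study.

```python
import numpy as np

def torque_command(q, qd, q_des, qd_des, tau_inverse_dynamics,
                   K=np.diag([30.0, 25.0]), D=np.diag([2.0, 2.0])):
    """Composite controller in the spirit of the abstract's conclusion:
    a learned feedforward term (inverse dynamics model of the mean task
    dynamics) plus impedance feedback that supplies stability.
    K and D are illustrative joint stiffness and damping gains."""
    feedback = K @ (q_des - q) + D @ (qd_des - qd)   # impedance controller
    return tau_inverse_dynamics + feedback

# During early learning the CNS would effectively raise K (co-contraction,
# higher endpoint stiffness); as the inverse dynamics model improves, the
# feedforward term takes over and K can be reduced again.
```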
Abstract:
Recent developments in modeling driver steering control with preview are reviewed. While some validation with experimental data has been presented, the rigorous application of formal system identification methods has not yet been attempted. This paper describes a steering controller based on linear model-predictive control. An indirect identification method that minimizes the steering angle prediction error is developed. Special attention is given to filtering the prediction error so as to avoid the identification bias that arises from the closed-loop operation of the driver-vehicle system. The identification procedure is applied to data collected from 14 test drivers performing double lane change maneuvers in an instrumented vehicle. The procedure successfully finds parameter values for the model that give small prediction errors, and it is also able to distinguish between the different steering strategies adopted by the test drivers. © 2006 IEEE.
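A sketch of the indirect identification idea, assuming a placeholder driver model and pre-filter; only the overall shape (minimize a filtered steering angle prediction error) follows the abstract, and all names, data, and filter coefficients are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import lfilter

def predicted_steer(theta, road_preview, vehicle_states):
    # Placeholder driver model: linear in preview and current states.
    k_preview, k_state = theta
    return k_preview * road_preview + k_state * vehicle_states

def residuals(theta, steer_measured, road_preview, vehicle_states, b, a):
    # Pre-filtering the prediction error is what guards against the
    # closed-loop identification bias mentioned in the abstract.
    err = steer_measured - predicted_steer(theta, road_preview, vehicle_states)
    return lfilter(b, a, err)

# Toy data standing in for one double-lane-change run.
rng = np.random.default_rng(1)
preview, states = rng.standard_normal(200), rng.standard_normal(200)
steer = 0.8 * preview - 0.3 * states + 0.05 * rng.standard_normal(200)

fit = least_squares(residuals, x0=[0.0, 0.0],
                    args=(steer, preview, states, [1.0, -0.9], [1.0]))
print(fit.x)   # recovered parameters, roughly [0.8, -0.3] on this toy data
```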
Abstract:
A case study of an aircraft engine manufacturer is used to analyze the effects of management levers on the lead time and design errors generated in an iteration-intensive concurrent engineering process. The levers considered are amount of design-space exploration iteration, degree of process concurrency, and timing of design reviews. Simulation is used to show how the ideal combination of these levers can vary with changes in design problem complexity, which can increase, for instance, when novel technology is incorporated in a design. Results confirm that it is important to consider multiple iteration-influencing factors and their interdependencies to understand concurrent processes, because the factors can interact with confounding effects. The article also demonstrates a new approach to derive a system dynamics model from a process task network. The new approach could be applied to analyze other concurrent engineering scenarios. © The Author(s) 2012.
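As a toy illustration of the three levers (exploration iteration, concurrency, review timing), here is a Monte Carlo sketch; the mechanics and every parameter value are invented, and this is far simpler than the article's system dynamics model derived from a task network.

```python
import random

def simulate_process(n_tasks=10, concurrency=2, p_rework=0.3,
                     task_days=5, review_every=4, seed=0):
    """Toy stand-in for the article's simulation: tasks run in batches of
    `concurrency`; each task may generate a latent design error; periodic
    design reviews catch some errors at a small time cost."""
    random.seed(seed)
    days, latent_errors, done = 0, 0, 0
    while done < n_tasks:
        batch = min(concurrency, n_tasks - done)
        days += task_days                     # the batch runs in parallel
        for _ in range(batch):
            if random.random() < p_rework:
                latent_errors += 1            # an undetected design error
        done += batch
        if done % review_every == 0:          # design-review timing lever
            days += 1
            latent_errors = max(0, latent_errors - 2)
    return days, latent_errors

print(simulate_process(concurrency=1))   # serial: slow, more errors caught
print(simulate_process(concurrency=5))   # concurrent: fast, more escapes
```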
Abstract:
State-of-the-art large vocabulary continuous speech recognition (LVCSR) systems often combine outputs from multiple subsystems developed at different sites. Cross system adaptation can be used as an alternative to direct hypothesis level combination schemes such as ROVER. The standard approach involves only cross adapting acoustic models. To fully exploit the complementary features among sub-systems, language model (LM) cross adaptation techniques can be used. Previous research on multi-level n-gram LM cross adaptation is extended in this paper to further include the cross adaptation of neural network LMs. Using this improved LM cross adaptation framework, significant relative error rate reductions of 4.0%-7.1% were obtained over acoustic model only cross adaptation when combining a range of Chinese LVCSR sub-systems used in the 2010 and 2011 DARPA GALE evaluations. Copyright © 2011 ISCA.
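Schematically, cross system adaptation extended from acoustic models to language models looks like the following; the object methods are hypothetical placeholders, not any toolkit's real API.

```python
# Schematic of cross system adaptation (method names are placeholders):
# each sub-system is adapted on the *other* system's hypotheses before
# re-decoding, instead of merging hypotheses directly as ROVER does.

def cross_adapt(system_a, system_b, audio):
    hyp_a = system_a.decode(audio)
    hyp_b = system_b.decode(audio)

    # Standard practice: adapt only the acoustic models cross-wise.
    system_a.adapt_acoustic_model(audio, supervision=hyp_b)
    system_b.adapt_acoustic_model(audio, supervision=hyp_a)

    # Extension in this paper: also cross adapt the language models
    # (multi-level n-gram and neural network LMs) on the other system's
    # output, so complementary word and sub-word features are exploited.
    system_a.adapt_language_model(supervision=hyp_b)
    system_b.adapt_language_model(supervision=hyp_a)

    return system_a.decode(audio), system_b.decode(audio)
```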
Abstract:
Language models (LMs) are often constructed by building multiple individual component models that are combined using context independent interpolation weights. By tuning these weights, using either perplexity or discriminative approaches, it is possible to adapt LMs to a particular task. This paper investigates the use of context dependent weighting in both interpolation and test-time adaptation of language models. Depending on the previous word contexts, a discrete history weighting function is used to adjust the contribution from each component model. As this dramatically increases the number of parameters to estimate, robust weight estimation schemes are required, and several approaches are described in this paper. The first approach is based on MAP estimation, where interpolation weights of lower order contexts are used as smoothing priors. The second approach uses training data to ensure robust estimation of LM interpolation weights; this can also serve as a smoothing prior for MAP adaptation. A normalized perplexity metric is proposed to handle the bias of the standard perplexity criterion towards corpus size. A range of schemes to combine weight information obtained from training data and test data hypotheses are also proposed to improve robustness during context dependent LM adaptation. In addition, a minimum Bayes' risk (MBR) based discriminative training scheme and an efficient weighted finite state transducer (WFST) decoding algorithm for context dependent interpolation are presented. The proposed technique was evaluated using a state-of-the-art Mandarin Chinese broadcast speech transcription task. Character error rate (CER) reductions of up to 7.3% relative were obtained, as well as consistent perplexity improvements. © 2012 Elsevier Ltd. All rights reserved.
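A sketch of the first, MAP-based weight estimation approach, assuming a simple count-plus-prior form; the prior strength tau and the data structures are illustrative, not the paper's exact formulation.

```python
def map_context_weights(counts, prior_weights, tau=10.0):
    """MAP-smoothed, context dependent interpolation weights.
    counts[h][m]: probability mass component m earned in word context h
    on the adaptation text; prior_weights[m]: the usual context
    independent weights (assumed to sum to 1), used as smoothing priors.
    tau is an assumed prior strength, not a value from the paper."""
    weights = {}
    for h, c in counts.items():
        total = sum(c.values())
        weights[h] = {m: (c.get(m, 0.0) + tau * prior_weights[m]) /
                         (total + tau)
                      for m in prior_weights}
    return weights

def interpolated_prob(word, h, component_probs, weights, fallback):
    # Unseen context: back off to lower-order (context independent) weights.
    lam = weights.get(h, fallback)
    return sum(lam[m] * component_probs[m](word, h) for m in lam)
```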
Abstract:
A recent trend in spoken dialogue research is the use of reinforcement learning to train dialogue systems in a simulated environment. Past researchers have shown that the types of errors that are simulated can have a significant effect on simulated dialogue performance. Since modern systems typically receive an N-best list of possible user utterances, it is important to be able to simulate a full N-best list of hypotheses. This paper presents a new method for simulating such errors based on logistic regression, as well as a new method for simulating the structure of N-best lists of semantics and their probabilities, based on the Dirichlet distribution. Off-line evaluations show that the new Dirichlet model results in a much closer match to the receiver operating characteristics (ROC) of the live data. Experiments also show that the logistic model gives confusions that are closer to the type of confusions observed in live situations. The hope is that these new error models will be able to improve the resulting performance of trained dialogue systems. © 2012 IEEE.
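A compact sketch of the two simulation components, with invented parameter values (Dirichlet concentrations, logistic weights, the channel-quality feature) standing in for the trained ones.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_nbest(true_semantics, confusable, n=5,
                   alpha=(2.0, 1.0, 0.7, 0.5, 0.3), w0=-1.0, w1=3.0):
    """Illustrative version of the abstract's two models: a logistic model
    decides whether the top hypothesis is confused, and a Dirichlet draw
    shapes the N-best confidence scores. All parameters are made up."""
    probs = np.sort(rng.dirichlet(alpha[:n]))[::-1]   # confidence structure

    # Logistic-regression-style error model: P(top hypothesis is correct)
    # given a notional channel-quality feature x.
    x = rng.normal(0.5, 0.2)
    p_correct = 1.0 / (1.0 + np.exp(-(w0 + w1 * x)))

    items = [true_semantics] + list(rng.choice(confusable, n - 1,
                                               replace=False))
    if rng.random() > p_correct:          # simulate a confusion at rank 1
        items[0], items[1] = items[1], items[0]
    return list(zip(items, probs))

print(simulate_nbest("inform(food=italian)",
                     ["inform(food=indian)", "inform(area=north)",
                      "request(phone)", "negate()"], n=5))
```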
Abstract:
This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to incorporating priors for the hidden variables. Two algorithms are presented for inference, one based on Gibbs sampling and one based on variational Bayes. Importantly, these algorithms may be implemented in the factorization of very large matrices with missing entries. The model is evaluated on a collaborative filtering task, where users have rated a collection of movies and the system is asked to predict their ratings for other movies. The Netflix data set is used for evaluation, which consists of around 100 million ratings. Using root mean-squared error (RMSE) as an evaluation metric, results show that the suggested model outperforms alternative factorization techniques. Results also show how Gibbs sampling outperforms variational Bayes on this task, despite the large number of ratings and model parameters. Matlab implementations of the proposed algorithms are available from cogsys.imm.dtu.dk/ordinalmatrixfactorization.
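To make the setup concrete, here is a toy sketch of the prediction and RMSE-evaluation side of an ordinal factorization model; the factor dimension, thresholds, and data are illustrative, and the paper's inference algorithms (Gibbs sampling, variational Bayes) are not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for the model's prediction step: a user/item inner product
# is mapped through ordered cut-points onto the rating scale 1-5.
n_users, n_items, k = 100, 50, 8
U, V = rng.standard_normal((n_users, k)), rng.standard_normal((n_items, k))
thresholds = np.array([-1.5, -0.5, 0.5, 1.5])   # cut-points between ratings

def predict(u, i):
    score = U[u] @ V[i]
    return 1 + np.searchsorted(thresholds, score)   # ordinal rating 1..5

def rmse(test_triples):
    """The evaluation metric used in the abstract: root mean squared error."""
    err = [predict(u, i) - r for u, i, r in test_triples]
    return float(np.sqrt(np.mean(np.square(err))))

test = [(rng.integers(n_users), rng.integers(n_items), rng.integers(1, 6))
        for _ in range(1000)]
print(rmse(test))   # large on random factors; training would reduce it
```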
Abstract:
State-of-the-art large vocabulary continuous speech recognition (LVCSR) systems often combine outputs from multiple sub-systems that may even be developed at different sites. Cross system adaptation, in which model adaptation is performed using the outputs from another sub-system, can be used as an alternative to hypothesis level combination schemes such as ROVER. Normally, cross adaptation is only performed on the acoustic models. However, there are many other levels in LVCSR systems' modelling hierarchy where complementary features may be exploited, for example the sub-word and the word level, to further improve cross adaptation based system combination. It is thus interesting to also cross adapt language models (LMs) to capture these additional useful features. In this paper cross adaptation is applied to three forms of language models: a multi-level LM that models both syllable and word sequences, a word level neural network LM, and the linear combination of the two. Significant relative error rate reductions of 4.0-7.1% were obtained over ROVER and acoustic model only cross adaptation when combining a range of Chinese LVCSR sub-systems used in the 2010 and 2011 DARPA GALE evaluations. © 2012 Elsevier Ltd. All rights reserved.
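The linear combination of the two component LMs is the simplest piece to write down. A minimal sketch, with the interpolation weight as a placeholder to be tuned on held-out data:

```python
import math

def combined_logprob(word, history, p_multilevel, p_neural, lam=0.5):
    """Linear interpolation of the two component LMs named above.
    p_multilevel / p_neural are callables returning P(word | history);
    lam is the interpolation weight (the value here is a placeholder,
    not the paper's tuned weight)."""
    p = lam * p_multilevel(word, history) + (1.0 - lam) * p_neural(word, history)
    return math.log(p)
```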
Abstract:
This paper introduces a novel method for training an acoustic model that is complementary to a set of given acoustic models. The method is based upon an extension of the Minimum Phone Error (MPE) criterion and aims at producing a model that makes phone errors complementary to those of the models already trained. The technique is therefore called Complementary Phone Error (CPE) training. The method is evaluated using an Arabic large vocabulary continuous speech recognition task. Word error rate (WER) reductions after combination with a CPE-trained system were up to 0.7% absolute for a system trained on 172 hours of acoustic data, and up to 0.2% absolute for the final system trained on nearly 2000 hours of Arabic data.
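For reference, the standard MPE objective that CPE extends, in its commonly used form; the abstract does not give CPE's exact modification, which rewards hypotheses whose phone errors are complementary to those of the existing models, so only the baseline criterion is shown.

```latex
% Standard MPE objective: s ranges over hypotheses for utterance r,
% A(s, s_r) is the raw phone accuracy against the reference s_r, and
% kappa is the acoustic scaling factor.
\mathcal{F}_{\mathrm{MPE}}(\lambda) \;=\; \sum_{r}
  \frac{\sum_{s} p_{\lambda}(\mathbf{O}_r \mid s)^{\kappa}\, P(s)\, A(s, s_r)}
       {\sum_{s'} p_{\lambda}(\mathbf{O}_r \mid s')^{\kappa}\, P(s')}
```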
Abstract:
An alternate combinational approach of genetic algorithm and neural network (AGANN) is presented to correct the systematic error of density functional theory (DFT) calculations. It treats the DFT calculation as a black box and models the error through external statistical information. As a demonstration, the AGANN method has been applied to the correction of the lattice energies from DFT calculations for 72 metal halides and hydrides. With the AGANN correction, the mean absolute value of the relative errors of the calculated lattice energies with respect to the experimental values decreases from 4.93% to 1.20% in the testing set. For comparison, a neural network approach alone reduces the mean value to 2.56%, and the common combinational approach of genetic algorithm and neural network brings it to 2.15%. The multiple linear regression method has almost no correction effect here.
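The black-box error-modelling idea is easy to sketch: fit a regressor to the residual between experiment and DFT, then add the predicted error back. The data and network below are toys, and the genetic-algorithm half of AGANN (alternately optimising the network) is omitted, so this shows only the plain neural-network baseline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
descriptors = rng.standard_normal((72, 6))    # stand-in features per compound
e_dft = rng.normal(700, 50, 72)               # toy DFT lattice energies
e_exp = e_dft * (1 + 0.05 * np.tanh(descriptors[:, 0]))  # toy "experiment"

# Model the *error*, not the energy: DFT stays a black box.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(descriptors, e_exp - e_dft)

e_corrected = e_dft + model.predict(descriptors)
rel_err = np.abs((e_corrected - e_exp) / e_exp)
print(rel_err.mean())   # mean absolute relative error after correction
```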
Abstract:
A new theoretical model of pattern recognition is proposed, based on "matter cognition" instead of the "matter classification" of traditional statistical pattern recognition. This new model is closer to the way humans recognize objects than traditional statistical pattern recognition, which uses "optimal separation" as its main principle; the new model is therefore called Biomimetic Pattern Recognition (BPR). Its mathematical basis is the topological analysis of the sample set in the high dimensional feature space, so it is also called Topological Pattern Recognition (TPR). The fundamental idea of this model is the continuity, in the feature space, of samples belonging to the same class. We experimented with BPR using artificial neural networks that act by covering the high dimensional geometrical distribution of the sample set in the feature space. Omnidirectional cognition tests were performed on various kinds of animal and vehicle models of rather similar shapes. Over the 8800 tests in total, the correct recognition rate was 99.87% and the rejection rate 0.13%; under the condition of zero error rate, the correct rate of BPR was much better than that of RBF-SVM.
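A crude sketch of the covering idea behind BPR, using fixed-radius balls around training samples in place of the paper's neural-network covers; the radius and data are illustrative. The key behavioural difference from "optimal separation" is the `None` branch: a sample outside every cover is rejected rather than forced into a class.

```python
import numpy as np

class SphereCover:
    """Stand-in for BPR's covering idea: approximate each class's region
    in feature space by balls around its training samples; anything
    outside every cover is rejected, not misclassified."""
    def __init__(self, radius=1.0):
        self.radius, self.classes = radius, {}

    def fit(self, X, y):
        for label in np.unique(y):
            self.classes[label] = X[y == label]
        return self

    def predict(self, x):
        for label, centers in self.classes.items():
            if np.min(np.linalg.norm(centers - x, axis=1)) <= self.radius:
                return label
        return None   # rejection: x lies on no known class's manifold

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
cover = SphereCover(radius=0.8).fit(X, y)
print(cover.predict(np.array([0.1, 0.0])),    # inside class 0's cover
      cover.predict(np.array([10.0, 10.0])))  # rejected
```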
Abstract:
This paper develops a reduction-based model updating technique for jacket offshore platform structures. A reduced model is used instead of the direct finite-element (FE) model of the real structure in order to circumvent difficulties such as the huge number of degrees of freedom and incomplete experimental data that usually trouble civil engineers during model updating. The whole process consists of three steps: reduction of the FE model, a first model updating to minimize the reduction error, and a second model updating to minimize the modeling error between the reduced model and the real structure. Based on the characteristics of jacket platforms, a local-rigidity assumption is employed to obtain the reduced model. The technique is applied to a down-scaled model of a four-legged offshore platform, where its effectiveness is demonstrated. Furthermore, a comparison between the real structure and its numerical models in the subsequent model validation shows that the updated models approximate the real structure well. Some remaining difficulties in the field of model updating are also discussed.
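The paper's reduction step relies on a platform-specific local-rigidity assumption; as a generic stand-in, the classic Guyan (static) condensation below illustrates what "reduction of the FE model" means. This is a textbook technique, not the paper's method.

```python
import numpy as np

def guyan_reduction(K, M, master_idx):
    """Static (Guyan) condensation of symmetric stiffness/mass matrices.
    `master_idx` are the retained degrees of freedom; slave DOFs are
    eliminated assuming they respond statically to the masters."""
    master_idx = np.asarray(master_idx)
    n = K.shape[0]
    slave_idx = np.setdiff1d(np.arange(n), master_idx)
    Kms = K[np.ix_(master_idx, slave_idx)]
    Kss = K[np.ix_(slave_idx, slave_idx)]
    # Transformation T maps master DOFs to all DOFs: x_s = -Kss^-1 Ksm x_m.
    T = np.vstack([np.eye(len(master_idx)),
                   -np.linalg.solve(Kss, Kms.T)])
    # Reorder the rows of T back to the original DOF ordering.
    order = np.argsort(np.concatenate([master_idx, slave_idx]))
    T = T[order]
    return T.T @ K @ T, T.T @ M @ T   # reduced stiffness and mass matrices
```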
Abstract:
Target transformation factor analysis was used to correct spectral interference in inductively coupled plasma atomic emission spectrometry (ICP-AES) for the determination of rare earth impurities in high purity thulium oxide. The data matrix was constructed from pure and mixture vectors and a background vector. A method based on an error evaluation function was proposed to optimize the peak position, so that the influence of peak position shifts in the spectral scans on the determination was eliminated or reduced. Satisfactory results were obtained using factor analysis together with the proposed peak position optimization method.
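A bare-bones sketch of target transformation factor analysis plus a peak-shift search driven by an error evaluation function; the matrix shapes, residual definition, and shift range are assumptions, not the paper's exact procedure.

```python
import numpy as np

def target_test(D, target, n_factors):
    """Project a candidate spectrum (`target`) onto the abstract factor
    space of the data matrix D (wavelength channels x samples); a small
    residual suggests the target is a real contributor to the spectra."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    basis = U[:, :n_factors]                 # abstract factors
    projected = basis @ (basis.T @ target)   # least-squares reconstruction
    residual = np.linalg.norm(target - projected) / np.linalg.norm(target)
    return projected, residual

def best_peak_shift(D, target, n_factors, shifts=range(-3, 4)):
    """Peak position optimisation in the spirit of the abstract's error
    evaluation function: try small channel shifts of the target vector
    and keep the one with the smallest target-test residual."""
    scored = [(target_test(D, np.roll(target, k), n_factors)[1], k)
              for k in shifts]
    return min(scored)[1]
```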