947 results for "Measurement error"
Abstract:
A recent trend in spoken dialogue research is the use of reinforcement learning to train dialogue systems in a simulated environment. Past researchers have shown that the types of errors that are simulated can have a significant effect on simulated dialogue performance. Since modern systems typically receive an N-best list of possible user utterances, it is important to be able to simulate a full N-best list of hypotheses. This paper presents a new method for simulating such errors based on logistic regression, as well as a new method for simulating the structure of N-best lists of semantics and their probabilities, based on the Dirichlet distribution. Off-line evaluations show that the new Dirichlet model results in a much closer match to the receiver operating characteristics (ROC) of the live data. Experiments also show that the logistic model gives confusions that are closer to the type of confusions observed in live situations. The hope is that these new error models will be able to improve the resulting performance of trained dialogue systems. © 2012 IEEE.
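As a reading aid, here is a minimal sketch (not the paper's implementation) of how an N-best error simulator in this spirit could be structured: the confidence scores of the list are drawn from a Dirichlet distribution, and a logistic function decides whether each hypothesis is a confusion of the true user act. The `confuse` helper, the rank-based feature, and all parameter values are illustrative assumptions.

```python
# Minimal sketch: simulate an N-best list of user-act hypotheses with
# Dirichlet-distributed confidence scores, and decide per hypothesis whether
# to corrupt the true act using a logistic error model. All parameter values
# and the `confuse` helper are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_nbest(true_act, n_best=5, alpha=0.6, error_weights=(-1.5, 3.0)):
    """Return a simulated N-best list of (hypothesis, confidence) pairs."""
    # Dirichlet draw gives a normalized confidence distribution over the list.
    confidences = np.sort(rng.dirichlet([alpha] * n_best))[::-1]

    w0, w1 = error_weights
    hypotheses = []
    for rank, conf in enumerate(confidences):
        # Logistic model: probability that this hypothesis is a confusion,
        # here driven by rank (lower-ranked entries are more often wrong).
        p_error = 1.0 / (1.0 + np.exp(-(w0 + w1 * rank / n_best)))
        hyp = confuse(true_act) if rng.random() < p_error else true_act
        hypotheses.append((hyp, float(conf)))
    return hypotheses

def confuse(act):
    # Placeholder confusion: a real simulator would substitute plausible
    # alternative dialogue acts or semantic slot values here.
    return act + "<confused>"

print(simulate_nbest("inform(food=italian)"))
```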
Abstract:
This paper introduces a novel method for the training of a complementary acoustic model with respect to a set of given acoustic models. The method is based on an extension of the Minimum Phone Error (MPE) criterion and aims at producing a model that makes phone errors complementary to those of the models already trained. The technique is therefore called Complementary Phone Error (CPE) training. The method is evaluated on an Arabic large vocabulary continuous speech recognition task. Reductions in word error rate (WER) of up to 0.7% absolute were obtained after combination with a CPE-trained system for a system trained on 172 hours of acoustic data, and of up to 0.2% absolute for the final system trained on nearly 2000 hours of Arabic data.
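For context, the standard MPE objective (from the general MPE literature, not reproduced from this paper) can be sketched as below; kappa is an acoustic scale and A(s, s_r) is the raw phone accuracy of hypothesis s against the reference s_r. CPE, as described above, would replace A with an accuracy term that rewards making phone errors different from those of the already-trained models; the exact form of that term is not specified here.

```latex
% Sketch of the standard MPE objective; the CPE-specific accuracy term would
% take the place of A(s, s_r). Constants and lattice details are omitted.
\mathcal{F}_{\mathrm{MPE}}(\lambda)
  = \sum_{r} \frac{\sum_{s} p_{\lambda}(O_r \mid s)^{\kappa}\, P(s)\, A(s, s_r)}
                  {\sum_{s'} p_{\lambda}(O_r \mid s')^{\kappa}\, P(s')}
```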
Abstract:
This paper studies the random-coding exponent of joint source-channel coding for a scheme where source messages are assigned to disjoint subsets (referred to as classes), and codewords are independently generated according to a distribution that depends on the class index of the source message. For discrete memoryless systems, two optimally chosen classes and product distributions are found to be sufficient to attain the sphere-packing exponent in those cases where it is tight. © 2014 IEEE.
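A minimal sketch of the class-based codebook construction described above: source messages are split into disjoint classes, and each class generates its codewords i.i.d. from its own product distribution. The two-class split by thresholding the message probability and all numeric values are illustrative assumptions.

```python
# Minimal sketch of the class-based random-coding construction: each message
# is assigned to a class, and its codeword is drawn i.i.d. from the input
# distribution associated with that class. Values below are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def build_codebook(msg_probs, class_dists, threshold, block_len, alphabet):
    """msg_probs: source message probabilities; class_dists: one channel-input
    distribution per class; returns a codeword per message."""
    codebook = {}
    for m, p in enumerate(msg_probs):
        cls = 0 if p >= threshold else 1          # two classes suffice here
        q = class_dists[cls]
        # Codeword drawn i.i.d. from the product distribution of its class.
        codebook[m] = rng.choice(alphabet, size=block_len, p=q)
    return codebook

msg_probs = [0.4, 0.3, 0.2, 0.1]
class_dists = [np.array([0.5, 0.5]), np.array([0.8, 0.2])]
cb = build_codebook(msg_probs, class_dists, threshold=0.25,
                    block_len=8, alphabet=np.array([0, 1]))
print(cb)
```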
Abstract:
An alternate combinational approach of genetic algorithm and neural network (AGANN) is presented to correct the systematic error of density functional theory (DFT) calculations. It treats the DFT as a black box and models the error through external statistical information. As a demonstration, the AGANN method has been applied to the correction of the lattice energies from DFT calculations for 72 metal halides and hydrides. With the AGANN correction, the mean absolute value of the relative errors of the calculated lattice energies with respect to the experimental values decreases from 4.93% to 1.20% on the test set. For comparison, the neural network approach reduces the mean value to 2.56%, and the conventional combinational approach of genetic algorithm and neural network reduces it to 2.15%. The multiple linear regression method has almost no correction effect here.
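A highly simplified sketch, in the spirit of the AGANN idea, in which a genetic algorithm searches over descriptor subsets while a small neural network models the DFT error for each candidate; the descriptors, GA operators, and network settings are illustrative assumptions rather than the authors' configuration.

```python
# Simplified GA + neural-network error-correction sketch (not the authors'
# setup): the GA evolves boolean masks over input descriptors, and a small
# MLP trained on the selected descriptors provides the fitness signal.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def fitness(mask, X, y):
    """Train a small NN on the selected descriptors; fitness = -validation MAE."""
    if mask.sum() == 0:
        return -np.inf
    Xtr, Xva, ytr, yva = train_test_split(X[:, mask], y, test_size=0.3, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(Xtr, ytr)
    return -np.mean(np.abs(net.predict(Xva) - yva))

def ga_select_descriptors(X, y, pop=12, gens=10):
    n_feat = X.shape[1]
    population = rng.integers(0, 2, size=(pop, n_feat)).astype(bool)
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]   # keep the best half
        children = parents.copy()
        flips = rng.random(children.shape) < 0.1                # bit-flip mutation
        children ^= flips
        population = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[np.argmax(scores)]

# X: descriptors of each compound, y: systematic DFT error (synthetic data here).
X = rng.normal(size=(72, 6))
y = 0.5 * X[:, 0] - 0.2 * X[:, 3] + 0.05 * rng.normal(size=72)
best_mask = ga_select_descriptors(X, y)
print("selected descriptors:", np.flatnonzero(best_mask))
```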
Abstract:
A new method to measure reciprocal four-port structures, using a 16-term error model, is presented. The measurement is based on five two-port calibration standards connected to two of the ports, while the network analyzer is connected to the two remaining ports. Least-squares-fit data-reduction techniques are used to lower error sensitivity. The effect of connectors is de-embedded using closed-form equations. © 2007 Wiley Periodicals, Inc.
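A minimal numerical sketch of the least-squares data-reduction step: the equations contributed by all five standards are stacked into one overdetermined linear system in the 16 error terms and solved with a least-squares fit, which is what lowers the sensitivity to measurement noise. The matrices below are random stand-ins, not actual calibration data.

```python
# Least-squares sketch: stack more calibration equations than unknowns and
# solve A @ e = b for the 16 error terms. A and b are random placeholders
# standing in for quantities built from the standards and raw measurements.
import numpy as np

rng = np.random.default_rng(3)

n_error_terms = 16
rows_per_standard = 8           # illustrative number of equations per standard
n_standards = 5                 # five two-port calibration standards

A = rng.normal(size=(n_standards * rows_per_standard, n_error_terms))
e_true = rng.normal(size=n_error_terms)
b = A @ e_true + 0.01 * rng.normal(size=A.shape[0])   # small measurement noise

e_hat, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print("max error-term deviation:", np.max(np.abs(e_hat - e_true)))
```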
Abstract:
Formulation of a 16-term error model, based on the four-port ABCD-matrix and voltage and current variables, is outlined. Matrices A, B, C, and D are each 2 × 2 submatrices of the complete 4 × 4 error matrix. The corresponding equations are linear in terms of the error parameters, which simplifies the calibration process. The parallelism with network analyzer calibration procedures and the requirement of five two-port calibration measurements are stressed. Principles for a robust choice of equations are presented. While the formulation is suitable for any network analyzer measurement, it is expected to be a useful alternative to the nonlinear y-parameter approach used in intrinsic semiconductor electrical and noise parameter measurements and in the de-embedding of parasitics.
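A minimal sketch of the partition described above, with the ordering of the port variables an assumption:

```latex
% Sketch of the 16-term error model partition: the 4x4 error matrix maps the
% DUT-plane voltages and currents to the measured ones, with A, B, C, D each
% 2x2. The exact ordering of the port variables is assumed.
\begin{pmatrix} \mathbf{v}_{\mathrm{m}} \\ \mathbf{i}_{\mathrm{m}} \end{pmatrix}
=
\begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{pmatrix}
\begin{pmatrix} \mathbf{v}_{\mathrm{d}} \\ \mathbf{i}_{\mathrm{d}} \end{pmatrix},
\qquad \mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D} \in \mathbb{C}^{2\times 2}
```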
Abstract:
In recognition-based user interfaces, users' satisfaction is determined not only by recognition accuracy but also by the effort required to correct recognition errors. In this paper, we introduce a crossmodal error correction technique, which allows users to correct errors of Chinese handwriting recognition by speech. The focus of the paper is a multimodal fusion algorithm supporting the crossmodal error correction. By fusing handwriting and speech recognition, the algorithm can correct errors in both character extraction and recognition of handwriting. The experimental results indicate that the algorithm is effective and efficient. Moreover, the evaluation also shows that the correction technique helps users correct errors in handwriting recognition more efficiently than the other two error correction techniques.
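A minimal sketch, under assumed scoring conventions, of the kind of fusion step such a technique relies on: the handwriting and speech N-best scores for candidate characters are combined, and the best joint candidate is selected. The linear score combination and the weight value are illustrative, not the paper's algorithm.

```python
# Fusion sketch (illustrative only): combine normalized handwriting and speech
# N-best scores per candidate character and return the best joint candidate.
def fuse_candidates(handwriting_nbest, speech_nbest, weight=0.5):
    """Each argument maps candidate character -> normalized score in [0, 1]."""
    candidates = set(handwriting_nbest) | set(speech_nbest)
    fused = {
        c: weight * handwriting_nbest.get(c, 0.0)
           + (1.0 - weight) * speech_nbest.get(c, 0.0)
        for c in candidates
    }
    return max(fused, key=fused.get), fused

handwriting = {"未": 0.55, "末": 0.40, "木": 0.05}   # hypothetical scores
speech      = {"末": 0.70, "莫": 0.20, "未": 0.10}
best, scores = fuse_candidates(handwriting, speech)
print(best, scores)
```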
Abstract:
With the intermediate-complexity Zebiak-Cane model, we investigate the 'spring predictability barrier' (SPB) problem for El Nino events by tracing the evolution of conditional nonlinear optimal perturbation (CNOP), where CNOP is superimposed on the El Nino events and acts as the initial error with the biggest negative effect on the El Nino prediction. We show that the evolution of CNOP-type errors has an obvious seasonal dependence and yields a significant SPB, with the most severe SPB occurring in predictions made before the boreal spring in the growth phase of El Nino. The CNOP-type errors can be classified into two types: one possessing a sea-surface-temperature anomaly pattern with negative anomalies in the equatorial central-western Pacific, positive anomalies in the equatorial eastern Pacific, and a thermocline depth anomaly pattern with positive anomalies along the Equator; and another with patterns almost opposite to those of the former type. In predictions through the spring in the growth phase of El Nino, the initial error with the worst effect on the prediction tends to be the latter type of CNOP error, whereas in predictions through the spring in the decaying phase, the initial error with the biggest negative effect on the prediction is inclined to be the former type of CNOP error. Although the linear singular vector (LSV)-type errors also have patterns similar to the CNOP-type errors, they cover a more localized area than the CNOP-type errors and cause a much smaller prediction error, yielding a less significant SPB. Random errors in the initial conditions are also superimposed on El Nino events to investigate the SPB. We find that, regardless of when the predictions start, the random errors neither exhibit an obvious season-dependent evolution nor yield a large prediction error, and thus may not be responsible for the SPB phenomenon for El Nino events. These results suggest that the occurrence of the SPB is closely related to particular initial error patterns. The two kinds of CNOP-type error are most likely to cause a significant SPB. They have opposite signs and, consequently, opposite growth behaviours, a result which may demonstrate two dynamical mechanisms of error growth related to the SPB: in one case, the errors grow in a manner similar to El Nino; in the other, the errors develop with a tendency opposite to El Nino. The two types of CNOP error may be the most likely to provide information regarding the 'sensitive area' of El Nino-Southern Oscillation (ENSO) predictions. If these types of initial error exist in realistic ENSO predictions and if a target method or a data assimilation approach can filter them out, ENSO forecast skill may be improved. Copyright © 2009 Royal Meteorological Society
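For readers unfamiliar with CNOP, its standard definition can be sketched as follows; the particular norms and constraint radius used in this study are not specified here.

```latex
% Sketch of the standard CNOP definition: the initial perturbation x0' within
% a given amplitude constraint delta that maximizes the nonlinear prediction
% error at lead time t, where M_t is the nonlinear propagator of the model.
J(\mathbf{x}_{0\delta}^{*}) \;=\;
\max_{\|\mathbf{x}_{0}'\| \le \delta}
\big\| M_{t}(\mathbf{x}_{0} + \mathbf{x}_{0}') - M_{t}(\mathbf{x}_{0}) \big\|
```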
Abstract:
Affine transformations are often used in recognition systems to approximate the effects of perspective projection. The underlying mathematics assumes exact feature data, with no positional uncertainty. In practice, heuristics are added to handle uncertainty. We provide a precise analysis of affine point matching, obtaining an expression for the range of affine-invariant values consistent with bounded uncertainty. This analysis reveals that the range of affine-invariant values depends on the actual x-y positions of the features, i.e., with uncertainty, affine representations are not invariant with respect to the Cartesian coordinate system. We analyze the effect of this on geometric hashing and alignment recognition methods.
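A sketch of the affine-invariant coordinates underlying this analysis: relative to an ordered basis triple of feature points, a fourth point's coordinates (alpha, beta) are preserved by any affine map; with bounded positional error on the features, the recoverable (alpha, beta) occupy a region whose extent depends on the actual positions, as the abstract states.

```latex
% Affine coordinates of a point q with respect to the basis triple (p_0, p_1, p_2):
% (alpha, beta) are invariant under any affine transformation of the image plane.
\mathbf{q} \;=\; \mathbf{p}_0
  + \alpha\,(\mathbf{p}_1 - \mathbf{p}_0)
  + \beta\,(\mathbf{p}_2 - \mathbf{p}_0)
```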
Abstract:
The recognition of objects with smooth bounding surfaces from their contour images is considerably more complicated than that of objects with sharp edges, since in the former case the set of object points that generates the silhouette contours changes from one view to another. The "curvature method", developed by Basri and Ullman [1988], approximates the appearance of such objects from different viewpoints. In this paper we analyze the curvature method. We apply the method to ellipsoidal objects and compute analytically the error obtained for different rotations of the objects. The error depends on the exact shape of the ellipsoid (namely, the relative lengths of its axes), and it increases as the ellipsoid becomes "deep" (elongated in the Z-direction). We show that the errors are usually small and that, in general, a small number of models is required to predict the appearance of an ellipsoid from all possible views. Finally, we show experimentally that the curvature method applies as well to objects with hyperbolic surface patches.
Abstract:
In this paper, we bound the generalization error of a class of Radial Basis Function networks, for certain well-defined function learning tasks, in terms of the number of parameters and the number of examples. We show that the total generalization error is partly due to the insufficient representational capacity of the network (because of its finite size) and partly due to insufficient information about the target function (because of the finite number of samples). We make several observations about generalization error which are valid irrespective of the approximation scheme. Our result also sheds light on ways to choose an appropriate network architecture for a particular problem.
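A hedged sketch of the form such a bound typically takes (constants and exact logarithmic factors omitted), with n the number of basis functions, N the number of examples, and d the input dimension:

```latex
% Sketch of the approximation/estimation split for an n-unit RBF network
% trained on N samples in d dimensions: the first term reflects finite network
% size, the second finite data. Constants and log factors are indicative only.
\mathbb{E}\!\left[(f - \hat{f}_{n,N})^{2}\right]
\;\le\;
\underbrace{O\!\left(\tfrac{1}{n}\right)}_{\text{approximation}}
\;+\;
\underbrace{O\!\left(\sqrt{\tfrac{n\,d\,\ln(nN)}{N}}\right)}_{\text{estimation}}
```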