14 results for Accuracy.
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Inductive learning aims at finding general rules that hold true in a database. Targeted learning seeks rules for predicting the value of one variable from the values of others, as in linear or non-parametric regression analysis. Non-targeted learning finds regularities without a specific prediction goal. We model the product of non-targeted learning as rules stating that a certain phenomenon never happens, or that certain conditions necessitate another. For all types of rules there is a trade-off between a rule's accuracy and its simplicity, so rule selection can be viewed as a choice problem among pairs of degree of accuracy and degree of complexity. However, one cannot in general tell what the feasible set in the accuracy-complexity space is: we show formally that deciding whether a point belongs to this set is computationally hard. In particular, in the context of linear regression, finding a small set of variables that attains a given value of R² is computationally hard. Computational complexity may explain why a person is not always aware of rules that, if asked, she would find valid. This, in turn, may explain why one can change other people's minds (opinions, beliefs) without providing new information.
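The subset-selection hardness claim can be made concrete with a brute-force search: to certify that some small set of regressors reaches a given R², one essentially has to try exponentially many subsets. A minimal sketch with synthetic data (all names and numbers here are invented for illustration, not from the paper):

```python
# Brute-force search over all k-subsets of predictors for the best OLS R^2.
# The combinatorial loop is exactly the exponential search the hardness
# result concerns; data below is synthetic.
import itertools
import numpy as np

def best_r2(X, y, k):
    """Highest R^2 achievable by an OLS fit (with intercept) on any k columns of X."""
    sst = ((y - y.mean()) ** 2).sum()
    best = 0.0
    for cols in itertools.combinations(range(X.shape[1]), k):
        A = np.column_stack([np.ones(len(y)), X[:, cols]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = ((y - A @ coef) ** 2).sum()
        best = max(best, 1 - sse / sst)
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = 2 * X[:, 1] - X[:, 4] + 0.1 * rng.normal(size=60)
# With k = 2 the true pair {1, 4} should already explain almost all variance.
```

Even at this toy scale the loop visits C(8, 2) = 28 subsets; with p predictors and subset size k the count grows as C(p, k), which is what makes membership in the accuracy-complexity set hard to decide.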
Abstract:
In this paper the two main drawbacks of heat balance integral methods are examined. Firstly we investigate the choice of approximating function. For a standard polynomial form it is shown that combining the Heat Balance and Refined Integral methods to determine the power of the highest order term leads to the same, or more often greatly improved, accuracy over standard methods. Secondly we examine thermal problems with a time-dependent boundary condition, and in doing so develop a logarithmic approximating function. This new function allows us to model moving peaks in the temperature profile, a feature that previous heat balance methods cannot capture. If the boundary temperature varies so that at some time t > 0 it equals the far-field temperature, then standard methods predict that the temperature is everywhere at this constant value; the new method predicts the correct behaviour. It is also shown that this function provides even more accurate results, when coupled with the new combined integral method (CIM), than the polynomial profile. The analysis primarily focuses on a specified constant boundary temperature and is then extended to constant flux, Newton cooling and time-dependent boundary conditions.
Abstract:
When underwater vehicles navigate close to the ocean floor, computer vision techniques can be applied to obtain motion estimates. A complete system to create visual mosaics of the seabed is described in this paper. Unfortunately, the accuracy of the constructed mosaic is difficult to evaluate. The use of a laboratory setup to obtain an accurate error measurement is proposed. The system consists of a robot arm carrying a downward-looking camera. A pattern formed by a white background and a matrix of black dots uniformly distributed along the surveyed scene is used to find the exact image registration parameters. When the robot executes a trajectory (simulating the motion of a submersible), an image sequence is acquired by the camera. The estimated motion computed from the encoders of the robot is refined by detecting, to subpixel accuracy, the black dots of the image sequence, and computing the 2D projective transform which relates two consecutive images. The pattern is then substituted by a poster of the sea floor and the trajectory is executed again, acquiring the image sequence used to test the accuracy of the mosaicking system.
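The 2D projective transform between consecutive frames can be estimated from the detected dot centres with the standard direct linear transform (DLT). The sketch below uses synthetic correspondences and plain NumPy; it is a generic illustration of the estimation step, not the paper's detection pipeline:

```python
# Direct linear transform (DLT): estimate the 3x3 homography H, up to scale,
# from point correspondences; data below is synthetic.
import numpy as np

def fit_homography(src, dst):
    """src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)          # null-space vector = flattened H

def apply_h(H, pts):
    """Apply a homography to (N, 2) points in homogeneous coordinates."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:]

# Synthetic check: recover a known transform from exact correspondences.
H_true = np.array([[1.05, 0.02, 5.0], [-0.03, 0.98, -2.0], [1e-4, 2e-4, 1.0]])
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(12, 2))
dst = apply_h(H_true, src)
H_est = fit_homography(src, dst)
```

With real detections one would normalise the coordinates first (Hartley's preconditioning) and solve in a least-squares sense over all dots, since the subpixel centres carry noise.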
Abstract:
We present a computer vision system that associates omnidirectional vision with structured light with the aim of obtaining depth information for a 360 degrees field of view. The approach proposed in this article combines an omnidirectional camera with a panoramic laser projector. The article shows how the sensor is modelled, and its accuracy is demonstrated by means of experimental results. The proposed sensor provides useful information for robot navigation applications, pipe inspection, 3D scene modelling, etc.
Abstract:
One of the first useful products from the human genome will be a set of predicted genes. Besides its intrinsic scientific interest, the accuracy and completeness of this data set is of considerable importance for human health and medicine. Though progress has been made on computational gene identification in terms of both methods and accuracy evaluation measures, most of the sequence sets in which the programs are tested are short genomic sequences, and there is concern that these accuracy measures may not extrapolate well to larger, more challenging data sets. Given the absence of experimentally verified large genomic data sets, we constructed a semiartificial test set comprising a number of short single-gene genomic sequences with randomly generated intergenic regions. This test set, which should still present an easier problem than real human genomic sequence, mimics the approximately 200 kb long BACs being sequenced. In our experiments with these longer genomic sequences, the accuracy of GENSCAN, one of the most accurate ab initio gene prediction programs, dropped significantly, although its sensitivity remained high. Conversely, the accuracy of similarity-based programs, such as GENEWISE, PROCRUSTES, and BLASTX was not affected significantly by the presence of random intergenic sequence, but depended on the strength of the similarity to the protein homolog. As expected, the accuracy dropped if the models were built using more distant homologs, and we were able to quantitatively estimate this decline. However, the specificities of these techniques are still rather good even when the similarity is weak, which is a desirable characteristic for driving expensive follow-up experiments.
Our experiments suggest that though gene prediction will improve with every new protein that is discovered and through improvements in the current set of tools, we still have a long way to go before we can decipher the precise exonic structure of every gene in the human genome using purely computational methodology.
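The accuracy figures behind comparisons like these are typically nucleotide-level sensitivity (Sn) and specificity (Sp) over coding/non-coding labels. A minimal sketch with toy annotation strings; the definitions follow the standard convention in gene-prediction benchmarks (Sp here is what other fields call precision), not anything specific to this paper's test set:

```python
# Nucleotide-level sensitivity and specificity for gene prediction.
# '1' marks a coding position, '0' a non-coding one; strings are toy data.
def sn_sp(true_coding, pred_coding):
    """Return (Sn, Sp) from equal-length annotation strings of '1'/'0'."""
    pairs = list(zip(true_coding, pred_coding))
    tp = sum(t == '1' and p == '1' for t, p in pairs)   # coding, predicted coding
    fn = sum(t == '1' and p == '0' for t, p in pairs)   # coding, missed
    fp = sum(t == '0' and p == '1' for t, p in pairs)   # non-coding, over-predicted
    sn = tp / (tp + fn) if tp + fn else 0.0
    sp = tp / (tp + fp) if tp + fp else 0.0
    return sn, sp
```

Adding long random intergenic sequence leaves Sn (the fraction of true coding bases found) untouched in principle, but gives an ab initio predictor far more non-coding sequence in which to generate false positives, which is exactly the Sp drop the experiments report.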
Abstract:
Foreign trade statistics are the main data source for the study of international trade. However, their accuracy has been under suspicion since Morgenstern published his famous work in 1963. Federico and Tena (1991) have taken up the question again, arguing that the statistics can be useful at an adequate level of aggregation. But the geographical assignment problem remains unsolved. This article focuses on the spatial variable through the analysis of the reliability of international textile data for 1913. A geographical bias arises between export and import series, but its quantitative importance is small enough to be negligible on an international scale.
Abstract:
This paper proposes a nonparametric test to establish the level of accuracy of the foreign trade statistics of 17 Latin American countries when contrasted with the trade statistics of their main partners in 1925. The Wilcoxon Matched-Pairs Ranks test is used to determine whether the differences between the data registered by exporters and importers are meaningful, and if so, whether the differences are systematic in any direction. The paper tests the reliability of the data registered for two homogeneous products, petroleum and coal, in both volume and value. The conclusion of the several exercises performed is that in most cases we cannot accept the existence of statistically significant differences between the data provided by the exporters and those registered by the importing countries. The qualitative historiography of Latin America describes its foreign trade statistics as mostly unusable; our quantitative results contest this view.
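The Wilcoxon matched-pairs statistic reduces to ranking the absolute exporter-importer differences and summing the ranks of each sign. A hand-rolled sketch (zero differences dropped, tied magnitudes given average ranks; the trade figures below are invented, not the 1925 series):

```python
# Wilcoxon matched-pairs signed-rank statistic W = min(W+, W-).
# Pairs with zero difference are dropped; ties in |d| get average ranks.
def wilcoxon_w(x, y):
    d = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1                       # extend over a run of tied |d|
        avg = (i + j) / 2 + 1            # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    w_minus = sum(r for r, di in zip(ranks, d) if di < 0)
    return min(w_plus, w_minus)

# Hypothetical exporter vs importer figures for four country pairs:
w = wilcoxon_w([10, 12, 9, 14], [9, 10, 11, 11])
```

A small W (relative to the two-sided critical value for the sample size) signals a systematic discrepancy in one direction; in practice one would use scipy.stats.wilcoxon, which also returns a p-value.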
Abstract:
Projective homography sits at the heart of many problems in image registration. In addition to many methods for estimating the homography parameters (R.I. Hartley and A. Zisserman, 2000), analytical expressions to assess the accuracy of the transformation parameters have been proposed (A. Criminisi et al., 1999). We show that these expressions provide less accurate bounds than those based on the earlier results of Weng et al. (1989). The discrepancy becomes more critical in applications involving the integration of frame-to-frame homographies and their uncertainties, as in the reconstruction of terrain mosaics and the camera trajectory from flyover imagery. We demonstrate these issues through selected examples.
Abstract:
The uncertainties inherent to experimental differential scanning calorimetric data are evaluated. A new procedure is developed to perform the kinetic analysis of continuous heating calorimetric data when the heat capacity of the sample changes during the crystallization. The accuracy of isothermal calorimetric data is analyzed in terms of the peak-to-peak noise of the calorimetric signal and the baseline drift typical of differential scanning calorimetry equipment. Their influence on the evaluation of the kinetic parameters is discussed. An empirical construction of the time-temperature and temperature-heating-rate transformation diagrams, based on the kinetic parameters, is presented. The method is applied to the kinetic study of the primary crystallization of Te in an amorphous alloy of nominal composition Ga20Te80, obtained by rapid solidification.
Abstract:
The accuracy of peritoneoscopy and liver biopsy in the diagnosis of hepatic cirrhosis was compared in 473 consecutive patients submitted to both procedures. One hundred and fifty-two of them had cirrhosis diagnosed by one or both methods. There was 73% agreement between the two procedures. 'Apparent' false-negative results were 17.7% for peritoneoscopy and 9.3% for liver biopsy. The incidence of false-negative results in the diagnosis of cirrhosis can be reduced by combining both procedures.
Abstract:
Surveys of economic climate collect opinions of managers about the short-term future evolution of their business. Interviews are carried out on a regular basis, and responses record optimistic, neutral or pessimistic views about the economic outlook. We propose a method to evaluate the sampling error of the average opinion derived from this type of survey data. Our variance estimate is useful for interpreting historical trends and for deciding whether changes in the index from one period to another reflect a structural change or whether ups and downs can be attributed to sampling randomness. An illustration using real data from a survey of business managers' opinions is discussed.
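If the three response categories are coded +1/0/-1, the "average opinion" is the balance (share optimistic minus share pessimistic), and its sampling variance follows from the multinomial distribution. The sketch below is a textbook plug-in calculation under simple random sampling, not necessarily the authors' exact estimator (which may involve weighting):

```python
# Balance of a trichotomous opinion survey and its standard error,
# assuming i.i.d. responses coded +1 (optimistic), 0 (neutral), -1 (pessimistic).
def balance_and_se(responses):
    """Return (balance, standard error of the balance)."""
    r = list(responses)
    n = len(r)
    b = sum(r) / n                   # balance = p_plus - p_minus
    m2 = sum(v * v for v in r) / n   # second moment = p_plus + p_minus
    var = (m2 - b * b) / n           # Var(mean) = (E[X^2] - (E X)^2) / n
    return b, var ** 0.5

# Hypothetical wave: 50 optimistic, 30 neutral, 20 pessimistic answers.
b, se = balance_and_se([1] * 50 + [0] * 30 + [-1] * 20)
```

A period-to-period change in the index smaller than roughly twice the combined standard errors is then plausibly sampling noise rather than a structural shift, which is exactly the interpretive use the abstract describes.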
Abstract:
Budget forecasts have become increasingly important as a tool of fiscal management to influence the expectations of bond markets and the public at large. The inherent difficulty in projecting macroeconomic variables, together with political bias, thwarts the accuracy of budget forecasts. We improve accuracy by combining the forecasts of both private and public agencies for Italy over the period 1993-2012. A weighted combined forecast of the deficit/GDP ratio is superior to any single forecast. Deficits are hard to predict due to shifting economic conditions and political events. We test and compare predictive accuracy over time and find that, although a weighted combined forecast is robust to breaks, there is no significant improvement over a simple random walk (RW) model.
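A common way to form such a weighted combination is inverse-MSE weighting estimated on past forecast errors, with a random-walk benchmark for comparison. The sketch below is a generic illustration with synthetic numbers, not the paper's Italian deficit data or its exact weighting scheme:

```python
# Inverse-MSE forecast combination: weight each forecaster by the reciprocal
# of its past mean squared error; all series below are synthetic.
import numpy as np

def inverse_mse_weights(errors):
    """errors: (T, k) array of past forecast errors; returns k weights summing to 1."""
    inv = 1.0 / (errors ** 2).mean(axis=0)
    return inv / inv.sum()

def rmse(e):
    return float(np.sqrt((e ** 2).mean()))

rng = np.random.default_rng(1)
actual = np.cumsum(rng.normal(0, 0.5, 40))    # stand-in for a deficit/GDP series
f1 = actual + rng.normal(0.3, 0.6, 40)        # hypothetical biased public forecast
f2 = actual + rng.normal(-0.1, 0.8, 40)       # hypothetical noisier private forecast

# Estimate weights on the first half, combine on the hold-out half.
w = inverse_mse_weights(np.column_stack([f1 - actual, f2 - actual])[:20])
combo = w[0] * f1[20:] + w[1] * f2[20:]
rw = actual[19:39]                            # random-walk benchmark: last observed value
```

Comparing `rmse(combo - actual[20:])` against the individual forecasts and `rmse(rw - actual[20:])` mirrors the paper's exercise: the combination typically beats either source, while a persistent series can leave little room for improvement over the RW benchmark.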