962 results for Measurement errors


Relevance: 60.00%

Abstract:

In June 2000, the National Statistics Department of Colombia (DANE) adopted a new definition for measuring unemployment, following the standards suggested by the International Labour Organization (ILO). The change of definition implied a reduction in the unemployment rate of about two percentage points. In this paper we contrast the Colombian experience with other international experiences and analyse the empirical and theoretical implications of this change of definition using two types of quantitative estimation: the first contrasts the main characteristics of the categories classified under the new and old definitions of unemployment (employed, unemployed, and out of the labour force) using the EM algorithm; the second tests the implications for structural unemployment and its relation to the educational profile of unemployed persons, as well as the theoretical issues the ILO standards face in defining employment.
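The EM-based contrast of categories can be sketched in miniature. The following fits a two-component, one-dimensional Gaussian mixture by EM on synthetic data; the paper's actual application uses richer labour-market covariates and categories, so the data and names here are purely illustrative:

```python
import math
import random

def em_gmm_1d(data, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM."""
    data = sorted(data)
    n = len(data)
    # Initialise the two means from the quartiles of the sample.
    mu = [data[n // 4], data[3 * n // 4]]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            w = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step: re-estimate weights, means and variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-9)
    return pi, mu, var

# Synthetic attribute with two latent groups (hypothetical values).
random.seed(0)
sample = ([random.gauss(0.0, 1.0) for _ in range(300)]
          + [random.gauss(5.0, 1.0) for _ in range(300)])
pi, mu, var = em_gmm_1d(sample)
```

The E-step computes soft membership probabilities and the M-step re-estimates the group parameters from them, which is the same mechanics used to contrast characteristics across the old and new classification categories.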

Relevance: 60.00%

Abstract:

Accurately measured peptide masses can be used for large-scale protein identification from bacterial whole-cell digests as an alternative to tandem mass spectrometry (MS/MS), provided mass measurement errors of a few parts-per-million (ppm) are obtained. Fourier transform ion cyclotron resonance (FTICR) mass spectrometry (MS) routinely achieves such mass accuracy, either with internal calibration or by regulating the charge in the analyzer cell. We have developed a novel and automated method for internal calibration of liquid chromatography (LC)/FTICR data from whole-cell digests, using peptides in the sample identified by concurrent MS/MS together with ambient polydimethylcyclosiloxanes as internal calibrants in the mass spectra. The method reduced mass measurement error from 4.3 +/- 3.7 ppm to 0.3 +/- 2.3 ppm in an E. coli LC/FTICR dataset of 1000 MS and MS/MS spectra and is applicable to all analyses of complex protein digests by FTICR MS. Copyright (c) 2006 John Wiley & Sons, Ltd.
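The internal-calibration idea can be sketched with a deliberately simplified one-parameter multiplicative model; real FTICR calibration uses a frequency-domain calibration law, and the peak masses below are hypothetical:

```python
def ppm_error(measured, true):
    """Signed mass error in parts-per-million."""
    return (measured - true) / true * 1e6

def internal_calibrate(measured, calibrants):
    """Fit a one-parameter correction m_true ≈ c * m_meas by least squares
    over the calibrant peaks, then apply it to every measured peak."""
    c = (sum(m * t for m, t in calibrants)
         / sum(m * m for m, _ in calibrants))
    return [c * m for m in measured]

# Hypothetical peak list with a systematic +4 ppm miscalibration.
true_masses = [500.2471, 842.5094, 1045.5642, 1296.6853]
measured = [t * (1 + 4e-6) for t in true_masses]
# Two peaks are identified (e.g. via concurrent MS/MS) and act as calibrants.
calibrants = [(measured[0], true_masses[0]), (measured[2], true_masses[2])]
corrected = internal_calibrate(measured, calibrants)
errors = [abs(ppm_error(m, t)) for m, t in zip(corrected, true_masses)]
```

A purely multiplicative error is removed exactly by this model; the value of using identified sample peptides plus ambient calibrants is that the fit needs no separate calibration run.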

Relevance: 60.00%

Abstract:

The use of data reconciliation techniques can considerably reduce the inaccuracy of process data due to measurement errors. This in turn results in improved control system performance and process knowledge. Dynamic data reconciliation techniques are applied to a model-based predictive control scheme. It is shown through simulations on a chemical reactor system that the overall performance of the model-based predictive controller is enhanced considerably when data reconciliation is applied. The dynamic data reconciliation techniques used include a combined strategy for the simultaneous identification of outliers and systematic bias.
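For a single linear balance constraint, weighted least-squares data reconciliation has a closed form, x̂ = x − V a (aᵀV a)⁻¹ aᵀx. A minimal sketch with a hypothetical flow balance f1 + f2 = f3 (values and variances are made up):

```python
def reconcile(meas, var, a):
    """Weighted least-squares reconciliation of measurements `meas` with
    variances `var` subject to one linear constraint sum(a_i * x_i) = 0:
    x_hat_i = x_i - var_i * a_i * residual / sum(a_j^2 * var_j)."""
    residual = sum(ai * xi for ai, xi in zip(a, meas))
    s = sum(ai * ai * vi for ai, vi in zip(a, var))
    return [xi - vi * ai * residual / s for xi, ai, vi in zip(meas, a, var)]

# Hypothetical process flows: f1 + f2 = f3, each measured with error.
meas = [10.3, 5.1, 15.0]     # true values would be 10, 5, 15
var = [0.25, 0.25, 0.25]     # measurement error variances
a = [1.0, 1.0, -1.0]         # constraint f1 + f2 - f3 = 0
rec = reconcile(meas, var, a)
balance = rec[0] + rec[1] - rec[2]
```

The reconciled values satisfy the balance exactly while staying as close to the raw measurements as their variances allow; the outlier/bias identification mentioned in the abstract would sit on top of this step.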

Relevance: 60.00%

Abstract:

CO, O3, and H2O data in the upper troposphere/lower stratosphere (UTLS) measured by the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) on Canada’s SCISAT-1 satellite are validated using aircraft and ozonesonde measurements. In the UTLS, validation of chemical trace gas measurements is a challenging task due to small-scale variability in the tracer fields, strong gradients of the tracers across the tropopause, and scarcity of measurements suitable for validation purposes. Validation based on coincidences therefore suffers from geophysical noise. Two alternative methods for the validation of satellite data are introduced, which avoid the usual need for coincident measurements: tracer-tracer correlations, and vertical tracer profiles relative to tropopause height. Both are increasingly being used for model validation as they strongly suppress geophysical variability and thereby provide an “instantaneous climatology”. This allows comparison of measurements between non-coincident data sets, which yields information about the precision and a statistically meaningful error assessment of the ACE-FTS satellite data in the UTLS. By defining a trade-off factor, we show that the measurement errors can be reduced by including more measurements obtained over a wider longitude range into the comparison, despite the increased geophysical variability. Applying the methods then yields the following upper bounds to the relative differences in the mean found between the ACE-FTS and SPURT aircraft measurements in the upper troposphere (UT) and lower stratosphere (LS), respectively: for CO ±9% and ±12%, for H2O ±30% and ±18%, and for O3 ±25% and ±19%. The relative differences for O3 can be narrowed down by using a larger dataset obtained from ozonesondes, yielding a high bias in the ACE-FTS measurements of 18% in the UT and relative differences of ±8% for measurements in the LS.
When taking into account the smearing effect of the vertically limited spacing between measurements of the ACE-FTS instrument, the relative differences decrease by 5–15% around the tropopause, suggesting a vertical resolution of the ACE-FTS in the UTLS of around 1 km. The ACE-FTS hence offers unprecedented precision and vertical resolution for a satellite instrument, which will allow a new global perspective on UTLS tracer distributions.
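The tropopause-relative averaging idea can be sketched as follows; the profiles and numbers are synthetic, chosen only to show how referencing altitude to each profile's own tropopause removes the variability caused by tropopause movement:

```python
def tropopause_relative_mean(profiles, bins, width=1.0):
    """Average tracer values on a vertical grid defined relative to each
    profile's own tropopause height. `profiles` is a list of
    (tropopause_km, [(altitude_km, value), ...]); `bins` lists the lower
    edges of the tropopause-relative bins (km)."""
    sums = {b: [0.0, 0] for b in bins}
    for tp, samples in profiles:
        for alt, val in samples:
            rel = alt - tp             # altitude relative to the tropopause
            for b in bins:
                if b <= rel < b + width:
                    sums[b][0] += val
                    sums[b][1] += 1
    return {b: s / n for b, (s, n) in sums.items() if n}

# Toy profiles whose tracer depends only on tropopause-relative altitude,
# with tropopauses at 10 km and 11 km (all values hypothetical):
profiles = [
    (10.0, [(8.0 + 0.5 * i, (8.0 + 0.5 * i) - 10.0) for i in range(9)]),
    (11.0, [(9.0 + 0.5 * i, (9.0 + 0.5 * i) - 11.0) for i in range(9)]),
]
means = tropopause_relative_mean(profiles, [-2.0, -1.0, 0.0, 1.0])
```

Although the two toy profiles sample different absolute altitudes, their tropopause-relative means agree bin by bin, which is exactly why non-coincident data sets become comparable in this coordinate.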

Relevance: 60.00%

Abstract:

Unique residential history data with retrospective information on parental assets are used to study household wealth mobility in 141 villages in rural Bangladesh. Regression estimates of father–son correlations and analyses of intergenerational transition matrices show substantial persistence in wealth even when we correct for measurement errors in parental wealth. We do not find wealth mobility to be higher between periods of a person's life than between generations. We find that the process of household division plays an important role: sons who splinter off from the father's household experience greater (albeit downward) mobility in wealth. Despite significant occupational mobility across generations, its contribution to wealth mobility, net of human capital attainment of individuals, appears insignificant. Low wealth mobility in our data is primarily explained by intergenerational persistence in educational attainment.
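An intergenerational transition matrix of the kind analysed above can be sketched with a median split; the study itself uses finer wealth groups and corrects for measurement error, and the data below are toy values:

```python
def transition_matrix(fathers, sons):
    """2x2 intergenerational transition matrix between wealth halves,
    with groups defined by a within-generation median split. Row i,
    column j gives P(son in group j | father in group i)."""
    def halves(xs):
        cut = sorted(xs)[len(xs) // 2]
        return [0 if x < cut else 1 for x in xs]
    gf, gs = halves(fathers), halves(sons)
    counts = [[0, 0], [0, 0]]
    for i, j in zip(gf, gs):
        counts[i][j] += 1
    return [[c / sum(row) for c in row] for row in counts]

# Hypothetical father-son wealth pairs with perfect rank persistence:
tm = transition_matrix([1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0])
```

Mass on the diagonal measures persistence; an identity matrix means no mobility at all, while rows equal to the marginal distribution would mean perfect mobility.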

Relevance: 60.00%

Abstract:

We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium-correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods are likely to perform well. The robust methods are applied to forecasting US GDP using autoregressive models, and also to autoregressive models with factors extracted from a large dataset of macroeconomic variables. We consider forecasting performance over the Great Recession, and over an earlier more quiescent period.
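The contrast between an equilibrium-correction forecast and a robust device can be illustrated with deliberately simple stand-ins (not the paper's actual models): a forecast that reverts to the full-sample mean versus a random-walk forecast using only the last observation, applied to a noise-free series with an unanticipated location shift:

```python
def mean_reverting_forecast(y, rho=0.5):
    """Equilibrium-correction-style forecast: revert toward the full-sample
    mean, which is badly biased after an unmodelled location shift."""
    mu = sum(y) / len(y)
    return mu + rho * (y[-1] - mu)

def robust_forecast(y):
    """Robust device: a random-walk forecast that uses only the last
    observation, so it adapts one period after a location shift."""
    return y[-1]

# Toy AR(1) series whose mean shifts from 0 to 10 at t = 50 (no noise):
# after the shift, y_{t+1} = 5 + 0.5 * y_t.
y = [0.0] * 50
for _ in range(10):
    y.append(5.0 + 0.5 * y[-1])
next_true = 5.0 + 0.5 * y[-1]
err_eqcm = abs(mean_reverting_forecast(y) - next_true)
err_robust = abs(robust_forecast(y) - next_true)
```

The mean-reverting forecast drags the prediction back toward the pre-shift regime, while the robust device incurs only the one-step adjustment cost, which is the essence of the bias/variance trade-off the paper derives.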

Relevance: 60.00%

Abstract:

In chemical analyses performed by laboratories, one faces the problem of determining the concentration of a chemical element in a sample. In practice, this is handled with the so-called linear calibration model, which assumes that the errors associated with the independent variable are negligible compared with those of the response. In this work, a new linear calibration model is proposed in which the independent variables are subject to heteroscedastic measurement errors. A simulation study is carried out to verify some properties of the estimators derived for the new model, and the usual calibration model is also considered for comparison with the new approach. Three applications are presented to assess the performance of the new approach. Copyright (C) 2010 John Wiley & Sons, Ltd.
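A weighted least-squares version of the calibration step, with weights reflecting heteroscedastic measurement error, can be sketched as follows; the error model (standard deviation proportional to concentration) and all numbers are hypothetical, and the paper's model additionally treats errors in the independent variables:

```python
def wls_line(x, y, w):
    """Weighted least-squares fit y ≈ a + b*x with weights w = 1/variance."""
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    a = (sy - b * sx) / sw
    return a, b

# Hypothetical calibration data: signal vs concentration, noisier at
# high concentrations (heteroscedastic: sd proportional to concentration).
conc = [1.0, 2.0, 4.0, 8.0]
signal = [2.1, 4.0, 8.2, 15.9]                  # roughly signal = 2 * conc
weights = [1.0 / (0.05 * c) ** 2 for c in conc]
a, b = wls_line(conc, signal, weights)
unknown_conc = (9.8 - a) / b   # inverse prediction for an observed signal
```

The down-weighting of the noisy high-concentration points is what distinguishes this fit from ordinary least squares; the final line is the usual inverse-prediction step of calibration.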

Relevance: 60.00%

Abstract:

We analyse the finite-sample behaviour of two second-order bias-corrected alternatives to the maximum-likelihood estimator of the parameters in a multivariate normal regression model with general parametrization proposed by Patriota and Lemonte [A. G. Patriota and A. J. Lemonte, Bias correction in a multivariate regression model with general parametrization, Stat. Prob. Lett. 79 (2009), pp. 1655-1662]. The two finite-sample corrections we consider are the conventional second-order bias-corrected estimator and the bootstrap bias correction. We present numerical results comparing the performance of these estimators. Our results reveal that the analytical bias correction outperforms numerical bias corrections obtained from bootstrapping schemes.
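The two correction strategies can be illustrated on the simplest biased MLE, the normal variance estimator; this is a one-dimensional stand-in for the paper's multivariate setting:

```python
import random

def var_mle(xs):
    """Maximum-likelihood variance estimate (divides by n, hence biased)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def bootstrap_bias_corrected(xs, B=2000, seed=1):
    """Bootstrap bias correction: bias ≈ mean(theta_boot) - theta_hat,
    so the corrected estimate is 2*theta_hat - mean(theta_boot)."""
    rng = random.Random(seed)
    theta = var_mle(xs)
    boot = [var_mle([rng.choice(xs) for _ in xs]) for _ in range(B)]
    return 2 * theta - sum(boot) / B

def analytic_bias_corrected(xs):
    """Analytic correction: E[var_mle] = (n-1)/n * sigma^2, so rescale."""
    n = len(xs)
    return var_mle(xs) * n / (n - 1)

rng = random.Random(7)
xs = [rng.gauss(0.0, 1.0) for _ in range(10)]
theta = var_mle(xs)
bc = bootstrap_bias_corrected(xs)
ac = analytic_bias_corrected(xs)
```

Both corrections push the downward-biased MLE upward; the analytic version is exact and free of Monte Carlo noise here, mirroring the paper's finding that the analytical correction outperforms the bootstrap.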

Relevance: 60.00%

Abstract:

This paper derives the second-order biases of maximum likelihood estimates from a multivariate normal model where the mean vector and the covariance matrix have parameters in common. We show that the second-order bias can always be obtained by means of ordinary weighted least-squares regressions. We conduct simulation studies which indicate that the bias correction scheme yields nearly unbiased estimators. (C) 2009 Elsevier B.V. All rights reserved.
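A one-dimensional analogue of analytic second-order bias correction: for the exponential rate, E[λ̂] = nλ/(n−1), so multiplying the MLE by (n−1)/n removes the leading bias. A small Monte Carlo check in a toy setting (not the paper's multivariate normal model):

```python
import random

def exp_rate_mle(xs):
    """MLE of the exponential rate: n / sum(x)."""
    return len(xs) / sum(xs)

def exp_rate_bias_corrected(xs):
    """First-order bias correction: E[rate_mle] = n/(n-1) * rate,
    so multiply by (n-1)/n."""
    n = len(xs)
    return exp_rate_mle(xs) * (n - 1) / n

rng = random.Random(42)
n, reps, lam = 8, 4000, 2.0
mle_mean = 0.0
corr_mean = 0.0
for _ in range(reps):
    xs = [rng.expovariate(lam) for _ in range(n)]
    mle_mean += exp_rate_mle(xs) / reps
    corr_mean += exp_rate_bias_corrected(xs) / reps
```

With n = 8 the uncorrected MLE overshoots the true rate by roughly n/(n−1) ≈ 14%, while the corrected estimator is nearly unbiased, mirroring the simulation evidence reported above.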

Relevance: 60.00%

Abstract:

A literature survey and a theoretical study were performed to characterize residential chimney conditions for flue gas flow measurements. The focus is on Pitot-static probes, to give a sufficient basis for the development and calibration of a velocity pressure averaging probe suitable for continuous dynamic (i.e. non-steady-state) measurement of the low flow velocities present in residential chimneys. The flow conditions do not meet the requirements set in ISO 10780 and ISO 3966 for Pitot-static probe measurements, so those methods and their stated uncertainties are not valid. The flow velocities in residential chimneys from a heating boiler under normal operating conditions are shown to be so low that in some conditions they void the inviscid-flow assumption justifying the use of the quadratic Bernoulli equation. A non-linear, Reynolds-number-dependent calibration coefficient correcting for the viscous effects is needed to avoid significant measurement errors. The wide range of flow velocity during normal boiler operation also means that the flow type changes from laminar, across the laminar-to-turbulent transition region, to fully turbulent flow, resulting in significant changes of the velocity profile during dynamic measurements. In addition, the short duct lengths (and changes of flow direction and duct shape) used in practice mean that the measurements are made in the hydrodynamic entrance region, where the flow velocity profiles are most likely neither symmetrical nor fully developed. A measurement method insensitive to velocity profile changes is thus needed if the flow velocity profile cannot otherwise be determined or predicted with reasonable accuracy for the whole measurement range.
Because of particulate matter and condensing fluids in the flue gas it is beneficial if the probe can be constructed so that it can easily be taken out for cleaning, and equipped with a locking mechanism to always ensure the same alignment in the duct without affecting the calibration. The literature implies that there may be a significant time lag in the measurements of low flow rates due to viscous effects in the internal impact pressure passages of Pitot probes, and the significance in the discussed application should be studied experimentally. The measured differential pressures from Pitot-static probes in residential chimney flows are so low that the calibration and given uncertainties of commercially available pressure transducers are not adequate. The pressure transducers should be calibrated specifically for the application, preferably in combination with the probe, and the significance of all different error sources should be investigated carefully. Care should be taken also with the temperature measurement, e.g. with averaging of several sensors, as significant temperature gradients may be present in flue gas ducts.
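The Reynolds-number-dependent calibration described above couples the coefficient to the velocity being measured, so the velocity must be solved iteratively. A sketch, in which the C(Re) law is an invented monotone correction standing in for an experimentally determined calibration curve (probe diameter, gas properties, and the pressure value are also assumptions):

```python
import math

def pitot_velocity(dp, rho=1.2, nu=1.5e-5, d=0.008):
    """Iteratively solve v = C(Re) * sqrt(2*dp/rho), where the calibration
    coefficient C depends on the probe Reynolds number Re = v*d/nu.
    dp: differential pressure (Pa), rho: gas density (kg/m^3),
    nu: kinematic viscosity (m^2/s), d: probe diameter (m)."""
    def C(re):
        # Hypothetical viscous correction: C -> 1 at high Re, C < 1 at low Re.
        return 1.0 / math.sqrt(1.0 + 10.0 / max(re, 1.0))

    v = math.sqrt(2 * dp / rho)        # quadratic Bernoulli first guess
    for _ in range(50):                # fixed-point iteration on v
        v_new = C(v * d / nu) * math.sqrt(2 * dp / rho)
        if abs(v_new - v) < 1e-12:
            break
        v = v_new
    return v

v = pitot_velocity(0.5)   # a low differential pressure typical of the problem
```

The plain Bernoulli value overestimates the velocity at low Reynolds numbers; the fixed-point loop converges in a few iterations because C varies slowly with Re.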

Relevance: 60.00%

Abstract:

GPS tracking of mobile objects provides spatial and temporal data for a broad range of applications, including traffic management and control and transportation routing and planning. Previous transport research has focused on GPS tracking data as an appealing alternative to travel diaries. Moreover, GPS-based data are gradually becoming a cornerstone for real-time traffic management. Tracking data of vehicles from GPS devices are, however, susceptible to measurement errors – a neglected issue in transport research. By conducting a randomized experiment, we assess the reliability of GPS-based traffic data on geographical position, velocity, and altitude for three types of vehicles: bike, car, and bus. We find the geographical positioning reliable, but with an error greater than postulated by the manufacturer and a non-negligible risk of aberrant positioning. Velocity is slightly underestimated, whereas altitude measurements are unreliable.
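Positioning reliability of the kind assessed above can be summarised from repeated fixes around a surveyed reference point; the coordinates and the 15 m aberrance threshold below are hypothetical choices for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def position_error_stats(fixes, ref, aberrant_m=15.0):
    """Mean horizontal error and share of fixes beyond `aberrant_m` metres."""
    errs = [haversine_m(lat, lon, ref[0], ref[1]) for lat, lon in fixes]
    mean_err = sum(errs) / len(errs)
    aberrant = sum(e > aberrant_m for e in errs) / len(errs)
    return mean_err, aberrant

# Hypothetical fixes scattered around a surveyed reference point.
ref = (59.8586, 17.6389)
fixes = [(59.8586, 17.6389), (59.85865, 17.6389),
         (59.8586, 17.639), (59.8588, 17.6389)]
mean_err, share = position_error_stats(fixes, ref)
```

Separating the mean error from the share of aberrant fixes matters because a handful of gross outliers can coexist with an otherwise reliable mean, which is exactly the pattern reported for geographical positioning.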

Relevance: 60.00%

Abstract:

The accurate measurement of a vehicle’s velocity is an essential feature of adaptive vehicle-activated sign systems. Since the velocities of the vehicles are acquired from a continuous-wave Doppler radar, data collection is challenging. Data accuracy is sensitive to the calibration of the radar on the road; however, clear methodologies for in-field calibration have not been carefully established. The signs are often installed by subjective judgment, which results in measurement errors. This paper develops a calibration method based on mining the collected data and matching individual vehicles travelling between two radars. The data were prepared in two ways: by cleaning and by reconstruction. The results showed that the proposed correction factor derived from the cleaned data corresponded well with the factor determined experimentally on site. In addition, the proposed factor showed superior performance to the one derived from the reconstructed data.
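The matching-and-correction idea can be sketched as follows: match vehicles between two radars by the expected travel lag, then fit a single least-squares correction factor. The timestamps, speeds, lag, and tolerance are made-up values:

```python
def match_vehicles(radar_a, radar_b, lag, tol=2.0):
    """Match vehicles seen at radar A to radar B by expected travel lag (s).
    Records are (timestamp_s, speed_kmh); returns matched speed pairs."""
    pairs = []
    for ta, va in radar_a:
        for tb, vb in radar_b:
            if abs((tb - ta) - lag) <= tol:
                pairs.append((va, vb))
                break
    return pairs

def correction_factor(pairs):
    """Least-squares factor k minimising sum((vb - k*va)^2), i.e. the
    factor mapping radar A speeds onto the trusted radar B speeds."""
    return (sum(va * vb for va, vb in pairs)
            / sum(va * va for va, _ in pairs))

radar_a = [(0.0, 48.0), (30.0, 59.0), (70.0, 41.0)]   # miscalibrated radar
radar_b = [(10.1, 50.0), (39.8, 61.5), (80.2, 42.8)]  # reference radar
pairs = match_vehicles(radar_a, radar_b, lag=10.0)
k = correction_factor(pairs)
```

In practice the matching step is where the cleaning-versus-reconstruction choice discussed in the abstract matters: spurious matches from poorly prepared data bias the fitted factor.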

Relevance: 60.00%

Abstract:

Purpose
Several studies have examined the association between polyunsaturated fatty acids and prostate cancer risk. We evaluated the evidence on the association between the essential polyunsaturated fatty acid, known as α-linolenic acid, and the risk of prostate cancer in humans.
Materials and Methods
We comprehensively reviewed published studies on the association between α-linolenic acid and the risk of prostate cancer using MEDLINE.
Results
A number of studies have shown a positive association between dietary, plasma or red blood cell levels of α-linolenic acid and prostate cancer. Other studies have demonstrated either no association or a negative association. The limitations of these studies include the assumption that dietary or plasma α-linolenic acid levels are positively associated with prostate tissue α-linolenic acid levels, and measurement errors of dietary, plasma and red blood cell α-linolenic acid levels.
Conclusions
More research is needed in this area before it can be concluded that there is an association between α-linolenic acid and prostate cancer.

Relevance: 60.00%

Abstract:

A major challenge facing freshwater ecologists and managers is the development of models that link stream ecological condition to catchment scale effects, such as land use. Previous attempts to make such models have followed two general approaches. The bottom-up approach employs mechanistic models, which can quickly become too complex to be useful. The top-down approach employs empirical models derived from large data sets, and has often suffered from large amounts of unexplained variation in stream condition.

We believe that the lack of success of both modelling approaches may be at least partly explained by scientists considering too wide a breadth of catchment type. Thus, we believe that by stratifying large sets of catchments into groups of similar types prior to modelling, both types of models may be improved. This paper describes preliminary work using a Bayesian classification software package, ‘Autoclass’ (Cheeseman and Stutz 1996) to create classes of catchments within the Murray Darling Basin based on physiographic data.

Autoclass uses a model-based classification method that employs finite mixture modelling and trades off model fit versus complexity, leading to a parsimonious solution. The software provides information on the posterior probability that the classification is ‘correct’ and also probabilities for alternative classifications. The importance of each attribute in defining the individual classes is calculated and presented, assisting description of the classes. Each case is ‘assigned’ to a class based on membership probability, but the probability of membership of other classes is also provided. This feature deals very well with cases that do not fit neatly into a larger class. Lastly, Autoclass requires the user to specify the measurement error of continuous variables.

Catchments were derived from the Australian digital elevation model. Physiographic data were derived from national spatial data sets. There was very little information on measurement errors for the spatial data, and so a conservative error of 5% of the data range was adopted for all continuous attributes. The incorporation of uncertainty into spatial data sets remains a research challenge.

The results of the classification were very encouraging. The software found nine classes of catchments in the Murray Darling Basin. The classes grouped together geographically, and followed altitude and latitude gradients, despite the fact that these variables were not included in the classification. Descriptions of the classes reveal very different physiographic environments, ranging from dry and flat catchments (i.e. lowlands), through to wet and hilly catchments (i.e. mountainous areas). Rainfall and slope were two important discriminators between classes. These two attributes, in particular, will affect the ways in which the stream interacts with the catchment, and can thus be expected to modify the effects of land use change on ecological condition. Thus, realistic models of the effects of land use change on streams would differ between the different types of catchments, and sound management practices will differ.

A small number of catchments were assigned to their primary class with relatively low probability. These catchments lie on the boundaries of groups of catchments, with the second most likely class being an adjacent group. The locations of these ‘uncertain’ catchments show that the Bayesian classification dealt well with cases that do not fit neatly into larger classes.

Although the results are intuitive, we cannot yet assess whether the classifications described in this paper would assist the modelling of catchment scale effects on stream ecological condition. It is most likely that catchment classification and modelling will be an iterative process, where the needs of the model are used to guide classification, and the results of classifications used to suggest further refinements to models.
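The soft class membership that Autoclass reports, including the effect of the user-specified measurement error, can be sketched for a single attribute; the class parameters below are hypothetical:

```python
import math

def membership_probs(x, classes, meas_var=0.0):
    """Posterior probability that observation x belongs to each class of a
    1-D Gaussian mixture. The measurement error variance is added to each
    class variance, analogous to the error Autoclass asks the user to give.
    `classes` is a list of (weight, mean, variance) tuples."""
    w = [p * math.exp(-(x - m) ** 2 / (2 * (v + meas_var)))
         / math.sqrt(2 * math.pi * (v + meas_var))
         for p, m, v in classes]
    s = sum(w)
    return [wi / s for wi in w]

# Two hypothetical catchment classes along one attribute (e.g. mean slope):
classes = [(0.5, 2.0, 0.25), (0.5, 4.0, 0.25)]  # (weight, mean, variance)
clear = membership_probs(2.0, classes)           # deep inside class 0
boundary = membership_probs(3.0, classes)        # midway between classes
smoothed = membership_probs(2.0, classes, meas_var=1.0)
```

Adding measurement-error variance flattens the posterior, so a case near a class boundary is reported with genuinely uncertain membership rather than being forced into one class, which is how the 'uncertain' boundary catchments above are handled.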

Relevance: 60.00%

Abstract:

In this paper we examine the geometrically constrained optimization approach to localization with hybrid bearing (angle of arrival, AOA) and time difference of arrival (TDOA) sensors. We formulate a constraint on the measurement errors, which is then used along with constraint-based optimization tools to estimate the maximum-likelihood values of the errors given an appropriate cost function. In particular, we focus on deriving a localization algorithm for stationary target localization in so-called adverse localization geometries, where the relative positioning of the sensors and the target does not readily permit accurate or convergent localization using traditional approaches. We illustrate this point via simulation and compare our approach to a number of different techniques discussed in the literature.
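A minimal sketch of hybrid AOA/TDOA localization, minimising a combined squared-residual cost over a grid; the paper derives a constrained-optimization estimator, so the grid search, sensor layout, and weighting factor here are illustrative assumptions:

```python
import math

def hybrid_cost(p, aoa_sensors, tdoa_pairs, c=343.0):
    """Sum of squared AOA and TDOA residuals at candidate position p.
    aoa_sensors: [((x, y), bearing_rad)]; tdoa_pairs: [((x1, y1), (x2, y2),
    dt_s)]; c is the propagation speed (here sound, m/s)."""
    cost = 0.0
    for (sx, sy), bearing in aoa_sensors:
        pred = math.atan2(p[1] - sy, p[0] - sx)
        # Wrap the angular residual into (-pi, pi].
        diff = math.atan2(math.sin(pred - bearing), math.cos(pred - bearing))
        cost += diff ** 2
    for s1, s2, dt in tdoa_pairs:
        d1 = math.hypot(p[0] - s1[0], p[1] - s1[1])
        d2 = math.hypot(p[0] - s2[0], p[1] - s2[1])
        cost += ((d1 - d2) / c - dt) ** 2 * 1e4  # assumed weighting factor
    return cost

def grid_search(aoa, tdoa, lo=-50.0, hi=50.0, step=0.5):
    """Brute-force minimiser of the hybrid cost over a square grid."""
    best, best_p = float("inf"), None
    x = lo
    while x <= hi:
        y = lo
        while y <= hi:
            cst = hybrid_cost((x, y), aoa, tdoa)
            if cst < best:
                best, best_p = cst, (x, y)
            y += step
        x += step
    return best_p

# Hypothetical scene: target at (10, 20), one AOA sensor at the origin and
# one TDOA sensor pair on the x-axis (noise-free measurements).
aoa = [((0.0, 0.0), math.atan2(20.0, 10.0))]
s1, s2 = (30.0, 0.0), (-30.0, 0.0)
dt = (math.hypot(10.0 - 30.0, 20.0) - math.hypot(10.0 + 30.0, 20.0)) / 343.0
tdoa = [(s1, s2, dt)]
est = grid_search(aoa, tdoa)
```

The bearing alone fixes a half-line and the TDOA fixes a hyperbola branch; combining them pins down the target even in geometries where either sensor type alone would be ambiguous, which is the motivation for the hybrid formulation.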