165 results for Measurement error models
Abstract:
This article is concerned with evaluating different parameter estimation strategies for a multiple linear regression model. To estimate the model parameters, data from a clinical trial were used in which the aim was to verify whether the mechanical test of the maximum force property (EM-FM) is associated with femoral mass, femoral diameter, and the experimental group of ovariectomized rats of the strain Rattus norvegicus albinus, Wistar variety. Three methodologies are compared for estimating the model parameters: the classical methodology, based on the least squares method; the Bayesian methodology, based on Bayes' theorem; and the bootstrap method, based on resampling processes.
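To make the comparison concrete, here is a minimal Python sketch of the classical (least squares) and bootstrap strategies on simulated data standing in for the trial; all variable names and numeric values are illustrative, not the study's. A Bayesian fit would add a prior and sample the posterior instead of resampling cases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data standing in for the rat femur trial: y is maximum
# force (EM-FM), X holds femoral mass, femoral diameter and a group code.
n = 60
X = np.column_stack([np.ones(n),
                     rng.normal(0.8, 0.1, n),   # femoral mass (g), assumed
                     rng.normal(4.0, 0.3, n),   # femoral diameter (mm), assumed
                     rng.integers(0, 2, n)])    # experimental group
beta_true = np.array([5.0, 40.0, 8.0, -3.0])
y = X @ beta_true + rng.normal(0, 2.0, n)

# Classical estimate: ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Case-resampling bootstrap: refit OLS on resampled (X, y) pairs.
B = 2000
boot = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)

print("OLS:", beta_ols.round(2))
print("bootstrap SE:", boot.std(axis=0, ddof=1).round(2))
```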
Abstract:
This paper deals with asymptotic results on a multivariate ultrastructural errors-in-variables regression model with equation errors. Sufficient conditions for attaining consistent estimators for model parameters are presented. Asymptotic distributions for the line regression estimators are derived. Applications to the elliptical class of distributions with two error assumptions are presented. The model generalizes previous results aimed at univariate scenarios. (C) 2010 Elsevier Inc. All rights reserved.
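For orientation, a common textbook form of an ultrastructural errors-in-variables model with equation error, written here in the univariate case and in generic notation rather than the paper's own, is:

```latex
\begin{aligned}
  y_i &= \alpha + \beta x_i + q_i,  && \text{(equation error } q_i\text{)} \\
  X_i &= x_i + u_i,                 && \text{(measurement error } u_i\text{)} \\
  x_i &\sim (\mu_i,\ \sigma_x^2) \ \text{independent}, && i = 1, \dots, n.
\end{aligned}
```

The structural model corresponds to all means mu_i coinciding and the functional model to sigma_x^2 = 0; the paper's multivariate setting generalizes this scheme.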
Abstract:
Objective: This study aimed to assess the relative validity of a food frequency questionnaire (FFQ), previously validated to measure usual intakes in adults, for measuring dietary intakes in children 5 to 10 y of age. Methods: Dietary intakes were measured using an FFQ and a 3-d dietary record. Healthy children, 5 to 10 y old (n = 151), were recruited from public schools and asked to answer the questions in the FFQ and to provide non-consecutive 3-d dietary records based on reported estimated portion sizes. Paired-sample t tests and Pearson's correlation coefficients were conducted to determine whether the two instruments reported similar values for energy and nutrients. The agreement of quartile categorization between the two instruments was also examined. Results: Estimated energy and nutrient intakes derived from the FFQ were significantly higher than those derived from 3-d dietary records. As expected, Pearson's correlations increased after adjusting for residual measurement error, presumably due to exclusion of the high within-person variability in intake of these nutrients. Moderate to high (r > 0.50) correlation coefficients were verified for some nutrients such as calcium, folate, vitamin B-12, vitamin A, and vitamin C. Conclusion: This FFQ, originally developed for use in adults, appears to overestimate usual energy and nutrient intakes in children 5 to 10 y of age. Further work is necessary to conduct a calibration study to establish adequate portion sizes before instrument adoption in this population. (c) 2008 Elsevier Inc. All rights reserved.
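The residual-measurement-error adjustment mentioned in the results is typically the classical de-attenuation of a correlation coefficient using the within- and between-person variance components from the replicate records. A minimal Python sketch, with purely illustrative numbers (none taken from this study):

```python
import numpy as np

def deattenuate(r_obs: float, s2_within: float, s2_between: float,
                n_reps: int) -> float:
    """Correct an observed correlation for within-person variability.

    Standard de-attenuation: the correlation observed against the mean
    of n_reps replicate dietary records is scaled up by
    sqrt(1 + lambda/n_reps), lambda = within/between variance ratio.
    """
    lam = s2_within / s2_between
    return r_obs * np.sqrt(1.0 + lam / n_reps)

# Illustrative values only (not taken from the study):
print(deattenuate(r_obs=0.42, s2_within=1.8, s2_between=1.0, n_reps=3))
```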
Abstract:
The Natural History of Human Papillomavirus (HPV) Infection in Men: The HIM Study is a prospective multi-center cohort study that, among other factors, analyzes participants' diet. A parallel cross-sectional study was designed to evaluate the validity and reproducibility of the quantitative food frequency questionnaire (QFFQ) used in the Brazilian center of the HIM Study. For this, a convenience subsample of 98 men aged 18 to 70 years from the HIM Study in Brazil answered three 54-item QFFQs and three 24-hour recall interviews, with 6-month intervals between them (data collection January to September 2007). A Bland-Altman analysis indicated that the difference between instruments was dependent on the magnitude of the intake for energy and most nutrients included in the validity analysis, with the exception of carbohydrates, fiber, polyunsaturated fat, vitamin C, and vitamin E. The correlation between the QFFQ and the 24-hour recall for the deattenuated and energy-adjusted data ranged from 0.05 (total fat) to 0.57 (calcium). For the energy and nutrient intakes included in the validity analysis, 33.5% of participants on average were correctly classified into quartiles, and the average weighted kappa of 0.26 indicates reasonable agreement. The intraclass correlation coefficients for all nutrients were greater than 0.40 in the reproducibility analysis. The QFFQ demonstrated good reproducibility and acceptable validity. The results support the use of this instrument in the HIM Study. J Am Diet Assoc. 2011;111:1045-1051.
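A Bland-Altman analysis of this kind can be reproduced in a few lines: compute the per-subject differences and means, report the bias and 95% limits of agreement, and regress differences on means to test whether disagreement depends on intake magnitude. The sketch below uses simulated values, not the study's data:

```python
import numpy as np

def bland_altman(qffq: np.ndarray, recall: np.ndarray):
    """Bias and 95% limits of agreement between two dietary instruments.

    The regression of differences on means flags the magnitude-dependent
    disagreement reported in the abstract (illustrative sketch only).
    """
    diff = qffq - recall
    mean = (qffq + recall) / 2.0
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    slope, intercept = np.polyfit(mean, diff, 1)  # trend of diff vs mean
    return bias, (bias - half_width, bias + half_width), slope

rng = np.random.default_rng(1)
recall = rng.normal(2200, 400, 98)              # 24-hour recall (kcal)
qffq = recall * 1.1 + rng.normal(0, 150, 98)    # magnitude-dependent bias
print(bland_altman(qffq, recall))
```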
Abstract:
In chemical analyses performed by laboratories, one faces the problem of determining the concentration of a chemical element in a sample. In practice, this problem is handled with the so-called linear calibration model, which assumes that the errors associated with the independent variables are negligible compared with the errors in the response variable. In this work, a new linear calibration model is proposed, assuming that the independent variables are subject to heteroscedastic measurement errors. A simulation study is carried out to verify some properties of the estimators derived for the new model, and the usual calibration model is also considered for comparison with the new approach. Three applications are considered to verify the performance of the new approach. Copyright (C) 2010 John Wiley & Sons, Ltd.
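As a rough illustration of the calibration-and-inversion step (not the paper's estimators), a weighted least squares fit lets each calibration point carry its own error variance, after which an unknown concentration is recovered by inverse prediction; all numbers below are made up:

```python
import numpy as np

# Minimal sketch: weighted least squares calibration line with per-point
# variances to accommodate heteroscedasticity, then inverse prediction.
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])         # reference concentrations
y = np.array([0.9, 1.8, 3.9, 8.2, 15.8])        # instrument responses
var = np.array([0.01, 0.01, 0.04, 0.09, 0.25])  # error variance per point

W = np.diag(1.0 / var)
A = np.column_stack([np.ones_like(x), x])
alpha, beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

y_new = 6.1                                     # response of a new sample
print("estimated concentration:", (y_new - alpha) / beta)
```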
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are strongly correlated with one another, and as a consequence part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies how much new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II, and its error is totally masked. In other words, such a measurement brings no innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered; the total gross error of that measurement is then composed. Instead of the classical normalized measurement residual amplitude, the corresponding normalized composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
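The innovation index itself is specific to this paper, but it builds on standard weighted-least-squares residual analysis, in which a measurement whose residual variance vanishes (a critical measurement) can never be flagged. A small Python sketch of that baseline residual test, with illustrative matrices:

```python
import numpy as np

# H is the measurement Jacobian, R the error covariance, z the
# measurements. For a critical measurement the diagonal entry of the
# residual covariance vanishes, so its gross error is fully masked.
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
R = np.diag([0.01, 0.02, 0.01])
z = np.array([1.02, 2.10, 0.95])

G = H.T @ np.linalg.inv(R) @ H                  # gain matrix
x_hat = np.linalg.solve(G, H.T @ np.linalg.inv(R) @ z)
r = z - H @ x_hat                               # residuals
Omega = R - H @ np.linalg.inv(G) @ H.T          # residual covariance
r_norm = np.abs(r) / np.sqrt(np.diag(Omega))    # normalized residuals
print(r_norm)                                   # flag if > ~3
```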
Abstract:
The purpose of this study was to develop and validate equations to estimate the aboveground phytomass of a 30-year-old plot of Atlantic Forest. In two plots of 100 m², a total of 82 trees were cut down at ground level. For each tree, height and diameter were measured. Leaves and woody material were separated in order to determine their fresh weights in field conditions. Samples of each fraction were oven-dried at 80 °C to constant weight to determine their dry weight. Tree data were divided into two random samples. One sample was used for the development of the regression equations, and the other for validation. The models were developed using simple linear regression analysis, where the dependent variable was the dry mass and the independent variables were height (h), diameter (d), and d²h. The validation was carried out using the Pearson correlation coefficient, the paired Student's t test, and the standard error of estimation. The best equations to estimate aboveground phytomass were: ln(DW) = -3.068 + 2.522 ln(d) (r² = 0.91; s_y/x = 0.67) and ln(DW) = -3.676 + 0.951 ln(d²h) (r² = 0.94; s_y/x = 0.56).
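The two fitted equations can be applied directly; the sketch below simply back-transforms from the log scale. Units are assumed to be those of the field measurements, and note that exponentiating a log-linear fit without a bias-correction factor slightly underestimates the mean, a refinement the abstract does not address:

```python
import math

# Direct use of the two fitted allometric equations from the abstract to
# predict tree dry weight DW from diameter d and height h.
def dw_from_d(d: float) -> float:
    return math.exp(-3.068 + 2.522 * math.log(d))

def dw_from_d2h(d: float, h: float) -> float:
    return math.exp(-3.676 + 0.951 * math.log(d ** 2 * h))

print(dw_from_d(10.0), dw_from_d2h(10.0, 12.0))
```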
Abstract:
PHENIX has measured the e⁺e⁻ pair continuum in √s_NN = 200 GeV Au+Au and p+p collisions over a wide range of mass and transverse momenta. The e⁺e⁻ yield is compared to the expectations from hadronic sources, based on PHENIX measurements. In the intermediate-mass region, between the masses of the φ and the J/ψ meson, the yield is consistent with expectations from correlated cc̄ production, although other mechanisms are not ruled out. In the low-mass region, below the φ, the p+p inclusive mass spectrum is well described by known contributions from light meson decays. In contrast, the Au+Au minimum-bias inclusive mass spectrum in this region shows an enhancement by a factor of 4.7 ± 0.4(stat) ± 1.5(syst) ± 0.9(model). At low mass (m_ee < 0.3 GeV/c²) and high p_T (1 < p_T < 5 GeV/c) an enhanced e⁺e⁻ pair yield is observed that is consistent with production of virtual direct photons. This excess is used to infer the yield of real direct photons. In central Au+Au collisions, the excess of the direct photon yield over the p+p yield is exponential in p_T, with inverse slope T = 221 ± 19(stat) ± 19(syst) MeV. Hydrodynamical models with initial temperatures ranging from T_init ≃ 300-600 MeV at times of 0.6-0.15 fm/c after the collision are in qualitative agreement with the direct photon data in Au+Au. For low p_T < 1 GeV/c the low-mass region shows a further significant enhancement that increases with centrality and has an inverse slope of T ≃ 100 MeV. Theoretical models underpredict the low-mass, low-p_T enhancement.
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with that of the well-known Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves for these algorithms in simplified situations and compare their performances.
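For reference, the Kullback-Leibler divergence used here as the generalization measure is, for discrete distributions, a one-liner (minimal sketch; the distributions below are illustrative):

```python
import numpy as np

def kl_divergence(p, q) -> float:
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0          # terms with p = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

print(kl_divergence([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]))
```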
Abstract:
High-precision measurements of the differential cross sections for π⁰ photoproduction at forward angles for two nuclei, ¹²C and ²⁰⁸Pb, have been performed for incident photon energies of 4.9-5.5 GeV to extract the π⁰ → γγ decay width. The experiment was done at Jefferson Lab using the Hall B photon tagger and a high-resolution multichannel calorimeter. The π⁰ → γγ decay width was extracted by fitting the measured cross sections using recently updated theoretical models for the process. The resulting value for the decay width is Γ(π⁰ → γγ) = 7.82 ± 0.14(stat) ± 0.17(syst) eV. With the 2.8% total uncertainty, this result is a factor of 2.5 more precise than the current Particle Data Group average of this fundamental quantity, and it is consistent with current theoretical predictions.
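As a quick arithmetic check, the quoted statistical and systematic uncertainties combined in quadrature do reproduce the stated 2.8% total:

```python
from math import hypot

# Quadrature combination of the quoted uncertainties on the decay width.
gamma, stat, syst = 7.82, 0.14, 0.17  # eV
print(f"{hypot(stat, syst) / gamma:.1%}")  # -> 2.8%
```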
Abstract:
This paper describes a new and simple method to determine the molecular weight of proteins in dilute solution, with an error smaller than ~10%, by using the experimental data of a single small-angle X-ray scattering (SAXS) curve measured on a relative scale. This procedure does not require the measurement of SAXS intensity on an absolute scale and does not involve a comparison with another SAXS curve determined from a known standard protein. The proposed procedure can be applied to monodisperse systems of proteins in dilute solution, either in monomeric or multimeric state, and it has been successfully tested on SAXS data experimentally determined for proteins with known molecular weights. It is shown here that the molecular weights determined by this procedure deviate from the known values by less than 10% in each case, and the average error for the test set of 21 proteins was 5.3%. Importantly, this method allows for an unambiguous determination of the multimeric state of proteins with known molecular weights.
Abstract:
Here, I investigate the use of Bayesian updating rules applied to modeling how social agents change their minds in the case of continuous opinion models. Given another agent's statement about the continuous value of a variable, we will see that interesting dynamics emerge when an agent assigns a likelihood to that value that is a mixture of a Gaussian and a uniform distribution. This represents the idea that the other agent might have no idea what is being talked about. The effect of updating only the first moment of the distribution will be studied, and we will see that this generates results similar to those of the bounded confidence models. On also updating the second moment, several different opinions always survive in the long run, as agents become more stubborn with time. However, depending on the probability of error and initial uncertainty, those opinions might be clustered around a central value.
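A minimal sketch of one such update step in Python: the agent's current belief is summarized by its first two moments, the likelihood is the stated Gaussian-uniform mixture, and only the moments of the posterior mixture are kept. Parameter names and values are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm

def update_opinion(mu, var, x_j, p=0.8, noise_var=0.1, u_range=1.0):
    """One Bayesian opinion update against a Gaussian+uniform mixture.

    Prior over the true value: N(mu, var). The other agent's statement
    x_j is informative (Gaussian around the truth, variance noise_var)
    with probability p, or pure noise (uniform over u_range) otherwise.
    Returns the first two posterior moments (illustrative sketch).
    """
    # Posterior probability that the statement was informative:
    like_g = norm.pdf(x_j, mu, np.sqrt(var + noise_var))
    like_u = 1.0 / u_range
    w = p * like_g / (p * like_g + (1 - p) * like_u)

    # Moments conditional on "informative": standard Gaussian update.
    k = var / (var + noise_var)
    mu_g, var_g = mu + k * (x_j - mu), (1 - k) * var

    # Moments of the posterior mixture (what the agent keeps):
    mu_new = w * mu_g + (1 - w) * mu
    var_new = (w * (var_g + mu_g**2) + (1 - w) * (var + mu**2)) - mu_new**2
    return mu_new, var_new

print(update_opinion(mu=0.5, var=0.04, x_j=0.7))
```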
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1 the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, in Stage 2 the suspicious parameters are estimated, in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator which augments the V-θ state vector to include the suspicious parameters. Stage 3 validates the estimates obtained in Stage 2 and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Quebec TransEnergie network.
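Stage 1's Identification Index, as defined above, is simple to compute once normalized residuals are available; a literal Python rendering (threshold and residual values illustrative):

```python
def identification_index(adjacent_residuals, threshold=3.0):
    """Stage-1 Identification Index of a branch, per the abstract's
    definition: the fraction of measurements adjacent to the branch
    whose normalized residuals exceed the threshold."""
    flagged = sum(1 for r in adjacent_residuals if abs(r) > threshold)
    return flagged / len(adjacent_residuals)

# Illustrative normalized residuals of measurements around one branch:
print(identification_index([0.4, 3.8, 5.1, 1.2, 4.4]))  # -> 0.6
```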
Abstract:
In this paper, a novel wire-mesh sensor based on permittivity (capacitance) measurements is applied to generate images of the phase fraction distribution and investigate the flow of viscous oil and water in a horizontal pipe. Phase fraction values were calculated from the raw data delivered by the wire-mesh sensor using different mixture permittivity models. Furthermore, these data were validated against quick-closing valve measurements. Investigated flow patterns were dispersion of oil in water (Do/w) and dispersion of oil in water and water in oil (Do/w&w/o). The Maxwell-Garnett mixing model is better suited for the Do/w flow pattern and the logarithmic model for the Do/w&w/o flow pattern. Images of the time-averaged cross-sectional oil fraction distribution along with axial slice images were used to visualize and disclose some details of the flow.
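Both mixing models named above can be inverted in closed form to recover the dispersed-phase fraction from the measured mixture permittivity; a hedged sketch with illustrative permittivity values (the sensor's actual processing chain is more involved):

```python
import math

# Inverting two standard permittivity mixing models to recover the
# dispersed-phase fraction. eps_c: continuous phase, eps_d: dispersed.
def phase_fraction_maxwell_garnett(eps_eff, eps_c, eps_d):
    return ((eps_eff - eps_c) * (eps_d + 2 * eps_c)) / \
           ((eps_eff + 2 * eps_c) * (eps_d - eps_c))

def phase_fraction_logarithmic(eps_eff, eps_c, eps_d):
    return math.log(eps_eff / eps_c) / math.log(eps_d / eps_c)

# Example: water-continuous (eps ~ 80) with viscous oil (eps ~ 3):
print(phase_fraction_maxwell_garnett(40.0, 80.0, 3.0))
print(phase_fraction_logarithmic(40.0, 80.0, 3.0))
```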
Abstract:
Recently, semi-empirical models to estimate the flow boiling heat transfer coefficient, saturated CHF, and pressure drop in micro-scale channels have been proposed. Most of the models were developed based on elongated bubbles and annular flows, in view of the fact that these flow patterns are predominant in smaller channels. In these models, the liquid film thickness plays an important role, and this fact emphasizes that the accurate measurement of the liquid film thickness is a key point in validating them. On the other hand, several techniques have been successfully applied to measure liquid film thicknesses during condensation and evaporation under macro-scale conditions. However, although this subject has been targeted by several leading laboratories around the world, there seems to be no conclusive result describing a successful technique capable of measuring the dynamic liquid film thickness during evaporation inside micro-scale round channels. This work presents a comprehensive literature review of the methods used to measure liquid film thickness in macro- and micro-scale systems. The methods are described and the main difficulties related to their use in micro-scale systems are identified. Based on this discussion, the most promising methods to measure dynamic liquid film thickness in micro-scale channels are identified. (C) 2009 Elsevier Inc. All rights reserved.