939 results for measurement error models
Abstract:
Background: Reliability or validity studies are important for the evaluation of measurement error in dietary assessment methods. An approach to validation known as the method of triads uses triangulation techniques to calculate the validity coefficient of a food-frequency questionnaire (FFQ). Objective: To assess the validity of FFQ estimates of carotenoid and vitamin E intake against serum biomarker measurements and weighed food records (WFRs), by applying the method of triads. Design: The study population was a sub-sample of adult participants in a randomised controlled trial of beta-carotene and sunscreen in the prevention of skin cancer. Dietary intake was assessed by a self-administered FFQ and a WFR. Nonfasting blood samples were collected and plasma analysed for five carotenoids (alpha-carotene, beta-carotene, beta-cryptoxanthin, lutein, lycopene) and vitamin E. Correlation coefficients were calculated between each pair of measurement methods, and the validity coefficient was calculated using the method of triads. The 95% confidence intervals for the validity coefficients were estimated using bootstrap sampling. Results: The validity coefficients of the FFQ were highest for alpha-carotene (0.85) and lycopene (0.62), followed by beta-carotene (0.55) and total carotenoids (0.55), while the lowest validity coefficient was for lutein (0.19). The method of triads could not be used for beta-cryptoxanthin and vitamin E, as one of the three underlying correlations was negative. Conclusions: Results were similar to other studies of validity using biomarkers and the method of triads. For many dietary factors the upper limit of the validity coefficients was less than 0.5, and therefore only strong relationships between dietary exposure and disease will be detected.
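The core of the method of triads is a closed-form expression: the validity coefficient of the FFQ is the square root of the product of its correlations with the other two methods divided by the correlation between those two methods, and it is undefined when any of the three correlations is negative (as reported above for beta-cryptoxanthin and vitamin E). The sketch below is illustrative only; the variable names and resampling scheme are assumptions, not the study's code.

```python
# Minimal sketch of the method of triads with a percentile bootstrap CI, assuming
# three paired intake measures per subject: FFQ (q), weighed food record (w) and
# serum biomarker (b). Data and names are illustrative, not the study's dataset.
import numpy as np

def validity_coefficient(q, w, b):
    """Validity coefficient of the FFQ against 'true' intake via the method of triads."""
    r_qw = np.corrcoef(q, w)[0, 1]
    r_qb = np.corrcoef(q, b)[0, 1]
    r_wb = np.corrcoef(w, b)[0, 1]
    ratio = (r_qw * r_qb) / r_wb
    # If one of the underlying correlations is negative, the coefficient is undefined.
    return np.sqrt(ratio) if ratio > 0 else np.nan

def bootstrap_ci(q, w, b, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(q)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample subjects with replacement
        estimates.append(validity_coefficient(q[idx], w[idx], b[idx]))
    estimates = np.array(estimates)
    estimates = estimates[~np.isnan(estimates)]
    return np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```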
Abstract:
The sources of covariation among cognitive measures of Inspection Time, Choice Reaction Time, Delayed Response Speed and Accuracy, and IQ were examined in a classical twin design that included 245 monozygotic (MZ) and 298 dizygotic (DZ) twin pairs. Results indicated that a factor model comprising additive genetic and unique environmental effects was the most parsimonious. In this model, a general genetic cognitive factor emerged with factor loadings ranging from 0.28 to 0.64. Three other genetic factors explained the remaining genetic covariation between the various speed and Delayed Response measures and IQ. However, a large proportion of the genetic variation in verbal (54%) and performance (25%) IQ was unrelated to these lower-order cognitive measures. The independent genetic IQ variation may reflect information processes not captured by the elementary cognitive tasks, Inspection Time and Choice Reaction Time, or by our working memory task, Delayed Response. Unique environmental effects were mostly nonoverlapping, and partly represented test measurement error.
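As a point of reference for the classical twin design used here, additive genetic (A) and unique environmental (E) variance proportions can be approximated from the MZ and DZ twin correlations; the multivariate factor model in the study generalises this univariate logic to several measures at once. The snippet below is a simplified, univariate sketch with made-up correlations, not the authors' model.

```python
# Simplified univariate AE decomposition from twin correlations (Falconer-style),
# illustrating the logic behind the additive genetic + unique environment model
# described in the abstract. Correlations are hypothetical.
r_mz = 0.60   # hypothetical MZ twin correlation for one cognitive measure
r_dz = 0.32   # hypothetical DZ twin correlation

a2 = 2 * (r_mz - r_dz)   # additive genetic variance proportion
e2 = 1 - r_mz            # unique environment (includes measurement error)
print(f"A = {a2:.2f}, E = {e2:.2f}")
```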
Abstract:
Background Regression to the mean (RTM) is a statistical phenomenon that can make natural variation in repeated data look like real change. It happens when unusually large or small measurements tend to be followed by measurements that are closer to the mean. Methods We give some examples of the phenomenon, and discuss methods to overcome it at the design and analysis stages of a study. Results The effect of RTM in a sample becomes more noticeable with increasing measurement error and when follow-up measurements are only examined on a sub-sample selected using a baseline value. Conclusions RTM is a ubiquitous phenomenon in repeated data and should always be considered as a possible cause of an observed change. Its effect can be alleviated through better study design and use of suitable statistical methods.
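The mechanism is easy to demonstrate by simulation: if a sub-sample is selected on a high baseline value, its follow-up mean falls back toward the population mean even though nothing has changed, and the apparent change grows with the measurement error. A minimal sketch with illustrative values follows.

```python
# Quick simulation of regression to the mean (RTM): true scores are stable, noisy
# baseline and follow-up measurements are taken, and only subjects with high
# baseline values are followed up. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true = rng.normal(100, 10, n)                 # stable underlying quantity
for sd_error in (2, 5, 10):                   # increasing measurement error
    baseline = true + rng.normal(0, sd_error, n)
    follow_up = true + rng.normal(0, sd_error, n)
    selected = baseline > 115                 # sub-sample selected on baseline value
    drop = baseline[selected].mean() - follow_up[selected].mean()
    print(f"error SD {sd_error:>2}: apparent 'improvement' = {drop:.2f}")
# The apparent change grows with measurement error despite no real change.
```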
Abstract:
This study examined whether the effectiveness of human resource management (HRM) practices is contingent on organizational climate and competitive strategy. The concepts of internal and external fit suggest that the positive relationship between HRM and subsequent productivity will be stronger for firms with a positive organizational climate and for firms using differentiation strategies. Resource allocation theories of motivation, on the other hand, predict that the relationship between HRM and productivity will be stronger for firms with a poor climate, because employees working in these firms should have the greatest amount of spare capacity. The results supported the resource allocation argument.
Abstract:
Background: The epidemiology of a disease describes numbers of people becoming incident, being prevalent, recovering, surviving, and dying from the disease or from other causes. As a matter of accounting principle, the inflow, stock, and outflows must be compatible, and if we could observe completely every person involved, the epidemiologic estimates describing the disease would be consistent. Lack of consistency is an indicator for possible measurement error. Methods: We examined the consistency of estimates of incidence, prevalence, and excess mortality of dementia from the Rotterdam Study. We used the incidence and excess mortality estimates to calculate with a mathematical disease model a predicted prevalence, and compared the predicted to the observed prevalence. Results: Predicted prevalence is in most age groups lower than observed, and the difference between them is significant for some age groups. Conclusions: The observed discrepancy could be due to overestimates of prevalence or excess mortality, or an underestimate of incidence, or a combination of all three. We conclude from an analysis of possible causes that it is not possible to say which contributes most to the discrepancy. Estimating dementia incidence in an aging cohort presents a dilemma: with a short follow-up border-line incident cases are easily missed, and with longer follow-up measurement problems increase due to the associated aging of the cohort. Checking for consistency is a useful strategy to signal possible measurement error, but some sources of error may be impossible to avoid.
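The consistency check rests on simple bookkeeping: given age-specific incidence, general mortality and excess mortality among the diseased, prevalence at the next age follows from the flows into and out of the diseased state. A discrete-time sketch of that accounting is shown below; the study's actual disease model is not reproduced here, and the rates are placeholders.

```python
# Discrete-time illness-death accounting: predict prevalence from incidence and
# (excess) mortality, then compare with observed prevalence. Rates are placeholders.
def predict_prevalence(incidence, mortality, excess_mortality, p0=0.0):
    """Predicted prevalence trajectory from annual transition probabilities."""
    healthy, diseased = 1.0 - p0, p0
    out = [p0]
    for inc, mort, em in zip(incidence, mortality, excess_mortality):
        new_cases = healthy * inc                          # incident cases this year
        healthy = (healthy - new_cases) * (1 - mort)       # survivors without the disease
        diseased = (diseased + new_cases) * (1 - mort - em)  # survivors with the disease
        out.append(diseased / (healthy + diseased))
    return out

# Example: five years of hypothetical rates for one age group.
print(predict_prevalence(incidence=[0.02] * 5,
                         mortality=[0.05] * 5,
                         excess_mortality=[0.10] * 5,
                         p0=0.05))
```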
Abstract:
There is some evidence that dietary factors may modify the risk of squamous cell carcinoma (SCC) of the skin, but the association between food intake and SCC has not been evaluated prospectively. We examined the association between food intake and SCC incidence among 1,056 randomly selected adults living in an Australian sub-tropical community. Measurement-error-corrected estimates of intake in 15 food groups were derived from a validated food frequency questionnaire in 1992. Associations with SCC risk were assessed using Poisson and negative binomial regression applied to the numbers of persons affected and to tumour counts, respectively, based on incident, histologically confirmed tumours occurring between 1992 and 2002. After multivariable adjustment, none of the food groups was significantly associated with SCC risk. Stratified analysis in participants with a past history of skin cancer showed a decreased risk of SCC tumours for high intakes of green leafy vegetables (RR = 0.45, 95% CI: 0.22-0.91; p for trend = 0.02) and an increased risk for high intake of unmodified dairy products (RR = 2.53, 95% CI: 1.15-5.54; p for trend = 0.03). Food intake was not associated with SCC risk in persons who had no past history of skin cancer. These findings suggest that consumption of green leafy vegetables may help prevent development of subsequent SCCs of the skin among people with previous skin cancer, and that consumption of unmodified dairy products, such as whole milk, cheese and yoghurt, may increase SCC risk in susceptible persons.
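For the analysis step, person-level tumour counts can be related to a categorised intake variable with Poisson or negative binomial regression. The sketch below uses synthetic data and the statsmodels GLM interface; the variable names and exposure categories are invented, not those of the study.

```python
# Sketch: relate tumour counts to an intake category with Poisson and negative
# binomial GLMs. Data are synthetic; variable names are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
intake_tertile = rng.integers(0, 3, n)            # 0 = low, 1 = medium, 2 = high
log_mu = -1.0 - 0.3 * intake_tertile              # lower tumour rate at higher intake
counts = rng.poisson(np.exp(log_mu))              # simulated tumour counts

X = sm.add_constant(intake_tertile.astype(float)) # linear trend across tertiles
poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit()
print(np.exp(poisson_fit.params[1]))              # rate ratio per tertile increase
print(np.exp(negbin_fit.params[1]))               # should be similar here
```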
Abstract:
Purpose: This study was conducted to examine the test-retest reliability of a measure of prediagnosis physical activity participation administered to colorectal cancer survivors recruited from a population-based state cancer registry. Methods: A total of 112 participants completed two telephone interviews, 1 month apart, reporting usual weekly physical activity in the year before their cancer diagnosis. Intraclass correlation coefficients (ICC) and the standard error of measurement (SEM) were used to describe the test-retest reliability of the measure across the sample; the Bland-Altman approach was used to describe reliability at the individual level. The test-retest reliability for categorized total physical activity (active, insufficiently active, sedentary) was assessed using the kappa statistic. Results: When the complete sample was considered, the ICC ranged from 0.40 (95% CI: 0.24, 0.55) for vigorous gardening to 0.77 (95% CI: 0.68, 0.84) for moderate physical activity. The SEMs, however, were large, indicating high measurement error. The Bland-Altman plots indicated that the reproducibility of the data decreases as the amount of physical activity reported each week increases. The kappa coefficient for the categorized data was 0.62 (95% CI: 0.48, 0.76). Conclusion: Overall, the results indicated low levels of repeatability for this measure of historical physical activity. Categorizing participants as active, insufficiently active, or sedentary provides a higher level of test-retest reliability.
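The reliability statistics used here are short computations: an intraclass correlation from the two interviews, the standard error of measurement derived from it, and Bland-Altman limits of agreement for individual-level reproducibility (kappa for the categorised data can be computed analogously). The sketch below uses simulated data and the two-way random, single-measure ICC(2,1) of Shrout and Fleiss; it is not the study's code.

```python
# Test-retest reliability statistics for two administrations of a questionnaire.
# Data are simulated; ICC is the two-way random, single-measure ICC(2,1).
import numpy as np

rng = np.random.default_rng(3)
true = rng.gamma(2.0, 200.0, 112)                      # 'usual' weekly activity (min)
test = true + rng.normal(0, 120, 112)                  # interview 1
retest = true + rng.normal(0, 120, 112)                # interview 2
data = np.column_stack([test, retest])                 # n subjects x k occasions
n, k = data.shape

grand = data.mean()
ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between subjects
ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between occasions
ss_err = ((data - data.mean(axis=1, keepdims=True)
                - data.mean(axis=0, keepdims=True) + grand) ** 2).sum()
ms_err = ss_err / ((n - 1) * (k - 1))

icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
sem = data.std(ddof=1) * np.sqrt(1 - icc)              # one common form of the SEM

diff = test - retest                                   # Bland-Altman limits of agreement
loa = (diff.mean() - 1.96 * diff.std(ddof=1), diff.mean() + 1.96 * diff.std(ddof=1))
print(icc, sem, loa)
```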
Abstract:
The consensus from published studies is that plasma lipids are each influenced by genetic factors, and that this contributes to genetic variation in risk of cardiovascular disease. Heritability estimates for lipids and lipoproteins are in the range .48 to .87 when measured once per study participant. However, this ignores the confounding effects of biological variation, measurement error and ageing, and a truer assessment of genetic effects on cardiovascular risk may be obtained from analysis of longitudinal twin or family data. We have analyzed information on plasma high-density lipoprotein (HDL) and low-density lipoprotein (LDL) cholesterol, and triglycerides, from 415 adult twins who provided blood on two to five occasions over 10 to 17 years. Multivariate modeling of genetic and environmental contributions to variation within and across occasions was used to assess the extent to which genetic and environmental factors have long-term effects on plasma lipids. Results indicated that more than one genetic factor influenced HDL and LDL components of cholesterol, and triglycerides, over time in all studies. Nonshared environmental factors did not have significant long-term effects except for HDL. We conclude that when heritability of lipid risk factors is estimated on only one occasion, the existence of biological variation and measurement errors leads to underestimation of the importance of genetic factors as a cause of variation in long-term risk within the population. In addition, our data suggest that different genes may affect the risk profile at different ages.
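The underestimation argument can be stated with a single-occasion variance decomposition: occasion-specific biological fluctuation and assay error inflate the non-shared environmental variance, so the single-measurement heritability is an attenuated version of the heritability of the long-term average. A toy illustration with hypothetical variance components:

```python
# Toy illustration: heritability of a lipid measured once versus the long-term
# average of repeated measures. Variance components are hypothetical.
var_genetic = 0.60          # stable additive genetic variance
var_env_longterm = 0.20     # stable environmental variance
var_occasion = 0.20         # biological fluctuation + assay error at one visit

h2_single = var_genetic / (var_genetic + var_env_longterm + var_occasion)
h2_longterm = var_genetic / (var_genetic + var_env_longterm)
print(f"single occasion: {h2_single:.2f}, long-term average: {h2_longterm:.2f}")
```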
Bias, precision and heritability of self-reported and clinically measured height in Australian twins
Abstract:
Many studies of quantitative and disease traits in human genetics rely upon self-reported measures. Such measures are based on questionnaires or interviews and are often cheaper and more readily available than alternatives. However, the precision and potential bias cannot usually be assessed. Here we report a detailed quantitative genetic analysis of stature. We characterise the degree of measurement error by utilising a large sample of Australian twin pairs (857 MZ, 815 DZ) with both clinical and self-reported measures of height. Self-reported height measurements are shown to be more variable than clinical measures. This has led to lowered estimates of heritability in many previous studies of stature. In our twin sample the heritability estimate for clinical height exceeded 90%. Repeated measures analysis shows that 2-3 times as many self-report measures are required to recover heritability estimates similar to those obtained from clinical measures. Bivariate genetic repeated measures analysis of self-report and clinical height measures showed an additive genetic correlation > 0.98. We show that self-reported height is upwardly biased in older individuals and in individuals of short stature. By comparing clinical and self-report measures we also showed that there was a genetic component to females systematically reporting their height incorrectly; this phenomenon appeared not to be present in males. The results from the measurement error analysis were subsequently used to assess the effects of error on the power to detect linkage in a genome scan. Moderate reduction in error (through the use of accurate clinical or multiple self-report measures) increased the effective sample size by 22%; elimination of measurement error led to increases in effective sample size of 41%.
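The trade-off between the number of self-report measures and the recovered heritability follows from averaging out measurement error: the reliability of a mean of k reports increases according to the Spearman-Brown relation, and the heritability estimated from that mean approaches the clinical-measure value. A sketch with hypothetical reliabilities (the specific numbers are not from the paper):

```python
# How many self-report measures does it take to approach the heritability obtained
# from a clinical measure? Spearman-Brown sketch with hypothetical reliabilities.
h2_true = 0.92              # heritability of true stature (hypothetical)
rel_clinical = 0.98         # reliability of one clinical height measurement (hypothetical)
rel_self = 0.90             # reliability of one self-report (hypothetical)

def h2_observed(reliability, h2=h2_true):
    return h2 * reliability                       # measurement error attenuates the estimate

def reliability_of_mean(rel, k):
    return k * rel / (1 + (k - 1) * rel)          # Spearman-Brown prophecy formula

print("clinical:", round(h2_observed(rel_clinical), 3))
for k in (1, 2, 3):
    print(f"{k} self-report(s):", round(h2_observed(reliability_of_mean(rel_self, k)), 3))
```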
Abstract:
1. Pearson's correlation coefficient only tests whether the data fit a linear model. With large numbers of observations, quite small values of r become significant and the X variable may account for only a minute proportion of the variance in Y. Hence, the value of r squared should always be calculated and included in a discussion of the significance of r. 2. The use of r assumes that a bivariate normal distribution is present, and this assumption should be examined prior to the study. If Pearson's r is not appropriate, then a non-parametric correlation coefficient such as Spearman's rs may be used. 3. A significant correlation should not be interpreted as indicating causation, especially in observational studies in which there is a high probability that the two variables are correlated because of their mutual correlations with other variables. 4. In studies of measurement error, there are problems in using r as a test of reliability and the 'intra-class correlation coefficient' should be used as an alternative. A correlation test provides only limited information as to the relationship between two variables. Fitting a regression line to the data using the method known as 'least squares' provides much more information, and the methods of regression and their application in optometry will be discussed in the next article.
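The points about r, r squared and the rank-based alternative are easy to check directly; a small example with scipy on synthetic data:

```python
# r, r-squared and Spearman's rank correlation on synthetic data: with enough
# observations a tiny r is 'significant' yet explains almost none of the variance.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 0.05 * x + rng.normal(size=10_000)     # X explains only a sliver of Y

r, p = pearsonr(x, y)
rho, p_rho = spearmanr(x, y)
print(f"r = {r:.3f} (p = {p:.3g}), r^2 = {r**2:.4f}, Spearman rho = {rho:.3f}")
```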
Abstract:
This study presents some quantitative evidence, from a number of simulation experiments, on the accuracy of the productivity growth estimates derived from growth accounting (GA) and frontier-based methods (namely data envelopment analysis-, corrected ordinary least squares-, and stochastic frontier analysis-based Malmquist indices) under various conditions. These include the presence of technical inefficiency, measurement error, misspecification of the production function (for the GA and parametric approaches) and increased input and price volatility from one period to the next. The study finds that the frontier-based methods usually outperform GA, but the overall performance varies by experiment. Parametric approaches generally perform best when there is no functional form misspecification, but their accuracy greatly diminishes otherwise. The results also show that the deterministic approaches perform adequately even under conditions of (modest) measurement error, and that when measurement error becomes larger, the accuracy of all approaches (including stochastic approaches) deteriorates rapidly, to the point that their estimates could be considered unreliable for policy purposes.
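For reference, the growth-accounting estimate that the frontier methods are compared against is the standard Solow residual: productivity growth is output growth minus the share-weighted growth of inputs, which is why it absorbs measurement error and inefficiency into "productivity". A minimal sketch with illustrative figures (the paper's simulation design is not reproduced here):

```python
# Growth-accounting productivity estimate (Solow residual) for one period.
# Illustrative numbers only.
import math

def tfp_growth(y0, y1, k0, k1, l0, l1, capital_share=0.3):
    dln_y = math.log(y1 / y0)          # output growth
    dln_k = math.log(k1 / k0)          # capital input growth
    dln_l = math.log(l1 / l0)          # labour input growth
    return dln_y - capital_share * dln_k - (1 - capital_share) * dln_l

print(tfp_growth(y0=100, y1=104, k0=50, k1=51, l0=80, l1=80.8))
```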
Abstract:
The growing complexity and functional importance of integrated navigation systems (INS) lead to high losses when the equipment fails. This paper is devoted to the development of an INS diagnosis system that makes it possible to identify the cause of a malfunction. The proposed solutions allow changes in the dynamic and accuracy characteristics of the sensors to be taken into account through the coefficients of the appropriate error models. Under actual operating conditions of an INS, the determination of the current values of the sensor model and estimation filter parameters relies on identification procedures. Results of full-scale experiments are given that corroborate the expediency of parametric identification of INS error models during bench testing.
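The identification step described here amounts to estimating the coefficients of a sensor error model from recorded data. As a heavily simplified illustration (not the paper's filter or error model), a constant bias and scale-factor error can be identified by least squares from a bench test where the reference input is known:

```python
# Simplified bench-test identification of a sensor error model y = (1 + k) * x + b,
# where k is a scale-factor error and b a bias. Illustrative only.
import numpy as np

rng = np.random.default_rng(5)
x_ref = np.linspace(-10, 10, 200)                 # known reference input on the bench
y_meas = 1.002 * x_ref + 0.05 + rng.normal(0, 0.01, x_ref.size)  # simulated sensor output

A = np.column_stack([x_ref, np.ones_like(x_ref)])
(scale, bias), *_ = np.linalg.lstsq(A, y_meas, rcond=None)
print(f"scale-factor error k = {scale - 1:.4f}, bias b = {bias:.4f}")
```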
Abstract:
Location estimation is important for wireless sensor network (WSN) applications. In this paper we propose a Cramer-Rao Bound (CRB) based analytical approach for two centralized multi-hop localization algorithms to gain insight into the error performance and its sensitivity to the distance measurement error, anchor node density and placement. The location estimation performance is compared with four distributed multi-hop localization algorithms by simulation to evaluate the efficiency of the proposed analytical approach. The numerical results demonstrate the complex trade-off between centralized and distributed localization algorithms in terms of accuracy, complexity and communication overhead. Based on this analysis, an efficient and scalable performance evaluation tool can be designed for localization algorithms in large-scale WSNs, where simulation-based evaluation approaches are impractical. © 2013 IEEE.
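For range-based localization with Gaussian distance measurement error, this kind of bound comes from the Fisher information accumulated over the anchors. The single-hop case is only a few lines and is shown below as a hedged sketch; the paper's multi-hop derivation is more involved and is not reproduced here.

```python
# Cramer-Rao bound sketch for a single node localized from range measurements to
# known anchors, assuming i.i.d. Gaussian range noise. Single-hop simplification.
import numpy as np

def position_crb(node, anchors, sigma):
    """Trace of the CRB on the 2-D position estimate (lower bound on MSE)."""
    fim = np.zeros((2, 2))
    for a in anchors:
        d = node - a
        u = d / np.linalg.norm(d)            # unit vector from anchor to node
        fim += np.outer(u, u) / sigma**2     # Fisher information contribution
    return np.trace(np.linalg.inv(fim))

node = np.array([5.0, 5.0])
anchors = [np.array(p) for p in [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]]
print(position_crb(node, anchors, sigma=0.5))   # grows with sigma, shrinks with more anchors
```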
Abstract:
In the Light Controlled Factory, part-to-part assembly and reduced weight will be enabled through the use of predictive fitting processes; low-cost, high-accuracy reconfigurable tooling will be made possible by active compensation; improved control will allow accurate robotic machining; and quality will be improved through the use of traceable, uncertainty-based quality control throughout the production system. A number of challenges must be overcome before this vision can be realized: 1) controlling industrial robots for accurate machining; 2) compensation of measurements for thermal expansion; 3) compensation of measurements for refractive index changes; 4) development of Embedded Metrology Tooling for in-tooling measurement and active tooling compensation; and 5) development of Software for the Planning and Control of Integrated Metrology Networks based on Quality Control with Uncertainty Evaluation and control systems for predictive processes. This paper describes how these challenges are being addressed, in particular the central challenge of developing large volume measurement process models within an integrated dimensional variation management (IDVM) system.
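Challenge 2, compensating measurements for thermal expansion, reduces in its simplest form to scaling each measured length back to the 20 °C reference temperature using the part's coefficient of expansion. A minimal sketch follows; the coefficient used is a typical figure for aluminium alloys and is only an illustration, not a value from the paper.

```python
# Scale a length measured at temperature T back to its 20 degC reference length,
# using a linear coefficient of thermal expansion. Values are illustrative.
def length_at_20c(measured_length_mm, temp_c, alpha_per_c=23e-6):
    """alpha_per_c ~ 23e-6 per degC is typical for aluminium alloys."""
    return measured_length_mm / (1 + alpha_per_c * (temp_c - 20.0))

print(length_at_20c(1000.05, temp_c=26.0))   # removes ~0.14 mm of apparent growth
```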
Abstract:
Large-scale mechanical products, such as aircraft and rockets, consist of large numbers of small components, which introduces additional difficulty for assembly accuracy and error estimation. Planar surfaces as key product characteristics are usually utilised for positioning small components in the assembly process. This paper focuses on assembly accuracy analysis of small components with planar surfaces in large-volume products. To evaluate the accuracy of the assembly system, an error propagation model for measurement error and fixture error is proposed, based on the assumption that all errors are normally distributed. In this model, a general coordinate vector is adopted to represent the position of the components. The error transmission functions are simplified into a linear model, and the coordinates of the reference points are composed of a theoretical value and a random error. The installation of a Head-Up Display is taken as an example to analyse the assembly error of small components based on the propagation model. The result shows that the final coordination accuracy is mainly determined by the measurement error of the planar surface on the small components. To reduce the uncertainty of the plane measurement, an evaluation index for the measurement strategy is presented. This index reflects the distribution of the sampling point set and can be calculated from an inertia moment matrix. Finally, a practical application is introduced to validate the evaluation index.
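The propagation model described here amounts to pushing the measurement and fixture error covariances through the linearised error transmission functions; under the normality assumption the output covariance is J Sigma J^T. A generic sketch is given below; the Jacobian and covariance values are placeholders, not the paper's model.

```python
# Linearised error propagation: output covariance = J * Sigma_in * J^T, assuming
# normally distributed measurement and fixture errors. Matrices are placeholders.
import numpy as np

J = np.array([[1.0, 0.2, 0.0],      # Jacobian of the (linearised) error
              [0.0, 1.0, 0.5]])     # transmission functions
sigma_measurement = np.diag([0.02**2, 0.02**2, 0.05**2])   # measurement error covariance
sigma_fixture = np.diag([0.01**2, 0.01**2, 0.01**2])       # fixture error covariance

sigma_out = J @ (sigma_measurement + sigma_fixture) @ J.T
print(np.sqrt(np.diag(sigma_out)))  # standard deviations of the assembled feature position
```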