993 results for RESPONSE ERROR


Relevance: 60.00%

Abstract:

Mixed models may be defined with or without reference to sampling, and can be used to predict realized random effects, as when estimating the latent values of study subjects measured with response error. When the model is specified without reference to sampling, a simple mixed model includes two random variables, one stemming from an exchangeable distribution of latent values of study subjects and the other from the study subjects' response error distributions. Positive probabilities are assigned both to potentially realizable responses and to artificial responses that are not potentially realizable, resulting in artificial latent values. In contrast, finite population mixed models represent the two-stage process of sampling subjects and measuring their responses, where positive probabilities are assigned only to potentially realizable responses. A comparison of the estimators over the same potentially realizable responses indicates that the optimal linear mixed model estimator (the usual best linear unbiased predictor, BLUP) is often (but not always) more accurate than the comparable finite population mixed model estimator (the FPMM BLUP). We examine a simple example and provide the basis for a broader discussion of the role of conditioning, sampling, and model assumptions in developing inference.
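As a concrete illustration, the linear mixed model BLUP of a subject's latent value is a shrinkage estimator; a minimal sketch in Python, with illustrative values and assuming the variance components are known:

```python
# Hedged sketch of the BLUP for a subject's latent value in a simple mixed
# model. Function name and numbers are illustrative, not from the paper.
def blup_latent(y_bar_i, mu, var_latent, var_error, n_i):
    """Shrink subject i's sample mean toward the overall mean.

    y_bar_i    : mean of subject i's n_i error-prone responses
    mu         : mean of the latent-value distribution
    var_latent : variance of latent values across subjects
    var_error  : response-error variance within a subject
    """
    shrinkage = var_latent / (var_latent + var_error / n_i)
    return mu + shrinkage * (y_bar_i - mu)

# A subject measured 4 times with sample mean 12.0, population mean 10.0,
# is pulled toward 10.0 by the shrinkage factor 4 / (4 + 4/4) = 0.8:
est = blup_latent(12.0, 10.0, var_latent=4.0, var_error=4.0, n_i=4)
```

The shrinkage factor moves toward 1 as the number of measurements per subject grows, so the BLUP approaches the raw sample mean when response error is averaged away.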

Relevance: 60.00%

Abstract:

Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptual infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model with the additional assumptions that all variance components are known and that within-cluster variances are equal have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method-of-moments estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best and, when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. (c) 2007 Elsevier B.V. All rights reserved.
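The method-of-moments step for obtaining empirical predictors can be sketched with the standard ANOVA estimators for a balanced two-stage sample; the data layout and names are illustrative, not the authors' notation:

```python
# Hedged sketch: ANOVA (method-of-moments) variance component estimators
# for a balanced design with m clusters of n units each.
import numpy as np

def moment_estimators(data):
    """data: 2-D array, rows = clusters, columns = units (balanced design).

    Returns (between-cluster variance, within-cluster variance) estimates.
    """
    m, n = data.shape
    cluster_means = data.mean(axis=1)
    msb = n * cluster_means.var(ddof=1)       # between-cluster mean square
    msw = data.var(axis=1, ddof=1).mean()     # within-cluster mean square
    var_within = msw
    var_between = max(0.0, (msb - msw) / n)   # truncated at zero
    return var_between, var_within

# Two clusters of two units each:
var_b, var_w = moment_estimators(np.array([[1.0, 3.0], [5.0, 7.0]]))
```

Plugging these estimates into the predictor in place of the known variance components yields the empirical predictor whose stability the simulation study evaluates.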

Relevance: 60.00%

Abstract:

Prediction of random effects is an important problem with expanding applications. In the simplest context, the problem corresponds to prediction of the latent value (the mean) of a realized cluster selected via two-stage sampling. Recently, Stanek and Singer [Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130] developed best linear unbiased predictors (BLUP) under a finite population mixed model that outperform BLUPs from mixed models and superpopulation models. Their setup, however, does not allow for unequally sized clusters. To overcome this drawback, we consider an expanded finite population mixed model based on a larger set of random variables that span a higher dimensional space than those typically applied to such problems. We show that BLUPs for linear combinations of the realized cluster means derived under such a model have considerably smaller mean squared error (MSE) than those obtained from mixed models, superpopulation models, and finite population mixed models. We motivate our general approach by an example developed for two-stage cluster sampling and show that it faithfully captures the stochastic aspects of sampling in the problem. We also consider simulation studies to illustrate the increased accuracy of the BLUP obtained under the expanded finite population mixed model. (C) 2007 Elsevier B.V. All rights reserved.

Relevance: 60.00%

Abstract:

Background Existing lower-limb, region-specific, patient-reported outcome measures have clinimetric limitations, including limitations in psychometric characteristics (eg, lack of internal consistency, lack of responsiveness, measurement error) and the lack of reported practical and general characteristics. A new patient-reported outcome measure, the Lower Limb Functional Index (LLFI), was developed to address these limitations. Objective The purpose of this study was to overcome recognized deficiencies in existing lower-limb, region-specific, patient-reported outcome measures through: (1) development of a new lower-extremity outcome scale (ie, the LLFI) and (2) evaluation of the clinimetric properties of the LLFI using the Lower Extremity Functional Scale (LEFS) as a criterion measure. Design This was a prospective observational study. Methods The LLFI was developed in a 3-stage process of: (1) item generation, (2) item reduction with an expert panel, and (3) pilot field testing (n=18) for reliability, responsiveness, and sample size requirements for a larger study. The main study used a convenience sample (n=127) from 10 physical therapy clinics. Participants completed the LLFI and LEFS every 2 weeks for 6 weeks and then every 4 weeks until discharge. Data were used to assess the psychometric, practical, and general characteristics of the LLFI and the LEFS. The characteristics also were evaluated for overall performance using the Measurement of Outcome Measures and Bot clinimetric assessment scales. Results The LLFI and LEFS demonstrated a single-factor structure, comparable reliability (intraclass correlation coefficient [2,1]=.97), scale width, and high criterion validity (Pearson r=.88, with 95% confidence interval [CI]). Clinimetric performance was higher for the LLFI compared with the LEFS on the Measurement of Outcome Measures scale (96% and 95%, respectively) and the Bot scale (100% and 83%, respectively). 
The LLFI, compared with the LEFS, had improved responsiveness (standardized response mean=1.75 and 1.64, respectively), minimal detectable change with 90% CI (6.6% and 8.1%, respectively), and internal consistency (α=.91 and .95, respectively), as well as better readability, with reduced user error and shorter completion and scoring times. Limitations Limitations of the study were that only participants recruited from outpatient physical therapy clinics were included and that no specific conditions or diagnostic subgroups were investigated. Conclusion The LLFI demonstrated sound clinimetric properties, with lower response error, more efficient completion and scoring, and improved responsiveness and overall performance compared with the LEFS. The LLFI is suitable for assessment of lower-limb function.
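The responsiveness and error indices reported here (SRM, MDC at 90% confidence) follow from standard psychometric formulas; a sketch, assuming an illustrative scale SD since the abstract reports the ICC but not the score SD:

```python
# Hedged sketch of standard reliability/responsiveness indices.
# The SD of 20 points is an assumption for illustration only.
import math

def sem(sd, icc):
    """Standard error of measurement from scale SD and test-retest ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc(sd, icc, z=1.645):
    """Minimal detectable change at 90% confidence (z = 1.645)."""
    return z * sem(sd, icc) * math.sqrt(2.0)

def srm(mean_change, sd_change):
    """Standardized response mean: mean change over SD of change scores."""
    return mean_change / sd_change

# With the abstract's ICC of .97 and an assumed 20-point scale SD:
change_threshold = mdc(20.0, 0.97)
```

A measured change smaller than the MDC cannot be distinguished from measurement error at the chosen confidence level, which is why the MDC is reported alongside the ICC.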

Relevance: 60.00%

Abstract:

Study purpose. Genetic advances are significantly impacting healthcare, yet recent studies of ethnic group participation in genetic services demonstrate low utilization rates by Latinos. Limited genetic knowledge is a major barrier. The purpose of this study was to field test items in a Spanish-language instrument that will be used to measure genetic knowledge relevant to type 2 diabetes among members of the ethnically heterogeneous U.S. Latino community. Accurate genetic knowledge measurement can provide the foundation for interventions to enhance genetic service utilization.
Design. Three waves of cognitive interviews were conducted in Spanish to field test 44 instrument items. Thirty-six Latinos participated, with 12 persons each representing Mexican, Central and South American, and Cuban heritage, including 7 males and 29 females between 22 and 60 years of age; 17 participants had 12 years or less of education.
Methods. Text narratives from transcriptions of audiotaped interviews were qualitatively analyzed using a coding strategy to indicate potential sources of response error. Through an iterative process of instrument refinement, codes that emerged from the data were used to guide item revisions at the conclusion of each phase; revised items were examined in subsequent interview waves.
Results. Inter-cultural and cross-cultural themes associated with difficulties in interpretation and grammatical structuring of items were identified; difficulties associated with comprehension reflected variations in educational level. Of the original 44 items, 32 were retained, 89% of which were revised. Six additional items reflective of cultural knowledge were constructed, resulting in a 38-item instrument.
Conclusions. Use of cognitive interviewing provided a valuable tool for detecting both potential sources of response error and cultural variations in these sources. Analysis of interview data guided successive instrument revisions, leading to improved item interpretability and comprehension. Although testing in a larger sample will be essential to establish validity and reliability, the outcome of field testing suggests initial content validity of a Spanish-language instrument to measure genetic knowledge relative to type 2 diabetes.
Keywords. Latinos, genetic knowledge, instrument development, cognitive interviewing

Relevance: 60.00%

Abstract:

Epidemiologic studies of mental disorder have called attention to the need for identifying untreated cases and to the inadequacies of the instruments available for this purpose. Accurate case ascertainment devices are the basis of sound epidemiology. Without these, neither case classification nor analytic studies of risk factors are possible.
The purpose of this research was to examine the reliability and validity of an instrument designed to measure depressive symptoms in community populations: the Center for Epidemiologic Studies Depression Scale (CES-D Scale). Two particular foci of the study were whether the scale had the same statistical structure across three ethnic groups and whether the magnitude and pattern of rates of symptoms for these groups were affected by one source of response error, that due to response tendencies. The effects of age and education on the pattern and magnitude of rates were also examined. In addition, the reliability and validity of the measures of response tendencies were assessed.
The study population consisted of residents of Alameda County, California. A stratified sample of approximately 700 whites, blacks, and Mexican-Americans was interviewed in the summer and fall of 1978.
The results of the analysis indicated that the scale was reliable and measured a similar content domain across the three ethnic groups. The unadjusted sex- and ethnic-specific rates of depressive symptoms showed an ethnic pattern for both sexes: rates for whites were lowest, those for Mexican-Americans were highest, and those for blacks were intermediate. Measures of response tendencies (need for social approval, trait desirability, and acquiescence) affected the magnitude of the rates for most comparisons. Likewise, the pattern of rates changed somewhat from that originally observed. The one fairly consistent observation was that rates for Mexican-American women were higher than those for the other two female subgroups in most of the comparisons. These results must be considered in the context of the reliability and validity assessment of the measures of response tendencies, which indicated the tenuousness of these measures.
Age affected the ethnic pattern of rates for men in an inconsistent way; for women, Mexican-Americans continued to have higher rates than whites or blacks in all age categories. Education affected the magnitude of rates for women but not for men. For both men and women, Mexican-Americans had higher rates in all educational strata. Rates for women showed an inverse association with education while those for men did not.

Relevance: 60.00%

Abstract:

As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns making such patterns virtually invisible to respondents. Like many new technologies, however, online questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. Since these trends indicate that familiarity with information technologies is increasing, they suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing, positively reinforce the advantages of online-questionnaire delivery.
The second error type, the non-response error, occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online questionnaires makes estimation of questionnaire length and time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6]. For online questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated. Sampling, measurement, and non-response errors are likely to occur when an online questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realized.
To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaire design (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online questionnaires reduce traditional delivery costs (e.g. paper, mail out, and data entry), set-up costs can be high given the need either to adopt, and acquire training in, questionnaire development software or to secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge in questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.

Relevance: 30.00%

Abstract:

In this work, the effects of indenter tip roundness on the load-depth indentation curves were analyzed using finite element modeling. The tip roundness level was studied based on the ratio between tip radius and maximum penetration depth (R/h(max)), which varied from 0.02 to 1. The proportional curvature constant (C), the exponent of depth during loading (alpha), the initial unloading slope (S), the correction factor (beta), the level of piling-up or sinking-in (h(c)/h(max)), and the ratio h(max)/h(f) are shown to be strongly influenced by the ratio R/h(max). The hardness (H) was found to be independent of R/h(max) in the range studied. The Oliver and Pharr method was successful in following the variation of h(c)/h(max) with the ratio R/h(max) through the variation of S with the ratio R/h(max). However, this work confirmed the differences between the hardness values calculated using the Oliver-Pharr method and those obtained directly from finite element calculations; differences which derive from the error in area calculation that occurs when given combinations of indented material properties are present. The ratio of plastic work to total work (W(p)/W(t)) was found to be independent of the ratio R/h(max), which demonstrates that the methods for the calculation of mechanical properties based on the indentation energy are potentially not susceptible to errors caused by tip roundness.
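The quantities named above (the loading-curve constants C and alpha, and the Oliver-Pharr contact depth and hardness) can be recovered from load-depth data; a sketch with synthetic data, where the ideal-Berkovich area function and all numbers are assumptions, not the paper's finite element setup:

```python
# Hedged sketch of the loading-curve fit P = C * h**alpha and the
# Oliver-Pharr hardness estimate for an ideal Berkovich tip.
import numpy as np

def fit_loading_curve(h, P):
    """Fit P = C * h**alpha by least squares in log-log space."""
    alpha, logC = np.polyfit(np.log(h), np.log(P), 1)
    return np.exp(logC), alpha

def oliver_pharr_hardness(P_max, h_max, S, eps=0.75):
    """Oliver-Pharr hardness from the unloading data.

    S   : initial unloading slope dP/dh at h_max
    eps : geometry factor (0.75 for a paraboloid of revolution)
    """
    h_c = h_max - eps * P_max / S      # contact depth
    A = 24.5 * h_c ** 2                # ideal Berkovich area function
    return P_max / A

# Synthetic loading data generated from P = 2 h^2 recovers C = 2, alpha = 2:
h = np.linspace(0.1, 1.0, 50)
C, alpha = fit_loading_curve(h, 2.0 * h ** 2)
```

The error-in-area effect discussed in the abstract enters through the area function A: when piling-up makes the true contact area larger than 24.5 h_c^2, the hardness computed this way is overestimated.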

Relevance: 30.00%

Abstract:

In this work, the effects of conical indentation variables on the load-depth indentation curves were analyzed using finite element modeling and dimensional analysis. A factorial design 2(6) was used with the aim of quantifying the effects of the mechanical properties of the indented material and of the indenter geometry. Analysis was based on the input variables Y/E, R/h(max), n, theta, E, and h(max). The dimensional variables E and h(max) were used such that each value of dimensionless Y/E was obtained with two different values of E and each value of dimensionless R/h(max) was obtained with two different h(max) values. A set of dimensionless functions was defined to analyze the effect of the input variables: Pi(1) = P(1)/Eh(2), Pi(2) = h(c)/h, Pi(3) = H/Y, Pi(4) = S/Eh(max), Pi(6) = h(max)/h(f), and Pi(7) = W(P)/W(T). These six functions were found to depend only on the dimensionless variables studied (Y/E, R/h(max), n, theta). Another dimensionless function, Pi(5) = beta, was not well defined for most of the dimensionless variables, and the only variable that provided a significant effect on beta was theta. However, beta showed a strong dependence on the fraction of the data selected to fit the unloading curve, which means that beta is especially susceptible to the error in the calculation of the initial unloading slope.
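The 2(6) factorial design over the six input variables can be generated mechanically; the coded ±1 levels below are placeholders for the actual low/high values used in the study:

```python
# Hedged sketch: enumerating a 2^6 full factorial design over the six
# input variables named in the abstract. Coded -1/+1 levels stand in
# for the study's actual low/high values.
from itertools import product

factors = ["Y/E", "R/h_max", "n", "theta", "E", "h_max"]

design = [dict(zip(factors, combo))
          for combo in product((-1, +1), repeat=len(factors))]
```

Each of the 64 runs assigns one low/high level to every factor, which is what allows main effects and interactions of the material and geometry variables to be separated.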

Relevance: 30.00%

Abstract:

Exposure to oxygen may induce a lack of functionality of probiotic dairy foods because the anaerobic metabolism of probiotic bacteria compromises during storage the maintenance of their viability to provide benefits to consumer health. Glucose oxidase can constitute a potential alternative to increase the survival of probiotic bacteria in yogurt because it consumes the oxygen permeating to the inside of the pot during storage, thus making it possible to avoid the use of chemical additives. This research aimed to optimize the processing of probiotic yogurt supplemented with glucose oxidase using response surface methodology and to determine the levels of glucose and glucose oxidase that minimize the concentration of dissolved oxygen and maximize the Bifidobacterium longum count by the desirability function. Response surface methodology mathematical models adequately described the process, with adjusted determination coefficients of 83% for the oxygen and 94% for the B. longum. Linear and quadratic effects of the glucose oxidase were reported for the oxygen model, whereas for the B. longum count model an influence of the glucose oxidase at the linear level was observed followed by the quadratic influence of glucose and quadratic effect of glucose oxidase. The desirability function indicated that 62.32 ppm of glucose oxidase and 4.35 ppm of glucose was the best combination of these components for optimization of probiotic yogurt processing. An additional validation experiment was performed and results showed acceptable error between the predicted and experimental results.
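The desirability-function step described here can be sketched as a grid search over two fitted response surfaces; the model coefficients and desirability bounds below are invented for illustration and are not the study's fitted values:

```python
# Hedged sketch of the desirability-function approach: minimize dissolved
# oxygen while maximizing the B. longum count. All coefficients and bounds
# are illustrative placeholders, not the study's fitted models.
import numpy as np

def d_minimize(y, low, high):
    """Desirability for a response to be minimized (1 at/below low)."""
    return np.clip((high - y) / (high - low), 0.0, 1.0)

def d_maximize(y, low, high):
    """Desirability for a response to be maximized (1 at/above high)."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def oxygen(g, gox):
    # Linear + quadratic glucose oxidase terms only, mirroring the abstract's
    # statement that only GOX affected the oxygen model (coefficients invented).
    return 8.0 - 0.08 * gox + 0.0005 * gox ** 2

def b_longum(g, gox):
    # Linear GOX, quadratic glucose, and quadratic GOX terms (invented).
    return 7.0 + 0.02 * gox - 0.00012 * gox ** 2 + 0.05 * g - 0.006 * g ** 2

glucose = np.linspace(0.0, 10.0, 101)     # ppm, assumed range
gox = np.linspace(0.0, 80.0, 161)         # ppm, assumed range
G, X = np.meshgrid(glucose, gox)
# Overall desirability: geometric mean of the individual desirabilities.
D = np.sqrt(d_minimize(oxygen(G, X), 2.0, 8.0) *
            d_maximize(b_longum(G, X), 7.0, 8.5))
i, j = np.unravel_index(np.argmax(D), D.shape)
best = (G[i, j], X[i, j])   # (glucose ppm, GOX ppm) maximizing D
```

The geometric mean makes the overall desirability zero whenever either response is unacceptable, which is what forces the optimum to balance oxygen removal against cell survival.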

Relevance: 30.00%

Abstract:

One consistent functional imaging finding from patients with major depression has been abnormality of the anterior cingulate cortex (ACC). Hypoperfusion has been most commonly reported, but some studies suggest relative hyperperfusion is associated with response to somatic treatments. Despite these indications of the possible importance of the ACC in depression, there have been relatively few cognitive studies of ACC function in patients with major depression. The present study employed a series of reaction time (RT) tasks involving response selection with melancholic and nonmelancholic depressed patients, as well as age-matched controls. Fifteen patients with unipolar major depression (7 melancholic, 8 nonmelancholic) and 8 healthy age-matched controls performed a series of response selection tasks (choice RT, spatial Stroop, spatial stimulus-response compatibility (SRC), and a combined Stroop + SRC condition). Reaction time and error data were collected. Melancholic patients were significantly slower than controls on all tasks but were slower than nonmelancholic patients only on the Stroop and Stroop + SRC conditions. Nonmelancholic patients did not differ from the control group on any task. The Stroop task seems crucial in differentiating the two depressive groups, as they did not differ on the choice RT or SRC tasks. This may reflect differential task demands: the SRC involved symbolic manipulation that might engage the dorsal ACC and dorsolateral prefrontal cortex (DLPFC) to a greater extent than the primarily inhibitory Stroop task, which may engage the ventral ACC and orbitofrontal cortex (OFC). This might suggest that the melancholic group showed a greater ventral ACC-OFC deficit than the nonmelancholic group, while both groups showed a similar dorsal ACC-DLPFC deficit.

Relevance: 30.00%

Abstract:

Parenteral anticoagulation is a cornerstone in the management of venous and arterial thrombosis. Unfractionated heparin has a wide dose/response relationship, requiring frequent and troublesome laboratorial follow-up. Because of all these factors, low-molecular-weight heparin use has been increasing. Inadequate dosage has been pointed out as a potential problem because the use of subjectively estimated weight instead of real measured weight is common practice in the emergency department (ED). To evaluate the impact of inadequate weight estimation on enoxaparin dosage, we investigated the adequacy of anticoagulation of patients in a tertiary ED where subjective weight estimation is common practice. We obtained the estimated, informed, and measured weight of 28 patients in need of parenteral anticoagulation. Basal and steady-state (after the second subcutaneous shot of enoxaparin) anti-Xa activity was obtained as a measure of adequate anticoagulation. The patients were divided into 2 groups according to anticoagulation adequacy. Of the 28 patients enrolled, 75% (group 1, n = 21) received at least 0.9 mg/kg per dose BID and 25% (group 2, n = 7) received less than 0.9 mg/kg per dose BID of enoxaparin. Only 4 (14.3%) of all patients had anti-Xa activity less than the inferior limit of the therapeutic range (<0.5 UI/mL), all of them from group 2. In conclusion, when weight estimation was used to determine the enoxaparin dosage, 25% of the patients were inadequately anticoagulated (anti-Xa activity <0.5 UI/mL) during the initial crucial phase of treatment. (C) 2011 Elsevier Inc. All rights reserved.
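The dosing-adequacy check at issue reduces to simple arithmetic; a sketch, where the 0.9 mg/kg cutoff mirrors the grouping used in the study and the weights and function name are illustrative:

```python
# Hedged sketch: does a dose computed from an estimated weight still deliver
# at least 0.9 mg/kg of the patient's measured weight? The 1 mg/kg BID
# target is the usual therapeutic enoxaparin regimen; weights are invented.
def enoxaparin_dose_ok(estimated_kg, measured_kg,
                       target_mg_per_kg=1.0, threshold=0.9):
    dose_mg = target_mg_per_kg * estimated_kg   # dose drawn from the estimate
    return dose_mg / measured_kg >= threshold   # adequacy vs measured weight

# A 10 kg underestimate on a 90 kg patient falls below the 0.9 mg/kg cutoff:
underdosed = not enoxaparin_dose_ok(80.0, 90.0)
```

This makes the study's point concrete: even a modest weight underestimate moves a patient into the under-anticoagulated group.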

Relevance: 30.00%

Abstract:

Purpose - The study evaluates the pre- and post-training lesion localisation ability of a group of novice observers. Parallels are drawn with the performance of inexperienced radiographers taking part in preliminary clinical evaluation (PCE) and ‘red-dot’ systems, operating within radiography practice. Materials and methods - Thirty-four novice observers searched 92 images for simulated lesions. Pre-training and post-training evaluations were completed following the free-response receiver operating characteristic (FROC) method. Training consisted of observer performance methodology, the characteristics of the simulated lesions, and information on lesion frequency. Jackknife alternative FROC (JAFROC) and highest-rating inferred ROC analyses were performed to evaluate performance differences on lesion-based and case-based decisions. The significance level of the test was set at 0.05 to control the probability of Type I error. Results - JAFROC analysis (F(3,33) = 26.34, p < 0.0001) and highest-rating inferred ROC analysis (F(3,33) = 10.65, p = 0.0026) revealed a statistically significant difference in lesion detection performance. The JAFROC figure-of-merit was 0.563 (95% CI 0.512, 0.614) pre-training and 0.677 (95% CI 0.639, 0.715) post-training. The highest-rating inferred ROC figure-of-merit was 0.728 (95% CI 0.701, 0.755) pre-training and 0.772 (95% CI 0.750, 0.793) post-training. Conclusions - This study has demonstrated that novice observer performance can improve significantly. This study design may have relevance in the assessment of inexperienced radiographers taking part in PCE or a commenting scheme for trauma.
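A JAFROC-style figure of merit is essentially a Wilcoxon statistic comparing lesion ratings with the highest false-positive rating on each normal case; a simplified sketch (not the JAFROC software's exact estimator, and the ratings are illustrative):

```python
# Hedged sketch of a Wilcoxon-style JAFROC figure of merit: the probability
# that a lesion's rating exceeds the highest false-positive rating on a
# normal case, with ties counting one half.
def jafroc_fom(lesion_ratings, normal_case_max_ratings):
    total = 0.0
    for x in lesion_ratings:
        for y in normal_case_max_ratings:
            if x > y:
                total += 1.0      # lesion out-rated the false positive
            elif x == y:
                total += 0.5      # tie counts half
    return total / (len(lesion_ratings) * len(normal_case_max_ratings))

# Three lesion ratings against two normal-case maxima:
fom = jafroc_fom([3, 4, 5], [2, 4])
```

A figure of merit of 0.5 corresponds to chance performance, so the pre- and post-training values of 0.563 and 0.677 quoted above can be read directly on this scale.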