113 results for inflation bias
Abstract:
OBJECTIVE: To examine whether the association of inadequate or unclear allocation concealment, and of lack of blinding, with biased estimates of intervention effects varies with the nature of the intervention or outcome.
DESIGN: Combined analysis of data from three meta-epidemiological studies based on collections of meta-analyses.
DATA SOURCES: 146 meta-analyses including 1346 trials, covering a wide range of interventions and outcomes.
MAIN OUTCOME MEASURES: Ratios of odds ratios quantifying the degree of bias associated with inadequate or unclear allocation concealment, and with lack of blinding, for trials with different types of intervention and outcome. A ratio of odds ratios <1 implies that inadequately concealed or non-blinded trials exaggerate intervention effect estimates.
RESULTS: In trials with subjective outcomes, effect estimates were exaggerated when there was inadequate or unclear allocation concealment (ratio of odds ratios 0.69, 95% CI 0.59 to 0.82) or lack of blinding (0.75, 0.61 to 0.93). In contrast, there was little evidence of bias in trials with objective outcomes: ratios of odds ratios 0.91 (0.80 to 1.03) for inadequate or unclear allocation concealment and 1.01 (0.92 to 1.10) for lack of blinding. There was little evidence of a difference between trials of drug and non-drug interventions. Except for trials with all-cause mortality as the outcome, the magnitude of bias varied between meta-analyses.
CONCLUSIONS: The average bias associated with defects in the conduct of randomised trials varies with the type of outcome. Systematic reviewers should routinely assess the risk of bias in trial results, and should report meta-analyses restricted to trials at low risk of bias either as the primary analysis or in conjunction with less restrictive analyses.
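To make the "ratio of odds ratios" metric concrete, here is a minimal illustrative sketch in Python. The 2x2 trial counts below are entirely hypothetical (not taken from the abstract above); the point is only how a ratio of odds ratios below 1 reflects an exaggerated apparent benefit in the inadequately concealed trial.

```python
def odds_ratio(events_trt, total_trt, events_ctl, total_ctl):
    """Odds of an event (here, an unfavourable outcome) in the
    treatment arm relative to the control arm."""
    odds_trt = events_trt / (total_trt - events_trt)
    odds_ctl = events_ctl / (total_ctl - events_ctl)
    return odds_trt / odds_ctl

# Hypothetical counts: one inadequately concealed trial and one
# adequately concealed trial of the same intervention.
or_inadequate = odds_ratio(20, 100, 40, 100)  # (20/80) / (40/60) = 0.375
or_adequate = odds_ratio(30, 100, 40, 100)    # (30/70) / (40/60) ~ 0.643

# Ratio of odds ratios < 1: the inadequately concealed trial reports
# a more extreme (exaggerated) intervention effect.
ror = or_inadequate / or_adequate
print(round(ror, 3))  # 0.583
```

A meta-epidemiological analysis averages such ratios across many meta-analyses; the sketch shows the arithmetic for a single pair of trials.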
Abstract:
BACKGROUND: The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the conduct of a randomised controlled trial. Study publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias has received less attention.
METHODOLOGY/PRINCIPAL FINDINGS: We review and summarise the evidence from a series of cohort studies that assessed study publication bias and outcome reporting bias in randomised controlled trials. Sixteen studies were eligible, of which only two followed the cohort all the way from protocol approval to information on publication of outcomes. Eleven of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications with protocols, we found that 40-62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis because of the differences between studies.
CONCLUSIONS: Recent work provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication: studies that report positive or significant results are more likely to be published, and statistically significant outcomes have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of both types of bias, and efforts should be concentrated on improving the reporting of trials.
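The reporting odds ratios quoted above (2.2 to 4.7) compare the odds of full reporting between significant and non-significant outcomes. A minimal sketch of that arithmetic, using made-up cohort counts rather than data from the reviewed studies:

```python
def reporting_odds_ratio(reported_sig, total_sig, reported_nonsig, total_nonsig):
    """Odds that a statistically significant outcome is fully reported,
    relative to the odds for a non-significant outcome."""
    odds_sig = reported_sig / (total_sig - reported_sig)
    odds_nonsig = reported_nonsig / (total_nonsig - reported_nonsig)
    return odds_sig / odds_nonsig

# Hypothetical cohort: 80 of 100 significant outcomes fully reported,
# versus 50 of 100 non-significant outcomes.
or_reporting = reporting_odds_ratio(80, 100, 50, 100)
print(round(or_reporting, 1))  # 4.0
```

An odds ratio of 1 would mean reporting is unrelated to statistical significance; values well above 1, as in this sketch, are the signature of outcome reporting bias.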
Abstract:
OBJECTIVES: The STAndards for Reporting studies of Diagnostic accuracy (STARD), aimed at investigators and editors, and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS), aimed at reviewers and readers, offer guidelines for the quality and reporting of test accuracy studies. These guidelines address, and propose some solutions to, two major threats to validity: spectrum bias and test review bias.
STUDY DESIGN AND SETTING: Using a clinical example, we demonstrate that these solutions fail, and we propose an alternative that addresses both sources of bias concomitantly. We also derive formulas that establish the generality of our arguments.
RESULTS: A logical extension of our ideas is to extend STARD item 23 by adding a requirement for multivariable statistical adjustment using information collected in QUADAS items 1, 2, and 12 and STARD items 3-5, 11, 15, and 18.
CONCLUSION: We recommend reporting not only the variation of diagnostic accuracy across subgroups (STARD item 23) but also the effects of the multivariable adjustments on test performance. We also suggest supplementing the QUADAS with an item addressing the appropriateness of statistical methods, in particular whether multivariable adjustments have been included in the analysis.