850 results for relative risk
Abstract:
Background: Medication-related problems are common in the growing population of older adults and inappropriate prescribing is a preventable risk factor. Explicit criteria such as the Beers criteria provide a valid instrument for describing the rate of inappropriate medication (IM) prescriptions among older adults. Objective: To reduce IM prescriptions based on explicit Beers criteria using a nurse-led intervention in a nursing-home (NH) setting. Study Design: The pre/post-design included IM assessment at study start (pre-intervention), a 4-month intervention period, IM assessment after the intervention period (post-intervention) and a further IM assessment at 1-year follow-up. Setting: 204-bed inpatient NH in Bern, Switzerland. Participants: NH residents aged ≥60 years. Intervention: The intervention comprised four key elements: (i) adaptation of the Beers criteria to the Swiss setting; (ii) IM identification; (iii) IM discontinuation; and (iv) staff training. Main Outcome Measure: IM prescription at study start, after the 4-month intervention period and at 1-year follow-up. Results: The mean±SD resident age was 80.3±8.8 years. Residents were prescribed a mean±SD 7.8±4.0 medications. The prescription rate of IMs decreased from 14.5% pre-intervention to 2.8% post-intervention (relative risk [RR] = 0.2; 95% CI 0.06, 0.5). The risk of IM prescription increased, though not statistically significantly, in the 1-year follow-up period compared with post-intervention (RR = 1.6; 95% CI 0.5, 6.1). Conclusions: This intervention to reduce IM prescriptions based on explicit Beers criteria was feasible, easy to implement in an NH setting, and resulted in a substantial decrease in IMs. These results underscore the importance of involving nursing staff in the medication prescription process in a long-term care setting.
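The headline figure in this abstract is simple arithmetic: 2.8% / 14.5% ≈ 0.2. As a minimal sketch of how such a relative risk and its 95% CI are commonly derived (Katz log method), the function below uses illustrative counts chosen only to mirror the reported rates; the abstract does not give the underlying denominators, and the function name is hypothetical:

```python
import math

def relative_risk(events_exp, n_exp, events_ctl, n_ctl):
    # Relative risk of the first group vs the second, with an
    # approximate 95% CI on the log scale (Katz log method).
    p_exp = events_exp / n_exp
    p_ctl = events_ctl / n_ctl
    rr = p_exp / p_ctl
    # Standard error of log(RR) for two independent proportions
    se = math.sqrt(1 / events_exp - 1 / n_exp + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical denominators of 1000 residents each period,
# matching the reported 2.8% post vs 14.5% pre rates.
rr, lo, hi = relative_risk(28, 1000, 145, 1000)  # rr ≈ 0.193
```

Note that the interval is asymmetric around the point estimate because it is computed on the log scale, which is why published RR intervals like 0.06–0.5 look lopsided.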
Abstract:
Background There is concern that non-inferiority trials might be deliberately designed to conceal that a new treatment is less effective than a standard treatment. In order to test this hypothesis we performed a meta-analysis of non-inferiority trials to assess the average effect of experimental treatments compared with standard treatments. Methods One hundred and seventy non-inferiority treatment trials published in 121 core clinical journals were included. The trials were identified through a search of PubMed (1991 to 20 February 2009). Combined relative risk (RR) from meta-analysis comparing experimental with standard treatments was the main outcome measure. Results The 170 trials contributed a total of 175 independent comparisons of experimental with standard treatments. The combined RR for all 175 comparisons was 0.994 [95% confidence interval (CI) 0.978–1.010] using a random-effects model and 1.002 (95% CI 0.996–1.008) using a fixed-effects model. Of the 175 comparisons, experimental treatment was considered to be non-inferior in 130 (74%). The combined RR for these 130 comparisons was 0.995 (95% CI 0.983–1.006) and the point estimate favoured the experimental treatment in 58% (n = 76) and standard treatment in 42% (n = 54). The median non-inferiority margin (RR) pre-specified by trialists was 1.31 [inter-quartile range (IQR) 1.18–1.59]. Conclusion In this meta-analysis of non-inferiority trials the average RR comparing experimental with standard treatments was close to 1. The experimental treatments that gain a verdict of non-inferiority in published trials do not appear to be systematically less effective than the standard treatments. Importantly, publication bias and bias in the design and reporting of the studies cannot be ruled out and may have skewed the study results in favour of the experimental treatments. Further studies are required to examine the importance of such bias.
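The fixed-effects and random-effects combined RRs quoted above typically come from inverse-variance pooling of per-trial log relative risks. The sketch below is didactic, not the authors' code: `pool_log_rr` is a hypothetical helper, and the random-effects arm assumes the common DerSimonian-Laird estimator of between-trial variance, which the abstract does not specify:

```python
def pool_log_rr(log_rrs, ses):
    # Inverse-variance pooling of per-trial log relative risks.
    # Returns (fixed-effect estimate, random-effects estimate),
    # the latter via the DerSimonian-Laird tau^2 estimator.
    # Requires at least two trials.
    w = [1.0 / s ** 2 for s in ses]
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    # Cochran's Q heterogeneity statistic and tau^2
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights add tau^2 to each trial's variance
    w_re = [1.0 / (s ** 2 + tau2) for s in ses]
    rand = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    return fixed, rand
```

With heterogeneous trials the random-effects weights are flatter (small trials count for relatively more), which is why the two models in the abstract give slightly different combined RRs and CI widths.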
Abstract:
Background During acute coronary syndromes patients perceive intense distress. We hypothesized that retrospective ratings of patients' MI-related fear of dying, helplessness, or pain, all assessed within the first year post-MI, are associated with poor cardiovascular outcome. Methods We studied 304 patients (61 ± 11 years, 85% men) who, a median of 52 days (range 12-365 days) after the index MI, retrospectively rated the level of distress in the form of fear of dying, helplessness, or pain they had perceived at the time of MI on a numeric scale ranging from 0 ("no distress") to 10 ("extreme distress"). Non-fatal hospital readmissions due to cardiovascular disease (CVD) related events (i.e., recurrent MI, elective and non-elective stent implantation, bypass surgery, pacemaker implantation, cerebrovascular incidents) were assessed at follow-up. The relative CVD event risk was computed for a (clinically meaningful) 2-point increase of distress using Cox proportional hazard models. Results During a median follow-up of 32 months (range 16-45), 45 patients (14.8%) experienced a CVD-related event requiring hospital readmission. Greater fear of dying (HR 1.21, 95% CI 1.03-1.43), helplessness (HR 1.22, 95% CI 1.04-1.44), or pain (HR 1.27, 95% CI 1.02-1.58) were significantly associated with an increased CVD risk without adjustment for covariates. A similarly increased relative risk emerged in patients with an unscheduled CVD-related hospital readmission, i.e., when excluding patients with elective stenting (fear of dying: HR 1.26, 95% CI 1.05-1.51; helplessness: HR 1.26, 95% CI 1.05-1.52; pain: HR 1.30, 95% CI 1.01-1.66). In the fully-adjusted models controlling for age, the number of diseased coronary vessels, hypertension, and smoking, HRs were 1.24 (95% CI 1.04-1.46) for fear of dying, 1.26 (95% CI 1.06-1.50) for helplessness, and 1.26 (95% CI 1.01-1.57) for pain.
Conclusions Retrospectively perceived MI-related distress in the form of fear of dying, helplessness, or pain was associated with non-fatal cardiovascular outcome independent of other important prognostic factors.
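The hazard ratios above are expressed per 2-point increase of the 0-10 distress rating. In a Cox proportional hazards model, the HR for a delta-unit increase of a continuous covariate is exp(delta·beta), so a per-unit HR (and, identically, its CI limits) converts to a per-delta HR by exponentiation. A tiny sketch with illustrative numbers, not the study's data:

```python
def rescale_hr(hr_per_unit, delta):
    # HR for a delta-unit increase of a continuous covariate:
    # exp(delta * beta) = (per-unit HR) ** delta.
    return hr_per_unit ** delta

# e.g. a hypothetical per-unit HR of 1.10 corresponds to
# 1.10 ** 2 = 1.21 per 2-point increase of the rating scale.
hr_2pt = rescale_hr(1.10, 2)
```

The same power transform applies to the confidence limits, which is how per-2-point intervals like those reported here are usually obtained from a per-unit model fit.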
Abstract:
Background. Measles control may be more challenging in regions with a high prevalence of HIV infection. HIV-infected children are likely to derive particular benefit from measles vaccines because of an increased risk of severe illness. However, HIV infection can impair vaccine effectiveness and may increase the risk of serious adverse events after receipt of live vaccines. We conducted a systematic review to assess the safety and immunogenicity of measles vaccine in HIV-infected children. Methods. The authors searched 8 databases through 12 February 2009 and reference lists. Study selection and data extraction were conducted in duplicate. Meta-analysis was conducted when appropriate. Results. Thirty-nine studies published from 1987 through 2008 were included. In 19 studies with information about measles vaccine safety, more than half reported no serious adverse events. Among HIV-infected children, 59% (95% confidence interval [CI], 46%–71%) were seropositive after receiving standard-titer measles vaccine at 6 months (1 study), comparable to the proportion of seropositive HIV-infected children vaccinated at 9 (8 studies) and 12 months (10 studies). Among HIV-exposed but uninfected and HIV-unexposed children, the proportion of seropositive children increased with increasing age at vaccination. Fewer HIV-infected children were protected after vaccination at 12 months than HIV-exposed but uninfected children (relative risk, 0.61; 95% CI, 0.50–0.73). Conclusions. Measles vaccines appear to be safe in HIV-infected children, but the evidence is limited. When the burden of measles is high, measles vaccination at 6 months of age is likely to benefit children of HIV-infected women, regardless of the child's HIV infection status.
Abstract:
BACKGROUND Current guidelines give recommendations for preferred combination antiretroviral therapy (cART). We investigated factors influencing the choice of initial cART in clinical practice and its outcome. METHODS We analyzed treatment-naive adults with human immunodeficiency virus (HIV) infection participating in the Swiss HIV Cohort Study and starting cART from January 1, 2005, through December 31, 2009. The primary end point was the choice of the initial antiretroviral regimen. Secondary end points were virologic suppression, the increase in CD4 cell counts from baseline, and treatment modification within 12 months after starting treatment. RESULTS A total of 1957 patients were analyzed. Tenofovir-emtricitabine (TDF-FTC)-efavirenz was the most frequently prescribed cART (29.9%), followed by TDF-FTC-lopinavir/r (16.9%), TDF-FTC-atazanavir/r (12.9%), zidovudine-lamivudine (ZDV-3TC)-lopinavir/r (12.8%), and abacavir/lamivudine (ABC-3TC)-efavirenz (5.7%). Differences in prescription were noted among different Swiss HIV Cohort Study sites (P < .001). In multivariate analysis, compared with TDF-FTC-efavirenz, starting TDF-FTC-lopinavir/r was associated with prior AIDS (relative risk ratio, 2.78; 95% CI, 1.78-4.35), HIV-RNA greater than 100 000 copies/mL (1.53; 1.07-2.18), and CD4 greater than 350 cells/μL (1.67; 1.04-2.70); TDF-FTC-atazanavir/r with a depressive disorder (1.77; 1.04-3.01), HIV-RNA greater than 100 000 copies/mL (1.54; 1.05-2.25), and an opiate substitution program (2.76; 1.09-7.00); and ZDV-3TC-lopinavir/r with female sex (3.89; 2.39-6.31) and CD4 cell counts greater than 350 cells/μL (4.50; 2.58-7.86). At 12 months, 1715 patients (87.6%) achieved viral load less than 50 copies/mL and CD4 cell counts increased by a median (interquartile range) of 173 (89-269) cells/μL. Virologic suppression was more likely with TDF-FTC-efavirenz, and CD4 increase was higher with ZDV-3TC-lopinavir/r. 
No differences in outcome were observed among Swiss HIV Cohort Study sites. CONCLUSIONS Large differences in prescription but not in outcome were observed among study sites. A trend toward individualized cART was noted, suggesting that initial cART is significantly influenced by physician preference and patient characteristics. Our study highlights the need for evidence-based data for determining the best initial regimen for different HIV-infected persons.
Abstract:
According to the new guidelines on dyslipidemias of the European Society of Cardiology (ESC) and the European Atherosclerosis Society (EAS), lifestyle changes should be considered before anything else in patients with dyslipidemia. The guidelines recommend the SCORE system (Systematic Coronary Risk Estimation) to classify cardiovascular risk into four categories (very high, high, medium or low risk) as the basis for treatment decisions. HDL cholesterol, which is inversely related to cardiovascular risk, is included in the total risk estimation. In addition to calculating absolute risk, the guidelines contain a table with relative risk, which could be useful in young patients who have a low absolute risk but a high risk compared with individuals of the same age group.
Abstract:
Falsely high ankle-brachial index (ABI) values are associated with an adverse clinical outcome in diabetes mellitus. The aim of the present study was to verify whether such an association also exists in patients with chronic critical limb ischemia (CLI) with and without diabetes. A total of 229 patients (74 ± 11 years, 136 males, 244 limbs with CLI) were followed for 262 ± 136 days. Incompressibility of lower limb arteries (ABI > 1.3) was found in 45 patients, and was associated with diabetes mellitus (p = 0.01) and renal insufficiency (p = 0.035). Limbs with incompressible ankle arteries had a higher rate of major amputation (p = 0.002 by log-rank). This association was confirmed by multivariate Cox regression analysis (relative risk [RR] 2.67; 95% CI 1.27-5.64, p = 0.01). The relationship between ABI > 1.3 and amputation rate persisted after subjects with diabetes and renal insufficiency had been removed from the analysis (RR 3.85; 95% CI 1.25-11.79, p = 0.018). Dividing limbs with measurable ankle pressure according to tertiles of ABI, the group in the second tertile (0.323 ≤ ABI ≤ 0.469) had the lowest amputation rate (4/64, 6.2%), and a U-shaped association between the occurrence of major amputation and ABI was evident. No association was found between ABI and mortality. In conclusion, this study demonstrates that falsely high ABI is an independent predictor of major amputation in patients with CLI.
Abstract:
BACKGROUND: Rotaviruses (RV) are the most common cause of dehydrating gastroenteritis requiring hospitalisation in children <5 years of age. A new generation of safe and effective RV vaccines is available. Accurate data describing the current burden of RV disease in the community are needed to devise appropriate strategies for vaccine usage. METHODS: Retrospective, population-based analysis of RV hospitalisations in children <5 years of age during a 5-year period (1999-2003) in a mixed urban and rural area inhabited by 12% of the Swiss population. RESULTS: Of 406 evaluable cases, 328 were community-acquired RV infections in children <5 years of age. RV accounted for 38% of all hospitalisations for gastroenteritis. The overall hospitalisation incidence in the <5-year-olds was 1.5/1000 child-years (peak incidence, 2.6/1000 child-years in children aged 13-24 months). The incidence of community-acquired RV hospitalisations was significantly greater in children of non-Swiss origin (3.0 vs. 1.1/1000 child-years, relative risk 2.7; 95% CI 2.2-3.4), who were younger, but tended to be less severely dehydrated on admission than Swiss children. In comparison with children from urban areas, RV hospitalisation incidence was significantly lower among those residing in the remote mountain area (0.71 vs. 1.71/1000 child-years, relative risk 2.2, 95% CI 1.6-3.1). CONCLUSION: Population-based RV hospitalisation incidence was low in comparison with other European countries. Significantly greater hospitalisation rates among children living in urban areas and those from non-Swiss families indicate that factors other than the severity of RV-induced dehydration are important driving forces of hospital admission.
Abstract:
INTRODUCTION: Whereas most studies focus on laboratory and clinical research, little is known about the causes of death and risk factors for death in critically ill patients. METHODS: Three thousand seven hundred patients admitted to an adult intensive care unit (ICU) were prospectively evaluated. Study endpoints were to evaluate causes of death and risk factors for death in the ICU, in the hospital after discharge from ICU, and within one year after ICU admission. Causes of death in the ICU were defined according to standard ICU practice, whereas deaths in the hospital and at one year were defined and grouped according to ICD-10 (International Statistical Classification of Diseases and Related Health Problems) codes. Stepwise logistic regression analyses were separately calculated to identify independent risk factors for death during the given time periods. RESULTS: Acute, refractory multiple organ dysfunction syndrome was the most frequent cause of death in the ICU (47%), and central nervous system failure (relative risk [RR] 16.07, 95% confidence interval [CI] 8.3 to 31.4, p < 0.001) and cardiovascular failure (RR 11.83, 95% CI 5.2 to 27.1, p < 0.001) were the two most important risk factors for death in the ICU. Malignant tumour disease and exacerbation of chronic cardiovascular disease were the most frequent causes of death in the hospital (31.3% and 19.4%, respectively) and at one year (33.2% and 16.1%, respectively). CONCLUSION: In this primarily surgical critically ill patient population, acute or chronic multiple organ dysfunction syndrome prevailed over single-organ failure or unexpected cardiac arrest as a cause of death in the ICU. Malignant tumour disease and chronic cardiovascular disease were the most important causes of death after ICU discharge.
Abstract:
OBJECTIVE: Many osteoporosis patients have low 25-hydroxyvitamin D (25OHD) and do not take recommended vitamin D amounts. A single tablet containing both cholecalciferol (vitamin D3) and alendronate would improve vitamin D status concurrently, with a drug shown to reduce fracture risk. This study assessed the efficacy, safety, and tolerability of a once-weekly tablet containing alendronate 70 mg and cholecalciferol 70 μg (2800 IU) (ALN + D) versus alendronate 70 mg alone (ALN). METHODS: This 15-week, randomized, double-blind, multi-center, active-controlled study was conducted during a season when 25OHD levels are declining, and patients were required to avoid sunlight and vitamin D supplements for the duration of the study. Men (n = 35) and postmenopausal women (n = 682) with osteoporosis and 25OHD ≥ 9 ng/mL were randomized to ALN + D (n = 360) or ALN (n = 357). MAIN OUTCOME MEASURES: Serum 25OHD, parathyroid hormone, bone-specific alkaline phosphatase (BSAP), and urinary N-telopeptide collagen cross-links (NTX). RESULTS: Serum 25OHD declined from 22.2 to 18.6 ng/mL with ALN (adjusted mean change = -3.4; 95% confidence interval [CI]: -4.0 to -2.8), and increased from 22.1 to 23.1 ng/mL with ALN + D (adjusted mean change = 1.2; 95% CI: 0.6 to 1.8). At 15 weeks, adjusted mean 25OHD was 26% higher (p < 0.001, ALN + D versus ALN), the adjusted relative risk (RR) of 25OHD < 15 ng/mL (primary endpoint) was reduced by 64% (incidence 11% vs. 32%; RR = 0.36; 95% CI: 0.27 to 0.48 [p < 0.001]), and the RR of 25OHD < 9 ng/mL (a secondary endpoint) was reduced by 91% (1% vs. 13%; RR = 0.09; 95% CI: 0.03 to 0.23 [p < 0.001]). Antiresorptive efficacy was unaltered, as measured by reduction in bone turnover (BSAP and NTX).
CONCLUSION: In osteoporosis patients who avoided sunlight and vitamin D supplements, this once-weekly tablet containing alendronate and cholecalciferol, compared with alendronate alone, provided equivalent antiresorptive efficacy, reduced the risk of low serum 25OHD, improved vitamin D status over 15 weeks, and was not associated with hypercalcemia, hypercalciuria, or other adverse findings.
Abstract:
CONTEXT: Compared with bare metal stents, sirolimus-eluting and paclitaxel-eluting stents have been shown to markedly improve angiographic and clinical outcomes after percutaneous coronary revascularization, but their performance in the treatment of de novo coronary lesions has not been compared in a prospective multicenter study. OBJECTIVE: To compare the safety and efficacy of sirolimus-eluting vs paclitaxel-eluting coronary stents. DESIGN: Prospective, randomized comparative trial (the REALITY trial) conducted between August 2003 and February 2004, with angiographic follow-up at 8 months and clinical follow-up at 12 months. SETTING: Ninety hospitals in Europe, Latin America, and Asia. PATIENTS: A total of 1386 patients (mean age, 62.6 years; 73.1% men; 28.0% with diabetes) with angina pectoris and 1 or 2 de novo lesions (2.25-3.00 mm in diameter) in native coronary arteries. INTERVENTION: Patients were randomly assigned in a 1:1 ratio to receive a sirolimus-eluting stent (n = 701) or a paclitaxel-eluting stent (n = 685). MAIN OUTCOME MEASURES: The primary end point was in-lesion binary restenosis (presence of a more than 50% luminal-diameter stenosis) at 8 months. Secondary end points included 1-year rates of target lesion and vessel revascularization and a composite end point of cardiac death, Q-wave or non-Q-wave myocardial infarction, coronary artery bypass graft surgery, or repeat target lesion revascularization. RESULTS: In-lesion binary restenosis at 8 months occurred in 86 patients (9.6%) with a sirolimus-eluting stent vs 95 (11.1%) with a paclitaxel-eluting stent (relative risk [RR], 0.84; 95% confidence interval [CI], 0.61-1.17; P = .31). 
For sirolimus- vs paclitaxel-eluting stents, respectively, the mean (SD) in-stent late loss was 0.09 (0.43) mm vs 0.31 (0.44) mm (difference, -0.22 mm; 95% CI, -0.26 to -0.18 mm; P<.001), mean (SD) in-stent diameter stenosis was 23.1% (16.6%) vs 26.7% (15.8%) (difference, -3.60%; 95% CI, -5.12% to -2.08%; P<.001), and the number of major adverse cardiac events at 1 year was 73 (10.7%) vs 76 (11.4%) (RR, 0.94; 95% CI, 0.69-1.27; P = .73). CONCLUSION: In this trial comparing sirolimus- and paclitaxel-eluting coronary stents, there were no differences in the rates of binary restenosis or major adverse cardiac events. CLINICAL TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT00235092.
Abstract:
Background. Subjective memory complaints are common after coronary artery bypass grafting (CABG), but previous studies have concluded that such symptoms are more closely associated with depressed mood than objective cognitive dysfunction. We compared the incidence of self-reported memory symptoms at 3 and 12 months after CABG with that of a control group of patients with comparable risk factors for coronary artery disease but without surgery. Methods. Patients undergoing CABG (n = 140) and a demographically similar nonsurgical control group with coronary artery disease (n = 92) were followed prospectively at 3 and 12 months. At each follow-up time, participants were asked about changes since the previous evaluation in areas of memory, calculations, reading, and personality. A Functional Status Questionnaire (FSQ) and self-report measure of symptoms of depression (CES-D) were also completed. Results. The frequency of self-reported changes in memory, personality, and reading at 3 months was significantly higher among CABG patients than among nonsurgical controls. By contrast, there were no differences in the frequency of self-reported symptoms relating to calculations or overall rating of functional status. After adjusting for a measure of depression (CES-D rating score), the risk for self-reported memory changes remained nearly 5 times higher among the CABG patients than control subjects. The relative risk of developing new self-reported memory symptoms between 3 and 12 months was 2.5 for CABG patients versus nonsurgical controls (CI 1.24–5.02), and the overall prevalence of memory symptoms at 12 months was also higher among CABG patients (39%) than controls (14%). Conclusions. The frequency of self-reported memory symptoms 3 and 12 months after baseline is significantly higher among CABG patients than control patients with comparable risk factors for coronary and cerebrovascular disease.
These differences could not be accounted for by symptoms of depression. The self-reported cognitive symptoms appear to be relatively specific for memory, and may reflect aspects of memory functioning that are not captured by traditional measures of new verbal learning and memory. The etiology of these self-reported memory symptoms remains unclear, but our findings, as well as those of others, may implicate factors other than cardiopulmonary bypass itself.
Abstract:
OBJECTIVE: To compare the effectiveness and safety of intraarticular high-molecular hylan with standard preparations of hyaluronic acids in osteoarthritis of the knee. METHODS: We performed a systematic review and meta-analysis of randomized controlled trials comparing hylan with a hyaluronic acid in patients with knee osteoarthritis. Trials were identified by systematic searches of CENTRAL, MEDLINE, EMBASE, CINAHL, the Food and Drug Administration, and Science Citation Index supplemented by hand searches of conference proceedings and reference lists (last update November 2006). Literature screening and data extraction were performed in duplicate. Effect sizes were calculated from differences in means of pain-related outcomes between treatment and control groups at the end of the trial, divided by the pooled standard deviation. Trials were combined using random-effects meta-analysis. RESULTS: Thirteen trials with a pooled total of 2,085 patients contributed to the meta-analysis. The pooled effect size was -0.27 (95% confidence interval [95% CI] -0.55, 0.01), favoring hylan, but between-trial heterogeneity was high (I² = 88%). Trials with blinded patients, adequate concealment of allocation, and an intent-to-treat analysis had pooled effect sizes near null. The meta-analyses on safety revealed an increased risk associated with hylan for any local adverse events (relative risk [RR] 1.91; 95% CI 1.04, 3.49; I² = 28%) and for flares (RR 2.04; 95% CI 1.18, 3.53; I² = 0%). CONCLUSION: Given the likely lack of a superior effectiveness of hylan over hyaluronic acids and the increased risk of local adverse events associated with hylan, we discourage the use of intraarticular hylan in patients with knee osteoarthritis in clinical research or practice.
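The effect-size definition in the Methods above (difference in means of pain-related outcomes divided by the pooled standard deviation) can be sketched directly. `effect_size` is a hypothetical name and the arguments are illustrative summary statistics, not trial data:

```python
import math

def effect_size(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    # Standardized mean difference at end of trial:
    # (treatment mean - control mean) / pooled SD.
    # Negative values favor treatment when lower scores mean less pain.
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Illustrative end-of-trial pain scores on a 0-10 scale
es = effect_size(4.0, 2.0, 50, 5.0, 2.0, 50)  # -0.5
```

Dividing by the pooled SD puts outcomes measured on different pain scales onto a common, unitless scale, which is what allows the trials in this review (and the chondroitin review below) to be combined in one meta-analysis.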
Abstract:
BACKGROUND: Previous meta-analyses described moderate to large benefits of chondroitin in patients with osteoarthritis. However, recent large-scale trials did not find evidence of an effect. PURPOSE: To determine the effects of chondroitin on pain in patients with osteoarthritis. DATA SOURCES: The authors searched the Cochrane Central Register of Controlled Trials (1970 to 2006), MEDLINE (1966 to 2006), EMBASE (1980 to 2006), CINAHL (1970 to 2006), and conference proceedings; checked reference lists; and contacted authors. The last update of searches was performed on 30 November 2006. STUDY SELECTION: Studies were included if they were randomized or quasi-randomized, controlled trials that compared chondroitin with placebo or with no treatment in patients with osteoarthritis of the knee or hip. There were no language restrictions. DATA EXTRACTION: The authors extracted data in duplicate. Effect sizes were calculated from the differences in means of pain-related outcomes between treatment and control groups at the end of the trial, divided by the pooled SD. Trials were combined by using random-effects meta-analysis. DATA SYNTHESIS: 20 trials (3846 patients) contributed to the meta-analysis, which revealed a high degree of heterogeneity among the trials (I² = 92%). Small trials, trials with unclear concealment of allocation, and trials that were not analyzed according to the intention-to-treat principle showed larger effects in favor of chondroitin than did the remaining trials. When the authors restricted the analysis to the 3 trials with large sample sizes and an intention-to-treat analysis, 40% of patients were included. This resulted in an effect size of -0.03 (95% CI, -0.13 to 0.07; I² = 0%) and corresponded to a difference of 0.6 mm on a 10-cm visual analogue scale. A meta-analysis of 12 trials showed a pooled relative risk of 0.99 (CI, 0.76 to 1.31) for any adverse event.
LIMITATIONS: For 9 trials, the authors had to use approximations to calculate effect sizes. Trial quality was generally low, heterogeneity among the trials made initial interpretation of results difficult, and exploring sources of heterogeneity in meta-regression and stratified analyses may be unreliable. CONCLUSIONS: Large-scale, methodologically sound trials indicate that the symptomatic benefit of chondroitin is minimal or nonexistent. Use of chondroitin in routine clinical practice should therefore be discouraged.
Abstract:
BACKGROUND: Abstracts of presentations at scientific meetings are usually available only in conference proceedings. If subsequent full publication of abstract results is based on the magnitude or direction of study results, publication bias may result. Publication bias, in turn, creates problems for those conducting systematic reviews or relying on the published literature for evidence. OBJECTIVES: To determine the rate at which abstract results are subsequently published in full, and the time between meeting presentation and full publication. To assess the association between study characteristics and full publication. SEARCH STRATEGY: We searched MEDLINE, EMBASE, The Cochrane Library, Science Citation Index, reference lists, and author files. Date of most recent search: June 2003. SELECTION CRITERIA: We included all reports that examined the subsequent full publication rate of biomedical results initially presented as abstracts or in summary form. Follow-up of abstracts had to be at least two years. DATA COLLECTION AND ANALYSIS: Two reviewers extracted data. We calculated the weighted mean full publication rate and time to full publication. Dichotomous variables were analyzed using relative risk and random effects models. We assessed time to publication using Kaplan-Meier survival analyses. MAIN RESULTS: Combining data from 79 reports (29,729 abstracts) resulted in a weighted mean full publication rate of 44.5% (95% confidence interval (CI) 43.9 to 45.1). 
Survival analyses resulted in an estimated publication rate at 9 years of 52.6% for all studies, 63.1% for randomized or controlled clinical trials, and 49.3% for other types of study designs. 'Positive' results defined as any 'significant' result showed an association with full publication (RR = 1.30; CI 1.14 to 1.47), as did 'positive' results defined as a result favoring the experimental treatment (RR = 1.17; CI 1.02 to 1.35), and 'positive' results emanating from randomized or controlled clinical trials (RR = 1.18, CI 1.07 to 1.30). Other factors associated with full publication include oral presentation (RR = 1.28; CI 1.09 to 1.49); acceptance for meeting presentation (RR = 1.78; CI 1.50 to 2.12); randomized trial study design (RR = 1.24; CI 1.14 to 1.36); and basic research (RR = 0.79; CI 0.70 to 0.89). Higher quality of abstracts describing randomized or controlled clinical trials was also associated with full publication (RR = 1.30, CI 1.00 to 1.71). AUTHORS' CONCLUSIONS: Only 63% of results from abstracts describing randomized or controlled clinical trials are published in full. 'Positive' results were more frequently published than non-'positive' results.