889 results for Meta-analysis, Randomized controlled trials
Abstract:
OBJECTIVE To verify whether the type of donor is a risk factor for infection in kidney transplant recipients. METHODS Systematic review of the literature with meta-analysis, with searches conducted in the MEDLINE, LILACS, Embase, Cochrane, Web of Science, SciELO and CINAHL databases. RESULTS We screened 198 studies and included four observational studies describing infections among patients by type of donor. The meta-analysis showed that in patients undergoing deceased-donor transplantation the risk of infection was 2.65 times higher than in those who received an organ from a living donor. CONCLUSION The study showed that recipients of deceased-donor kidneys are at increased risk of developing infections, highlighting the need to establish and enforce protocols ranging from proper management of ischemic time to the prevention and control of infection in this population.
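A minimal sketch of the pooling step behind a summary estimate such as "2.65 times higher", assuming a fixed-effect inverse-variance meta-analysis of log risk ratios; the 2x2 counts below are hypothetical placeholders, not the four studies included in the review.

```python
import math

# Hypothetical per-study counts: (events_deceased, n_deceased, events_living, n_living)
studies = [(30, 100, 12, 100), (45, 150, 20, 160), (22, 80, 10, 90), (18, 60, 8, 70)]

weights, weighted_logs = [], []
for e1, n1, e0, n0 in studies:
    log_rr = math.log((e1 / n1) / (e0 / n0))   # log risk ratio for this study
    var = 1 / e1 - 1 / n1 + 1 / e0 - 1 / n0    # approximate variance of the log RR
    weights.append(1 / var)
    weighted_logs.append(log_rr / var)

pooled = sum(weighted_logs) / sum(weights)     # inverse-variance weighted mean
se = math.sqrt(1 / sum(weights))
print(f"pooled RR = {math.exp(pooled):.2f}, "
      f"95% CI {math.exp(pooled - 1.96 * se):.2f} to {math.exp(pooled + 1.96 * se):.2f}")
```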
Abstract:
BACKGROUND: Maintaining therapeutic concentrations of drugs with a narrow therapeutic window is a complex task. Several computer systems have been designed to help doctors determine the optimum drug dosage. Significant improvements in health care could be achieved if computer advice improved health outcomes and could be implemented in routine practice in a cost-effective fashion. This is an updated version of an earlier Cochrane systematic review, by Walton et al, published in 2001. OBJECTIVES: To assess whether computerised advice on drug dosage has beneficial effects on the process or outcome of health care. SEARCH STRATEGY: We searched the Cochrane Effective Practice and Organisation of Care Group specialised register (June 1996 to December 2006), MEDLINE (1966 to December 2006) and EMBASE (1980 to December 2006), and hand-searched the journal Therapeutic Drug Monitoring (1979 to March 2007) and the Journal of the American Medical Informatics Association (1996 to March 2007), as well as reference lists from primary articles. SELECTION CRITERIA: Randomised controlled trials, controlled trials, controlled before-and-after studies and interrupted time series analyses of computerised advice on drug dosage were included. The participants were health professionals responsible for patient care. The outcomes were: any objectively measured change in the behaviour of the health care provider (such as changes in the dose of drug used); any change in the health of patients resulting from computerised advice (such as adverse reactions to drugs). DATA COLLECTION AND ANALYSIS: Two reviewers independently extracted data and assessed study quality. MAIN RESULTS: Twenty-six comparisons (23 articles) were included (compared with fifteen comparisons in the original review), covering a wide range of drugs in inpatient and outpatient settings. Interventions usually targeted doctors, although some studies attempted to influence prescriptions by pharmacists and nurses. Although all studies used reliable outcome measures, their quality was generally low. Computerised advice for drug dosage gave significant benefits by: (1) increasing the initial dose (standardised mean difference 1.12, 95% CI 0.33 to 1.92); (2) increasing serum concentrations (standardised mean difference 1.12, 95% CI 0.43 to 1.82); (3) reducing the time to therapeutic stabilisation (standardised mean difference -0.55, 95% CI -1.03 to -0.08); (4) reducing the risk of toxic drug levels (rate ratio 0.45, 95% CI 0.30 to 0.70); and (5) reducing the length of hospital stay (standardised mean difference -0.35, 95% CI -0.52 to -0.17). AUTHORS' CONCLUSIONS: This review suggests that computerised advice for drug dosage has some benefits: it increased the initial dose of the drug, increased serum drug concentrations and led to more rapid therapeutic control. It also reduced the risk of toxic drug levels and the length of time spent in hospital. However, it had no effect on adverse reactions. In addition, there was no evidence that particular decision-support technical features (such as integration into a computerised physician order entry system) or aspects of the organisation of care (such as the setting) could optimise the effect of computerised advice.
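Several of the effect sizes above are standardised mean differences. The sketch below shows the usual pooled-SD computation for a single comparison (Cohen's d); the group summaries are invented for illustration and do not come from the review.

```python
import math

def standardised_mean_difference(mean_tx, sd_tx, n_tx, mean_ctl, sd_ctl, n_ctl):
    """Cohen's d with a pooled standard deviation."""
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx ** 2 + (n_ctl - 1) * sd_ctl ** 2)
                          / (n_tx + n_ctl - 2))
    return (mean_tx - mean_ctl) / pooled_sd

# Invented group summaries, e.g. initial dose in arbitrary units for advice vs. usual care
print(round(standardised_mean_difference(12.0, 3.0, 40, 9.5, 3.5, 42), 2))
```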
Abstract:
OBJECTIVE To evaluate the evidence on the prevalence of hypertension among indigenous populations in Brazil through a systematic review and meta-analysis. METHODS A search was performed by two reviewers, with no restriction on date or language, in PubMed, LILACS, SciELO, the Virtual Health Library and the Capes Journal Portal. A meta-regression model was also fitted, in which the last collection year of each study was used as a moderating variable. RESULTS 23 articles were included in the review. Ten studies found no hypertension in the indigenous populations examined; in the remainder, prevalence was rising and varied, reaching levels of up to 29.7%. The pooled prevalence of hypertension among indigenous people over the period 1970 to 2014 was 6.2% (95% CI, 3.1% - 10.3%). In the regression, the odds ratio was 1.12 (95% CI, 1.07 - 1.18; p < 0.0001), indicating a 12% increase per year in the odds of an indigenous person presenting with hypertension. CONCLUSION Prevalence has risen steadily despite the absence of hypertension in about half of the studies, probably due to changes in cultural, economic and lifestyle habits resulting from indigenous interaction with non-indigenous society.
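Because the meta-regression reports a per-year odds ratio, the estimate compounds multiplicatively over time. A small worked illustration, assuming the usual log-linear treatment of the year term:

```python
# Compounding a per-year odds ratio of 1.12 over longer spans
per_year_or = 1.12
for years in (1, 5, 10):
    print(years, "year(s):", round(per_year_or ** years, 2))   # 1.12, ~1.76, ~3.11
```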
Abstract:
The mathematical representation of Brunswik's lens model has been used extensively to study human judgment and provides a unique opportunity to conduct a meta-analysis of studies that covers roughly five decades. Specifically, we analyze statistics of the lens model equation (Tucker, 1964) associated with 259 different task environments obtained from 78 papers. In short, we find on average fairly high levels of judgmental achievement and note that people can achieve similar levels of cognitive performance in both noisy and predictable environments. Although overall performance varies little between laboratory and field studies, both differ in terms of components of performance and types of environments (numbers of cues and redundancy). An analysis of learning studies reveals that the most effective form of feedback is information about the task. We also analyze empirically when bootstrapping is more likely to occur. We conclude by indicating shortcomings of the kinds of studies conducted to date, limitations in the lens model methodology, and possibilities for future research.
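For context, the lens model equation (Tucker, 1964) referenced above is conventionally written as follows (standard textbook form, not a formula reproduced from this particular paper):

$$ r_a = G \, R_s \, R_e + C \, \sqrt{1 - R_s^{2}} \, \sqrt{1 - R_e^{2}} $$

where $r_a$ is judgmental achievement (the correlation between judgments and the criterion), $R_s$ is the consistency of the judge's responses with a linear model of the cues, $R_e$ is the predictability of the environment, $G$ is the correlation between the predictions of the two linear models, and $C$ is the correlation between their residuals.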
Abstract:
Objectives: The aim of this study was to evaluate the efficacy of brief motivational intervention (BMI) in reducing alcohol use and related problems among binge drinkers randomly selected from a census of 20-year-old French-speaking Swiss men, and to test the hypothesis that BMI helps maintain low-risk drinking among non-bingers. Methods: Randomized controlled trial comparing the impact of BMI on weekly alcohol use, frequency of binge drinking and occurrence of alcohol-related problems. Setting: Army recruitment center. Participants: A random sample of 622 men were asked to participate; 178 either refused, missed their appointment, or had to follow military assessment procedures instead, resulting in 418 men randomized to the BMI or control condition, with 88.7% completing the 6-month follow-up assessment. Intervention: A single face-to-face BMI session exploring alcohol use and related problems in order to stimulate a behaviour-change perspective in a non-judgmental, empathic manner based on the principles of motivational interviewing (MI). Main outcome measures: Weekly alcohol use, binge drinking frequency and the occurrence of 12 alcohol-related consequences. Results: Among binge drinkers, we observed a 20% change in drinking induced by BMI, with a reduction in weekly drinking of 1.5 drinks in the BMI group compared to an increase of 0.8 drinks per week in the control group (incidence rate ratio 0.8, 95% confidence interval 0.66 to 0.98, p = 0.03). BMI did not influence the frequency of binge drinking or the occurrence of the 12 possible alcohol-related consequences. However, BMI reduced alcohol use among participants who, after drinking over the past 12 months, had experienced specific alcohol-related consequences, i.e., hangover (-20%), missing a class (-53%), getting behind at school (-54%), arguing with friends (-38%), engaging in unplanned sex (-45%) or not using protection when having sex (-64%). BMI did not reduce weekly drinking in those who experienced the six other problems screened. Among non-bingers, BMI did not help maintain low-risk drinking. Conclusions: At army conscription, BMI reduced alcohol use in binge drinkers, particularly in those who had recently experienced alcohol-related adverse consequences. No preventive effect of BMI was observed among non-bingers. BMI is an interesting preventive option for young binge drinkers, particularly in countries with mandatory army recruitment.
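The incidence rate ratio reported above is typically obtained from a count-data regression. A hedged sketch of that kind of analysis, using synthetic weekly drink counts rather than the trial data and a plain Poisson model (the trial may well have used a different specification):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)                                # 1 = BMI arm, 0 = control
drinks = rng.poisson(lam=np.where(group == 1, 8.0, 10.0))    # synthetic weekly drink counts

X = sm.add_constant(group.astype(float))
fit = sm.GLM(drinks, X, family=sm.families.Poisson()).fit()
irr = np.exp(fit.params[1])                                  # exponentiated group coefficient
ci = np.exp(fit.conf_int()[1])                               # 95% CI on the IRR scale
print(f"IRR = {irr:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```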
Abstract:
BACKGROUND: Smokers have a lower body weight than non-smokers, and smoking cessation is associated with weight gain in most cases. A hormonal mechanism might be implicated in smoking-related weight variations, and leptin may play a role. We conducted secondary analyses of an RCT, using a hypothesis-free exploratory approach, to study the dynamics of leptin following smoking cessation. METHODS: We measured serum leptin levels among 271 sedentary smokers willing to quit who participated in a randomized controlled trial assessing a 9-week moderate-intensity physical activity intervention as an aid to smoking cessation. We adjusted leptin for body fat levels and performed linear regressions to test for an association between leptin levels and study group over time. RESULTS: One year after smoking cessation, the mean change in serum leptin was +3.23 mg/l (SD 4.89) in the control group and +1.25 mg/l (SD 4.86) in the intervention group (p for the difference < 0.05). When adjusted for body fat levels, leptin was higher in the control group than in the intervention group (p for the difference < 0.01). The mean weight gain was +2.91 kg (SD 6.66) in the intervention group and +3.33 kg (SD 4.47) in the control group (difference not significant). CONCLUSIONS: Serum leptin levels increased significantly after smoking cessation, despite substantial weight gain. The dynamics of leptin may differ in chronic tobacco users who quit smoking, and physical activity may influence leptin dynamics in this situation. CLINICAL TRIAL REGISTRATION NUMBER: NCT00521391.
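A minimal sketch of the adjusted analysis described in the methods (leptin change regressed on study group with body fat as a covariate); all values are simulated and the variable names are illustrative, not those of the trial dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 271
group = rng.integers(0, 2, n)                  # 1 = physical-activity arm, 0 = control
body_fat = rng.normal(30, 6, n)                # simulated percent body fat
leptin_change = 3.2 - 2.0 * group + 0.05 * body_fat + rng.normal(0, 4.9, n)

X = sm.add_constant(np.column_stack([group, body_fat]))
fit = sm.OLS(leptin_change, X).fit()
print(fit.params)                              # [intercept, group effect, body-fat slope]
```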
Abstract:
Therapeutic goal of vitamin D: optimal serum level and dose requirements. Results of randomized controlled trials and meta-analyses investigating the effect of vitamin D supplementation on falls and fractures are inconsistent. The optimal serum 25(OH) vitamin D level for musculoskeletal and global health is ≥ 30 ng/ml (75 nmol/l) according to some experts and 20 ng/ml (50 nmol/l) according to others. A daily dose of vitamin D is better than high intermittent doses for reaching this goal. High-dose once-yearly vitamin D therapy may increase the incidence of fractures and falls, and a high serum level of vitamin D is probably harmful for the musculoskeletal system and for health at large. The optimal benefits for musculoskeletal health are obtained with an 800 IU daily dose and a serum level close to 30 ng/ml (75 nmol/l).
Abstract:
BACKGROUND: The main objective of our study was to assess the impact of a board game on smoking status and smoking-related variables in current smokers. To this end, we conducted a randomized controlled trial comparing the game group with a psychoeducation group and a waiting-list control group. METHODS: The following measures were administered at inclusion, after a 2-week follow-up and after a 3-month follow-up: the Attitudes Towards Smoking Scale (ATS-18), the Smoking Self-Efficacy Questionnaire (SEQ-12), the Attitudes Towards Nicotine Replacement Therapy scale (ANRT-12), number of cigarettes smoked per day, stage of change, quit attempts, and smoking status. Participants were also assessed for concurrent psychiatric disorders and for the severity of nicotine dependence with the Fagerström Test for Nicotine Dependence (FTND). RESULTS: A time × group effect was observed for subscales of the ANRT-12, ATS-18 and SEQ-12, as well as for the number of cigarettes smoked per day. At the three-month follow-up, participants in the Pick-Klop game group were less likely to remain smokers than those allocated to the waiting-list group. Outcomes at 3 months were not predicted by gender, age, FTND score, stage of change, or psychiatric disorders at inclusion. CONCLUSIONS: The board game appears to be a good option for smokers: it led to improvements in variables known to predict quitting and increased smoking-cessation rates at the 3-month follow-up. The game is also an interesting alternative for smokers in the precontemplation stage.
Abstract:
BACKGROUND: Psychological stress negatively influences food intake and food choices, thereby contributing to the development of childhood obesity. Physical activity can also moderate eating behavior and influence calorie intake. However, it is unknown whether acute physical activity influences food intake and overall energy balance after acute stress exposure in children. We therefore investigated the impact of acute physical activity on overall energy balance (food intake minus energy expenditure), food intake and food choice in the setting of acute social stress in normal-weight (NW) and overweight/obese (OW/OB) children, as well as the impact of psychological risk factors. METHOD: After written consent was obtained from their parents, 26 NW (BMI < 90th percentile) and 24 OW (n = 5)/OB (n = 19; BMI ≥ 90th percentile) children aged 7 to 11 years were randomly allocated, using computer-generated numbers (1:1, after stratification for weight status), to 30 minutes of acute moderate physical activity or sedentary activity. Afterwards, all children were exposed to an acute social stressor. Children and their parents completed self-report questionnaires. At the end of the stressor, children were allowed to eat freely from a range of 12 different foods (6 sweet/6 salty, each of low or high caloric density). Energy balance, food intake/choice and obesity-related psychological risk factors were assessed. RESULTS: After acute moderate physical activity, but not after sedentary activity, NW children showed a lower overall energy balance (p = 0.019) and chose fewer low-density salty foods (p < 0.001) than OW/OB children. Independent of their allocation, OW/OB children ate more high-density salty foods (104 kcal (34 to 173), p = 0.004) following stress. They scored higher on impulsive behavior (p = 0.005), restrained eating (p < 0.001) and parental corporal punishment (p = 0.03), but these psychological factors were not related to stress-induced food intake/choice. Positive parenting tended to be related to a lower intake of sweet high-density food (-132 kcal, -277 to 2, p = 0.054). CONCLUSIONS: In the setting of stress, acute moderate physical activity can modulate energy balance in children, a benefit that is especially pronounced in the OW/OB. Positive parenting may act as a protective factor preventing stress-induced eating of comfort food. TRIAL REGISTRATION: clinicaltrials.gov NCT01693926. The study was a pilot study of a project funded by the Swiss National Science Foundation (CRSII3_147673).
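The allocation procedure described above (computer-generated numbers, 1:1, stratified by weight status) can be sketched as follows; the block size, seed and arm labels are assumptions for illustration, not details taken from the trial.

```python
import random

def stratified_allocation(ids_by_stratum, block_size=4, seed=42):
    """1:1 permuted-block randomization within each stratum."""
    rng = random.Random(seed)
    allocation = {}
    for stratum, ids in ids_by_stratum.items():
        arms = []
        while len(arms) < len(ids):
            block = ["exercise", "sedentary"] * (block_size // 2)
            rng.shuffle(block)                 # randomize order within each block
            arms.extend(block)
        allocation.update(dict(zip(ids, arms)))
    return allocation

print(stratified_allocation({"NW": [f"NW{i}" for i in range(26)],
                             "OW/OB": [f"OB{i}" for i in range(24)]}))
```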
Abstract:
BACKGROUND: Iron deficiency without anemia (IDWA) is related to adverse symptoms that can be relieved by supplementation. Since blood donation can induce such an iron deficiency, we investigated the clinical impact of iron treatment after blood donation. METHODS: One week after donation, we randomly assigned 154 female donors with IDWA aged under 50 years to a 4-week oral treatment of ferrous sulfate vs. placebo. The main outcome was the change in the level of fatigue before and after the intervention. Aerobic capacity, mood disorder, quality of life, compliance and adverse events were also evaluated. Biological markers were hemoglobin and ferritin. RESULTS: Treatment effects from baseline to 4 weeks were 5.2 g/L for hemoglobin and 14.8 ng/mL for ferritin (both p < 0.01). No significant clinical effect was observed for fatigue (-0.15 points, 95% confidence interval -0.9 to 0.6, p = 0.697) or for the other outcomes. Compliance and interruption for side effects were similar in both groups. Additionally, blood donation did not induce overt symptoms of fatigue despite the significant biological changes it produced. CONCLUSIONS: These data indicate that donors with IDWA after a blood donation would not clinically benefit from iron supplementation. Trial registration: NCT00689793.
Abstract:
BACKGROUND: Prognosis prediction for resected primary colon cancer is based on the Tumor Node Metastasis (TNM) staging system. We investigated whether four well-documented gene expression risk scores can improve patient stratification. METHODS: Microarray-based versions of the risk scores were applied to a large independent cohort of 688 stage II/III tumors from the PETACC-3 trial. Prognostic value for relapse-free survival (RFS), survival after relapse (SAR), and overall survival (OS) was assessed by regression analysis. Improvement over a reference prognostic model was assessed with the area under the curve (AUC) of receiver operating characteristic (ROC) curves. All statistical tests were two-sided, except the test for AUC increase. RESULTS: All four risk scores (RSs) showed a statistically significant association (single test, P < .0167) with OS or RFS in univariate models, but with HRs below 1.38 per interquartile range. Three scores were predictors of shorter RFS, one of shorter SAR. Each RS could only marginally improve an RFS or OS model containing the known factors T-stage, N-stage, and microsatellite instability (MSI) status (AUC gains < 0.025 units). The pairwise interscore discordance was never high (maximal Spearman correlation = 0.563). A combined score showed a trend toward higher prognostic value and a larger AUC increase for OS (HR = 1.74, 95% confidence interval [CI] = 1.44 to 2.10, P < .001, AUC from 0.6918 to 0.7321) and RFS (HR = 1.56, 95% CI = 1.33 to 1.84, P < .001, AUC from 0.6723 to 0.6945) than any single score. CONCLUSIONS: The four tested gene expression-based risk scores provide prognostic information but contribute only marginally to improving models based on established risk factors. A combination of the risk scores might provide more robust information. Predictors of RFS and SAR might need to be different.
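A hedged sketch of the kind of AUC comparison described in the methods: does adding a gene-expression risk score to the established factors (T-stage, N-stage, MSI) increase the ROC AUC? For simplicity the survival endpoints are replaced here by a synthetic binary relapse label, so this illustrates the comparison rather than reproducing the published analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 688
t_stage = rng.integers(2, 5, n)                # synthetic T2-T4
n_stage = rng.integers(0, 3, n)                # synthetic N0-N2
msi = rng.integers(0, 2, n)                    # synthetic MSI status
risk_score = rng.normal(0, 1, n)               # synthetic gene-expression score
logit = -2 + 0.5 * t_stage + 0.6 * n_stage - 0.8 * msi + 0.3 * risk_score
relapse = rng.random(n) < 1 / (1 + np.exp(-logit))

base = np.column_stack([t_stage, n_stage, msi])
full = np.column_stack([base, risk_score])
auc_base = roc_auc_score(relapse, LogisticRegression(max_iter=1000)
                         .fit(base, relapse).predict_proba(base)[:, 1])
auc_full = roc_auc_score(relapse, LogisticRegression(max_iter=1000)
                         .fit(full, relapse).predict_proba(full)[:, 1])
print(round(auc_base, 3), round(auc_full, 3))  # expect only a marginal gain
```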