995 results for "conclusions bias"


Relevance: 20.00%

Abstract:

OBJECTIVE: The link between CNS penetration of antiretrovirals and AIDS-defining neurologic disorders remains largely unknown. METHODS: HIV-infected, antiretroviral therapy-naive individuals in the HIV-CAUSAL Collaboration who started an antiretroviral regimen were classified according to the CNS Penetration Effectiveness (CPE) score of their initial regimen into low (<8), medium (8-9), or high (>9) CPE score. We estimated "intention-to-treat" hazard ratios of 4 neuroAIDS conditions for baseline regimens with high and medium CPE scores compared with regimens with a low score. We used inverse probability weighting to adjust for potential bias due to infrequent follow-up. RESULTS: A total of 61,938 individuals were followed for a median (interquartile range) of 37 (18, 70) months. During follow-up, there were 235 cases of HIV dementia, 169 cases of toxoplasmosis, 128 cases of cryptococcal meningitis, and 141 cases of progressive multifocal leukoencephalopathy. The hazard ratio (95% confidence interval) for initiating a combined antiretroviral therapy regimen with a high vs low CPE score was 1.74 (1.15, 2.65) for HIV dementia, 0.90 (0.50, 1.62) for toxoplasmosis, 1.13 (0.61, 2.11) for cryptococcal meningitis, and 1.32 (0.71, 2.47) for progressive multifocal leukoencephalopathy. The respective hazard ratios (95% confidence intervals) for a medium vs low CPE score were 1.01 (0.73, 1.39), 0.80 (0.56, 1.15), 1.08 (0.73, 1.62), and 1.08 (0.73, 1.58). CONCLUSIONS: We estimated that initiation of a combined antiretroviral therapy regimen with a high CPE score increases the risk of HIV dementia, but not of other neuroAIDS conditions.
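
To make the estimation strategy concrete, the sketch below fits a weighted Cox model of the kind described: hazard ratios for one neuroAIDS outcome by CPE category, with inverse probability weights and robust standard errors. The input file and column names are hypothetical, not from the HIV-CAUSAL Collaboration.

```python
# Minimal sketch, assuming a prepared analysis table with hypothetical
# columns: follow-up time, event indicator, CPE-category dummies (low
# score as the reference), and an estimated inverse-probability weight.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical analysis file

cph = CoxPHFitter()
cph.fit(
    df[["months_followed", "hiv_dementia", "cpe_high", "cpe_medium", "ipw"]],
    duration_col="months_followed",
    event_col="hiv_dementia",
    weights_col="ipw",   # weights adjust for infrequent follow-up
    robust=True,         # sandwich errors, since the weights are estimated
)
# exp(coef) is the hazard ratio for each CPE category vs the low-score reference
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```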

Relevance: 20.00%

Abstract:

Background: Tests for recent infections (TRIs) are important for HIV surveillance. We have shown that a patient's antibody pattern in a confirmatory line immunoassay (Inno-Lia) also yields information on time since infection. We have published algorithms which, with a certain sensitivity and specificity, distinguish between incident (≤12 months) and older infection. In order to use these algorithms like other TRIs, i.e., based on their windows, we now determined their window periods. Methods: Using 25 algorithms, we classified the Inno-Lia results of 527 treatment-naïve patients with HIV-1 infection of ≤12 months' duration as incident or older. The time after which all infections were ruled older, i.e., the algorithm's window, was determined by linear regression of the proportion ruled incident as a function of time since infection. Window-based incident infection rates (IIR) were determined utilizing the relationship 'Prevalence = Incidence x Duration' in four annual cohorts of HIV-1 notifications. Results were compared to performance-based IIR, also derived from Inno-Lia results but utilizing the relationship 'incident = true incident + false incident', and to the IIR derived from the BED incidence assay. Results: Window periods varied between 45.8 and 130.1 days and correlated well with the algorithms' diagnostic sensitivity (R² = 0.962; P<0.0001). Among the 25 algorithms, the mean window-based IIR among the 748 notifications of 2005/06 was 0.457, compared to 0.453 obtained for performance-based IIR with a model not correcting for selection bias. Evaluation of BED results using a window of 153 days yielded an IIR of 0.669. Window-based and performance-based IIR increased by 22.4% and 30.6%, respectively, in 2008, while 2009 and 2010 showed a return to baseline for both methods. Conclusions: IIR estimations by window- and performance-based evaluations of Inno-Lia algorithm results were similar and can be used together to assess IIR changes between annual HIV notification cohorts.
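
A worked example makes the window-based estimator concrete: the 'Prevalence = Incidence x Duration' relationship rearranges to Incidence = Prevalence / Duration. The counts below are hypothetical and serve only to illustrate the arithmetic.

```python
# Minimal sketch: annualize the fraction of notifications that an
# algorithm rules incident, using its window period.
window_days = 130.1       # window period of one hypothetical algorithm
n_notifications = 748     # size of one annual notification cohort
n_ruled_incident = 93     # hypothetical count classified as incident

prevalence = n_ruled_incident / n_notifications  # prevalence of "recent" results
duration_years = window_days / 365.25            # window expressed in years
iir = prevalence / duration_years                # Incidence = Prevalence / Duration
print(f"window-based IIR = {iir:.3f} per year")  # ~0.349 with these inputs
```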

Relevance: 20.00%

Abstract:

Tree-rings offer one of the few possibilities to empirically quantify and reconstruct forest growth dynamics over years to millennia. As the scientific community employing tree-ring parameters grows, recent research has suggested that commonly applied sampling designs (i.e., how and which trees are selected for dendrochronological sampling) may introduce considerable biases into quantifications of forest responses to environmental change. To date, a systematic assessment of the consequences of sampling design for dendroecological and dendroclimatological conclusions has not yet been performed. Here, we investigate potential biases by sampling a large population of trees and replicating diverse sampling designs. This is achieved by retroactively subsetting the population and specifically testing for biases emerging in climate reconstruction, growth response to climate variability, long-term growth trends, and quantification of forest productivity. We find that commonly applied sampling designs can impart systematic biases of varying magnitude on any type of tree-ring-based investigation, independent of the total number of samples considered. Quantifications of forest growth and productivity are particularly susceptible to biases, whereas growth responses to short-term climate variability are less affected by the choice of sampling design. The world's most frequently applied sampling design, focusing on dominant trees only, can bias absolute growth rates by up to 459% and trends by in excess of 200%. Our findings challenge the paradigm that a subset of samples can be considered representative of the entire population. The only two sampling strategies meeting the requirements for all types of investigations are (i) sampling of all individuals within a fixed area; and (ii) fully randomized selection of trees. This result advocates the consistent implementation of a widely applicable sampling design to simultaneously reduce uncertainties in tree-ring-based quantifications of forest growth and increase the comparability of datasets beyond individual studies, investigators, laboratories, and geographical boundaries.
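
A toy simulation conveys why a dominant-trees-only design inflates growth estimates relative to randomized selection. The population below is synthetic, and "dominant" is crudely proxied by the largest growth values, purely for illustration; none of the numbers come from the study.

```python
# Minimal sketch: compare the population mean growth rate with estimates
# from a dominant-only subset and a randomized subset of equal size.
import numpy as np

rng = np.random.default_rng(42)
growth = rng.lognormal(mean=0.0, sigma=0.6, size=10_000)  # synthetic growth rates

true_mean = growth.mean()
dominant = np.sort(growth)[-20:]                     # 20 largest values only
randomized = rng.choice(growth, size=20, replace=False)

print(f"population mean : {true_mean:.2f}")
print(f"dominant-only   : {dominant.mean():.2f} "
      f"({100 * (dominant.mean() / true_mean - 1):+.0f}% bias)")
print(f"randomized      : {randomized.mean():.2f}")
```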

Relevance: 20.00%

Abstract:

Social norms pervade almost every aspect of social interaction. If they are violated, not only legal institutions but also other members of society punish, i.e., inflict costs on, the wrongdoer. Sanctioning occurs even when the punishers themselves were not harmed directly and even when it is costly for them. There is evidence for intergroup bias in this third-party punishment: third parties who share group membership with victims punish outgroup perpetrators more harshly than ingroup perpetrators. However, it is unknown whether a discriminatory treatment of outgroup perpetrators (outgroup discrimination) or a preferential treatment of ingroup perpetrators (ingroup favoritism) drives this bias. To answer this question, the punishment of outgroup and ingroup perpetrators must be compared with a baseline, i.e., unaffiliated perpetrators. Using a costly punishment game, we found stronger punishment of outgroup versus unaffiliated perpetrators and weaker punishment of ingroup versus unaffiliated perpetrators. This demonstrates that both ingroup favoritism and outgroup discrimination drive intergroup bias in third-party punishment of perpetrators who belong to distinct social groups.

Relevance: 20.00%

Abstract:

Every day we cope with potentially disadvantageous upcoming events; it therefore makes sense to be prepared for the worst case. Such a 'pessimistic' bias is reflected in brain activation during emotion processing. Healthy individuals underwent functional neuroimaging while viewing emotional stimuli whose emotional valence had earlier been cued either ambiguously or unambiguously. Presentation of ambiguously announced pleasant pictures, compared with unambiguously announced pleasant pictures, resulted in increased activity in the ventrolateral prefrontal, premotor, and temporal cortex, and in the caudate nucleus. This was not the case for the respective negative conditions. This indicates that pleasant stimuli after ambiguous cueing provided 'unexpected' emotional input, resulting in the adaptation of brain activity. It strengthens the hypothesis of a 'pessimistic' bias of brain activation toward ambiguous emotional events.

Relevance: 20.00%

Abstract:

BACKGROUND To summarize the available evidence on the effectiveness of psychological interventions for patients with post-traumatic stress disorder (PTSD). METHOD We searched bibliographic databases and reference lists of relevant systematic reviews and meta-analyses for randomized controlled trials that compared specific psychological interventions for adults with PTSD symptoms either head-to-head, against control interventions using non-specific intervention components, or against wait-list control. Two investigators independently extracted the data and assessed trial characteristics. RESULTS The analyses included 4190 patients in 66 trials. An initial network meta-analysis showed large effect sizes (ESs) for all specific psychological interventions (ESs between -1.10 and -1.37) and moderate effects of psychological interventions used to control for non-specific intervention effects (ESs -0.58 and -0.62). ES differences between the various types of specific psychological interventions were absent to small (between 0.00 and 0.27). Considerable between-trial heterogeneity occurred (τ² = 0.30). Stratified analyses revealed that trials that adhered to DSM-III/IV criteria for PTSD were associated with larger ESs. However, considerable heterogeneity remained. Heterogeneity was reduced in trials with adequate concealment of allocation and in large trials. We found evidence for small-study bias. CONCLUSIONS Our findings show that patients with a formal diagnosis of PTSD and those with subclinical PTSD symptoms benefit from different psychological interventions. We did not identify any intervention that was consistently superior to other specific psychological interventions. However, the robustness of evidence varies considerably between different psychological interventions for PTSD, with the most robust evidence for cognitive behavioral and exposure therapies.
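
For orientation, the between-trial variance τ² quoted above is commonly estimated with a DerSimonian-Laird random-effects model. The following is a minimal, self-contained sketch with hypothetical effect sizes and variances, not data from the review.

```python
# Minimal sketch, assuming hypothetical study-level effect sizes (ESs)
# and their variances.
import numpy as np

def dersimonian_laird(effects, variances):
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                              # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-trial variance
    w_star = 1.0 / (variances + tau2)                # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

pooled, se, tau2 = dersimonian_laird([-1.4, -0.7, -1.2, -0.5],
                                     [0.03, 0.05, 0.04, 0.06])
print(f"pooled ES = {pooled:.2f} (95% CI {pooled - 1.96*se:.2f} to "
      f"{pooled + 1.96*se:.2f}), tau^2 = {tau2:.2f}")
```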

Relevance: 20.00%

Abstract:

BACKGROUND Osteoarthritis is the most common form of joint disease and the leading cause of pain and physical disability in older people. Opioids may be a viable treatment option if people have severe pain or if other analgesics are contraindicated. However, the evidence about their effectiveness and safety is contradictory. This is an update of a Cochrane review first published in 2009. OBJECTIVES To determine the effects on pain, function, safety, and addiction of oral or transdermal opioids compared with placebo or no intervention in people with knee or hip osteoarthritis. SEARCH METHODS We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and CINAHL (up to 28 July 2008, with an update performed on 15 August 2012), checked conference proceedings, reference lists, and contacted authors. SELECTION CRITERIA We included randomised or quasi-randomised controlled trials that compared oral or transdermal opioids with placebo or no treatment in people with knee or hip osteoarthritis. We excluded studies of tramadol. We applied no language restrictions. DATA COLLECTION AND ANALYSIS We extracted data in duplicate. We calculated standardised mean differences (SMDs) and 95% confidence intervals (CI) for pain and function, and risk ratios for safety outcomes. We combined trials using an inverse-variance random-effects meta-analysis. MAIN RESULTS We identified 12 additional trials and included 22 trials with 8275 participants in this update. Oral oxycodone was studied in 10 trials, transdermal buprenorphine and oral tapentadol in four, oral codeine in three, oral morphine and oral oxymorphone in two, and transdermal fentanyl and oral hydromorphone in one trial each. All trials were described as double-blind, but the risk of bias for other domains was unclear in several trials due to incomplete reporting. Opioids were more beneficial in pain reduction than control interventions (SMD -0.28, 95% CI -0.35 to -0.20), which corresponds to a difference in pain scores of 0.7 cm on a 10-cm visual analogue scale (VAS) between opioids and placebo. This corresponds to a difference in improvement of 12% (95% CI 9% to 15%) between opioids (41% mean improvement from baseline) and placebo (29% mean improvement from baseline), which translates into a number needed to treat for an additional beneficial outcome (NNTB) on pain of 10 (95% CI 8 to 14). Improvement of function was larger in opioid-treated participants compared with control groups (SMD -0.26, 95% CI -0.35 to -0.17), which corresponds to a difference in function scores of 0.6 units between opioids and placebo on a standardised Western Ontario and McMaster Universities Arthritis Index (WOMAC) disability scale ranging from 0 to 10. This corresponds to a difference in improvement of 11% (95% CI 7% to 14%) between opioids (32% mean improvement from baseline) and placebo (21% mean improvement from baseline), which translates into an NNTB on function of 11 (95% CI 7 to 14). We did not find substantial differences in effects according to type of opioid, analgesic potency, route of administration, daily dose, methodological quality of trials, or type of funding. Trials with treatment durations of four weeks or less showed greater pain relief than trials with longer treatment duration (P value for interaction = 0.001), and there was evidence for funnel plot asymmetry (P value = 0.054 for pain and P value = 0.011 for function).
Adverse events were more frequent in participants receiving opioids compared with control. The pooled risk ratio was 1.49 (95% CI 1.35 to 1.63) for any adverse event (9 trials; 22% of participants in opioid and 15% of participants in control treatment experienced side effects), 3.76 (95% CI 2.93 to 4.82) for drop-outs due to adverse events (19 trials; 6.4% of participants in opioid and 1.7% of participants in control treatment dropped out due to adverse events), and 3.35 (95% CI 0.83 to 13.56) for serious adverse events (2 trials; 1.3% of participants in opioid and 0.4% of participants in control treatment experienced serious adverse events). Withdrawal symptoms occurred more often in opioid compared with control treatment (odds ratio (OR) 2.76, 95% CI 2.02 to 3.77; 3 trials; 2.4% of participants in opioid and 0.9% of participants in control treatment experienced withdrawal symptoms). AUTHORS' CONCLUSIONS The small mean benefit of non-tramadol opioids is contrasted by significant increases in the risk of adverse events. For the pain outcome in particular, the observed effects were of questionable clinical relevance, since the 95% CI did not include the minimal clinically important difference of 0.37 SMDs, which corresponds to 0.9 cm on a 10-cm VAS.
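
The step from an SMD to an NNTB, as in the figures above, can be reproduced with Furukawa's method. In the sketch below, the SMD of -0.28 and the 29% control-group response rate are taken from the abstract; everything else is derived, and the result lands close to the reported NNTB of 10.

```python
# Minimal sketch of Furukawa's SMD-to-NNT conversion; negative SMDs
# favour opioids with the sign convention used in the abstract.
from scipy.stats import norm

smd = -0.28   # pooled effect on pain
cer = 0.29    # control event rate: 29% mean improvement from baseline

eer = norm.cdf(norm.ppf(cer) - smd)   # implied treated-group response rate
nntb = 1.0 / (eer - cer)
print(f"treated response rate = {eer:.2f}, NNTB = {nntb:.0f}")  # roughly 0.39 and 10
```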

Relevance: 20.00%

Abstract:

BACKGROUND Venous thromboembolism (VTE) often complicates the clinical course of cancer. The risk is further increased by chemotherapy, but the safety and efficacy of primary thromboprophylaxis in cancer patients treated with chemotherapy are uncertain. This is an update of a review first published in February 2012. OBJECTIVES To assess the efficacy and safety of primary thromboprophylaxis for VTE in ambulatory cancer patients receiving chemotherapy compared with placebo or no thromboprophylaxis. SEARCH METHODS For this update, the Cochrane Peripheral Vascular Diseases Group Trials Search Co-ordinator searched the Specialised Register (last searched May 2013), CENTRAL (2013, Issue 5), and clinical trials registries (up to June 2013). SELECTION CRITERIA Randomised controlled trials (RCTs) comparing any oral or parenteral anticoagulant or mechanical intervention to no intervention or placebo, or comparing two different anticoagulants. DATA COLLECTION AND ANALYSIS Data were extracted on methodological quality, patients, interventions, and outcomes, including symptomatic VTE and major bleeding as the primary effectiveness and safety outcomes, respectively. MAIN RESULTS We identified 12 additional RCTs (6323 patients) in the updated search, so that this update considered 21 trials with a total of 9861 patients, all evaluating pharmacological interventions and performed mainly in patients with advanced cancer. Overall, the risk of bias varied from low to high. One large trial of 3212 patients found a 64% reduction of symptomatic VTE (risk ratio (RR) 0.36, 95% confidence interval (CI) 0.22 to 0.60) with the ultra-low molecular weight heparin (uLMWH) semuloparin relative to placebo, with no apparent difference in major bleeding (RR 1.05, 95% CI 0.55 to 2.00). LMWH, when compared with inactive control, significantly reduced the incidence of symptomatic VTE (RR 0.53, 95% CI 0.38 to 0.75; no heterogeneity, Tau² = 0) with similar rates of major bleeding events (RR 1.30, 95% CI 0.75 to 2.23). In patients with multiple myeloma, LMWH was associated with a significant reduction in symptomatic VTE when compared with the vitamin K antagonist warfarin (RR 0.33, 95% CI 0.14 to 0.83), while the difference between LMWH and aspirin was not statistically significant (RR 0.51, 95% CI 0.22 to 1.17). No major bleeding was observed in the patients treated with LMWH or warfarin, and major bleeding occurred in less than 1% of those treated with aspirin. Only one study evaluated unfractionated heparin against inactive control; it found an incidence of major bleeding of 1% in both study groups but did not report on VTE. When compared with placebo, warfarin was associated with a statistically insignificant reduction of symptomatic VTE (RR 0.15, 95% CI 0.02 to 1.20). Antithrombin, evaluated in one study involving paediatric patients, had no significant effect on either VTE or major bleeding when compared with inactive control. The new oral factor Xa inhibitor apixaban was evaluated in a phase II dose-finding study that suggested a promising low rate of major bleeding (2.1% versus 3.3%) and symptomatic VTE (1.1% versus 10%) in comparison with placebo. AUTHORS' CONCLUSIONS In this update, we confirmed that primary thromboprophylaxis with LMWH significantly reduced the incidence of symptomatic VTE in ambulatory cancer patients treated with chemotherapy. In addition, the uLMWH semuloparin significantly reduced the incidence of symptomatic VTE.
However, the broad confidence intervals around the estimates for major bleeding suggest caution in the use of anticoagulation and mandate additional studies to determine the risk to benefit ratio of anticoagulants in this setting. Despite the encouraging results of this review, routine prophylaxis in ambulatory cancer patients cannot be recommended before safety issues are adequately addressed.
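
As a reminder of where estimates such as "RR 0.53 (95% CI 0.38 to 0.75)" come from, the sketch below derives a risk ratio and its log-scale confidence interval from a 2x2 table. The counts are hypothetical, chosen only to land near the pooled LMWH estimate above.

```python
# Minimal sketch: risk ratio with a log-scale 95% CI from hypothetical
# event counts in two arms.
import math

events_tx, n_tx = 18, 1608       # events / patients, LMWH arm (hypothetical)
events_ctrl, n_ctrl = 34, 1604   # events / patients, control arm (hypothetical)

rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```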

Relevance: 20.00%

Abstract:

Background: Accurate information about the prevalence of Chlamydia trachomatis is needed to assess national prevention and control measures. Methods: We systematically reviewed population-based cross-sectional studies that estimated chlamydia prevalence in European Union/European Economic Area (EU/EEA) Member States and non-European high-income countries from January 1990 to August 2012. We examined results in forest plots, explored heterogeneity using the I² statistic, and conducted random-effects meta-analysis where appropriate. Metaregression was used to examine the relationship between study characteristics and chlamydia prevalence estimates. Results: We included 25 population-based studies from 11 EU/EEA countries and 14 studies from five other high-income countries. Four EU/EEA Member States reported on nationally representative surveys of sexually experienced adults aged 18-26 years (response rates 52-71%). In women, chlamydia point prevalence estimates ranged from 3.0% to 5.3%; the pooled average of these estimates was 3.6% (95% CI 2.4, 4.8; I² 0%). In men, estimates ranged from 2.4% to 7.3% (pooled average 3.5%; 95% CI 1.9, 5.2; I² 27%). Estimates in EU/EEA Member States were statistically consistent with those in other high-income countries (I² 0% for women, 6% for men). There was statistical evidence of an association between survey response rate and estimated chlamydia prevalence: estimates were higher in surveys with lower response rates (p=0.003 in women, p=0.018 in men). Conclusions: Population-based surveys that estimate chlamydia prevalence are at risk of participation bias owing to low response rates. Estimates obtained in nationally representative samples of the general population of EU/EEA Member States are similar to estimates from other high-income countries.
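
The I² statistic used above expresses how much of the variability across studies reflects heterogeneity rather than chance. A minimal sketch of its computation from Cochran's Q follows, with hypothetical prevalence estimates and standard errors.

```python
# Minimal sketch: Cochran's Q and I^2 from hypothetical study-level
# prevalence estimates and standard errors.
import numpy as np

est = np.array([0.030, 0.045, 0.053, 0.036])  # prevalence estimates
se = np.array([0.004, 0.006, 0.007, 0.005])   # their standard errors

w = 1.0 / se**2                               # inverse-variance weights
pooled = np.sum(w * est) / np.sum(w)
q = np.sum(w * (est - pooled) ** 2)           # Cochran's Q
df = len(est) - 1
i2 = 100.0 * max(0.0, (q - df) / q)           # percent of variability beyond chance
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.0f}%")
```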

Relevance: 20.00%

Abstract:

OBJECTIVES To assess the available evidence on the effectiveness of accelerated orthodontic tooth movement through surgical and non-surgical approaches in orthodontic patients. METHODS Randomized controlled trials and controlled clinical trials were identified through electronic and hand searches (last update: March 2014). Orthognathic surgery, distraction osteogenesis, and pharmacological approaches were excluded. Risk of bias was assessed using the Cochrane risk of bias tool. RESULTS Eighteen trials involving 354 participants were included in the qualitative and quantitative synthesis. Eight trials reported on low-intensity laser, one on photobiomodulation, one on pulsed electromagnetic fields, seven on corticotomy, and one on interseptal bone reduction. Two studies on corticotomy and two on low-intensity laser, which had low or unclear risk of bias, were mathematically combined using the random-effects model. A higher canine retraction rate was evident with corticotomy during the first month of therapy (WMD=0.73; 95% CI: 0.28, 1.19; p<0.01) and with low-intensity laser (WMD=0.42 mm/month; 95% CI: 0.26, 0.57; p<0.001) over a period longer than 3 months. The quality of evidence supporting the interventions is moderate for laser therapy and low for the corticotomy intervention. CONCLUSIONS There is some evidence that low-intensity laser therapy and corticotomy are effective, whereas the evidence is weak for interseptal bone reduction and very weak for photobiomodulation and pulsed electromagnetic fields. Overall, the results should be interpreted with caution given the small number, quality, and heterogeneity of the included studies. Further research is required in this field, with additional attention to application protocols, adverse effects, and cost-benefit analysis. CLINICAL SIGNIFICANCE From the qualitative and quantitative synthesis of the studies, it can be concluded that there is some evidence that low-intensity laser therapy and corticotomy are associated with accelerated orthodontic tooth movement, while further investigation is required before routine application.

Relevance: 20.00%

Abstract:

INTRODUCTION Empirical evidence indicates that only a subsample of the studies conducted reach full-text publication, a phenomenon known as publication bias. One form of publication bias is the selectively delayed full publication of conference abstracts. The objective of this article was to examine the publication status of oral and poster-presentation abstracts included in the scientific programs of the 82nd and 83rd European Orthodontic Society (EOS) congresses, held in 2006 and 2007, and to identify factors associated with full-length publication. METHODS A systematic search of the PubMed and Google Scholar databases was performed in April 2013, using author names and keywords from the abstract titles, to locate abstract and full-article publications. Information regarding mode of presentation, type of affiliation, geographical origin, statistical results, and publication details was collected and analyzed using univariable and multivariable logistic regression. RESULTS Approximately 51 per cent of the EOS 2006 and 55 per cent of the EOS 2007 abstracts had appeared in print more than 5 years after the congresses. A mean period of 1.32 years elapsed between conference and publication date. Mode of presentation (oral or poster), use of statistical analysis, and research subject area were significant predictors of publication success. LIMITATIONS Inherent discrepancies of abstract reporting, mainly related to the presentation of preliminary results and incomplete description of methods, should be considered in analogous studies. CONCLUSIONS On average, 52.2 per cent of the abstracts presented at the two EOS conferences reached full publication. Abstracts presented orally and those including statistical analysis were more likely to be published.
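
A minimal sketch of the kind of logistic-regression model described, predicting full publication from abstract characteristics, follows. The data file and column names are hypothetical, not the study's actual variables.

```python
# Minimal sketch, assuming a hypothetical table with one row per abstract:
# a binary publication indicator and candidate predictors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("eos_abstracts.csv")  # hypothetical data file

# Multivariable model: publication status vs mode of presentation,
# use of statistical analysis, and subject area.
model = smf.logit(
    "published ~ C(presentation) + used_statistics + C(subject_area)", data=df
).fit()

print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```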

Relevance: 20.00%

Abstract:

BACKGROUND The ultrasonographic appearance of the gastrointestinal (GI) tract of equine neonates has not been completely described. OBJECTIVES To describe (1) the sonographic characteristics of the GI segments in normal, nonsedated equine neonates, (2) intra- and interobserver variation in wall thickness, and (3) the sonographic appearance of asymptomatic intussusceptions; and (4) to compare age and sonographic findings of foals with and without asymptomatic intussusceptions. ANIMALS Eighteen healthy Standardbred foals ≤5 days of age. METHODS Prospective, cross-sectional, blinded study. Gastrointestinal sonograms were performed stall-side. Intraobserver variability in wall thickness measurements was determined by calculating the coefficient of variation (CV). The Bland-Altman method was used to assess interobserver bias. Student's t-test and Fisher's exact test were used to test associations among the presence of intussusceptions, age, and selected sonographic findings. RESULTS The reference ranges (95% predictive interval) for wall thickness were 1.6-3.6 mm for the stomach, 1.9-3.2 mm for the duodenum, 1.9-3.1 mm for the jejunum, 1.3-2.2 mm for the colon, and 0.8-2.7 mm for the cecum. Intraobserver wall thickness CV ranged from 8 to 21% for the 2 observers across the 5 gastrointestinal segments. The interobserver bias for wall thickness measurements was not significant except for the stomach (0.14 mm, P < .05) and duodenum (0.29 mm, P < .05). Diagnostic images of mural blood flow could not be obtained. Asymptomatic intussusceptions were found in 10/18 neonates. No associations between sonographic variables or age and the presence of intussusceptions were found. CONCLUSIONS AND CLINICAL IMPORTANCE The sonographic characteristics of the GI tract of normal Standardbred neonates can be useful in evaluating ill foals. Asymptomatic small intestinal intussusceptions occur in normal Standardbred neonates.
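
The Bland-Altman approach mentioned above reduces to a simple computation of the mean paired difference (the bias) and its limits of agreement. The paired measurements below are hypothetical.

```python
# Minimal sketch: interobserver bias and 95% limits of agreement for
# paired wall-thickness measurements (hypothetical values, in mm).
import numpy as np

obs1 = np.array([2.8, 3.1, 2.5, 3.4, 2.9])  # observer 1
obs2 = np.array([2.6, 3.0, 2.4, 3.1, 2.7])  # observer 2

diff = obs1 - obs2
bias = diff.mean()                   # systematic interobserver difference
half_width = 1.96 * diff.std(ddof=1) # half-width of the limits of agreement
print(f"bias = {bias:.2f} mm, 95% LoA = "
      f"{bias - half_width:.2f} to {bias + half_width:.2f} mm")
```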

Relevance: 20.00%

Abstract:

BACKGROUND The Cochrane risk of bias (RoB) tool has been widely embraced by the systematic review community, but several studies have reported that its reliability is low. We aim to investigate whether training of raters, including objective and standardized instructions on how to assess risk of bias, can improve the reliability of this tool. We describe the methods that will be used in this investigation and present an intensive standardized training package for risk of bias assessment that could be used by contributors to the Cochrane Collaboration and other reviewers. METHODS/DESIGN This is a pilot study. We will first perform a systematic literature review to identify randomized clinical trials (RCTs) that will be used for risk of bias assessment. Using the identified RCTs, we will then conduct a randomized experiment in which raters will be allocated to two different training schemes: minimal training and intensive standardized training. We will calculate the chance-corrected weighted Kappa with 95% confidence intervals to quantify within- and between-group agreement for each domain of the risk of bias tool. To calculate between-group Kappa agreement, we will use risk of bias assessments from pairs of raters after resolution of disagreements. Between-group Kappa agreement will quantify the agreement between the risk of bias assessments of raters in the training groups and those of experienced raters. To compare the agreement of raters under different training conditions, we will calculate differences between Kappa values with 95% confidence intervals. DISCUSSION This study will investigate whether the reliability of the risk of bias tool can be improved by training raters with standardized instructions for risk of bias assessment. One group of inexperienced raters will receive intensive training on risk of bias assessment and the other will receive minimal training. By including a control group with minimal training, we will attempt to mimic what many review authors commonly have to do, that is, conduct risk of bias assessments in RCTs without much formal training or standardized instructions. If our results indicate that intensive standardized training improves the reliability of the RoB tool, our study is likely to help improve the quality of risk of bias assessments, which are a central component of evidence synthesis.
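
A minimal sketch of a chance-corrected weighted Kappa computation of the sort the protocol describes, using scikit-learn; the ratings below are hypothetical. The protocol's confidence intervals could, for instance, be obtained by bootstrapping pairs of ratings.

```python
# Minimal sketch: linearly weighted, chance-corrected Kappa for two raters'
# risk-of-bias judgements (hypothetical ratings: 0 = low, 1 = unclear, 2 = high).
from sklearn.metrics import cohen_kappa_score

rater_a = [0, 1, 2, 0, 1, 1, 2, 0, 0, 1]
rater_b = [0, 1, 1, 0, 2, 1, 2, 0, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"linear weighted kappa = {kappa:.2f}")
```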