Abstract:
Stress response can be considered a consequence of psychological or physiological threats to the human organism. Elevated cortisol secretion represents a biological indicator of subjective stress. The extent of subjectively experienced stress depends on individual coping strategies and self-regulation skills. Because of their experience with competitive pressure, athletes might show less pronounced biological stress responses during stressful events than non-athletes. In the present study, the short version of the Berlin Intelligence Structure Test, a paper-and-pencil intelligence test, was used as an experimental stressor. Cortisol responses of 26 female Swiss elite athletes and 26 female non-athlete controls were compared. Salivary free cortisol was measured 15 minutes before, immediately before, and immediately after psychometric testing. In both groups, a significant effect of time was found: high cortisol levels prior to testing decreased significantly over the testing session. Furthermore, athletes exhibited reliably lower cortisol levels than non-athlete controls. No significant interaction effects were observed. The overall pattern of results supports the idea that elite athletes show a less pronounced cortisol-related stress response, presumably due to more efficient coping strategies.
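The design described above, two groups (athletes vs. non-athlete controls) measured at three time points, corresponds to a 2 × 3 mixed-design ANOVA with a between-subjects factor (group) and a within-subjects factor (time). The abstract does not report the authors' analysis code; the following is a minimal sketch on simulated data, using the pingouin package, with cortisol values, group means, and effect sizes invented purely for illustration.

```python
# Minimal sketch (not the authors' code): a 2 (group) x 3 (time) mixed-design
# ANOVA on simulated salivary cortisol, mirroring the design described above.
# Group means, variability, and the decline over time are hypothetical.
import numpy as np
import pandas as pd
import pingouin as pg  # assumed available; any mixed-ANOVA routine would do

rng = np.random.default_rng(42)
n_per_group = 26
times = ["-15 min", "pre-test", "post-test"]

rows = []
for group, baseline in [("athlete", 10.0), ("control", 13.0)]:  # hypothetical nmol/L means
    for subj in range(n_per_group):
        subj_id = f"{group}_{subj}"
        for t_idx, t in enumerate(times):
            # cortisol declines over the session in both groups (hypothetical slope)
            value = baseline - 1.5 * t_idx + rng.normal(0, 2.0)
            rows.append({"subject": subj_id, "group": group, "time": t, "cortisol": value})

df = pd.DataFrame(rows)

# Mixed-design ANOVA: between factor = group, within factor = time
aov = pg.mixed_anova(data=df, dv="cortisol", within="time",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])
```

With this layout, the main effect of time corresponds to the reported decline during testing, the main effect of group to the lower cortisol levels in athletes, and the interaction term to the (non-significant) group-by-time effect.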
Abstract:
This study aims to assess the impact of continued ranibizumab treatment for neovascular age-related macular degeneration on patients from the MARINA and ANCHOR randomised clinical studies who lost ≥ 3 lines of best-corrected visual acuity (BCVA) at any time during the first year of treatment.
Abstract:
Since erythropoiesis-stimulating agents (ESAs) were licensed in 1993, more than 70 randomized controlled trials and more than 20 meta-analyses and systematic reviews of their effectiveness have been conducted. Here, we present a systematic review of the meta-analyses of trials evaluating ESAs in cancer patients.
Abstract:
Objectives: To examine the extent of multiplicity of data in trial reports and to assess the impact of multiplicity on meta-analysis results.
Design: Empirical study on a cohort of Cochrane systematic reviews.
Data sources: All Cochrane systematic reviews published from issue 3 in 2006 to issue 2 in 2007 that presented a result as a standardised mean difference (SMD). We retrieved the trial reports contributing to the first SMD result in each review and downloaded the review protocols. We used this SMD result to identify a specific outcome (the index outcome) for each meta-analysis from its protocol.
Review methods: Reviews were eligible if the SMD result was based on two to ten randomised trials and if the protocol described the outcome. We excluded reviews that presented only results of subgroup analyses. Based on the review protocols and index outcomes, two observers independently extracted the data necessary to calculate SMDs from the original trial reports for any intervention group, time point, or outcome measure compatible with the protocol. From the extracted data, we used Monte Carlo simulations to calculate all possible SMDs for every meta-analysis.
Results: We identified 19 eligible meta-analyses (including 83 trials). Published review protocols often lacked information about which data to choose. Twenty-four (29%) trials reported data for multiple intervention groups, 30 (36%) reported data for multiple time points, and 29 (35%) reported the index outcome measured on multiple scales. In 18 meta-analyses, we found multiplicity of data in at least one trial report; the median difference between the smallest and largest SMD results within a meta-analysis was 0.40 standard deviation units (range 0.04 to 0.91).
Conclusions: Multiplicity of data can affect the findings of systematic reviews and meta-analyses. To reduce the risk of bias, reviews and meta-analyses should comply with prespecified protocols that clearly identify the time points, intervention groups, and scales of interest.
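The central quantity here is the standardised mean difference, SMD = (mean₁ − mean₂) / pooled SD, which can be recomputed for every eligible combination of intervention arm, time point, and outcome scale within a trial. The sketch below illustrates the Monte Carlo idea on invented trial data (the trial values and the fixed-effect pooling are assumptions for illustration, not the review's actual data or code): in each simulation, one eligible data set is drawn per trial, the trial SMDs are pooled, and the spread of the resulting pooled SMDs is reported.

```python
# Illustrative only: Monte Carlo over the possible data choices within each
# trial, recomputing the pooled SMD each time. All trial values are invented.
import numpy as np

rng = np.random.default_rng(0)

# Each trial lists every eligible (mean1, sd1, n1, mean2, sd2, n2) tuple,
# e.g. one per compatible intervention arm, time point, or outcome scale.
trials = [
    [(12.0, 5.0, 30, 15.0, 5.5, 31), (11.0, 4.8, 30, 15.5, 5.2, 31)],
    [(50.0, 10.0, 45, 54.0, 9.5, 44)],
    [(3.1, 1.2, 20, 3.9, 1.1, 22), (2.8, 1.3, 20, 4.1, 1.0, 22)],
]

def smd(m1, s1, n1, m2, s2, n2):
    """Cohen's d with pooled SD, plus its approximate variance."""
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var

def pooled_smd(choice):
    """Inverse-variance fixed-effect pooling of one SMD per trial."""
    ds, vs = zip(*(smd(*params) for params in choice))
    weights = 1.0 / np.asarray(vs)
    return float(np.average(ds, weights=weights))

results = []
for _ in range(10_000):
    choice = [t[rng.integers(len(t))] for t in trials]  # one data set per trial
    results.append(pooled_smd(choice))

print(f"pooled SMD ranges from {min(results):.2f} to {max(results):.2f} "
      f"(spread {max(results) - min(results):.2f} SD units)")
```

The spread between the smallest and largest pooled SMD produced by such a simulation is the quantity summarised in the Results above (median 0.40 SD units across the 19 meta-analyses).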
Abstract:
The assessment of treatment effects from observational studies may be biased because patients are not randomly allocated to the experimental or control group. One way to overcome this conceptual shortcoming in the design of such studies is to use propensity scores to adjust for differences in characteristics between patients treated with the experimental and control interventions. The propensity score is defined as the probability that a patient received the experimental intervention conditional on pre-treatment characteristics at baseline. Here, we review how propensity scores are estimated and how they can help in adjusting the treatment effect for baseline imbalances. We further discuss how to evaluate adequate overlap of baseline characteristics between patient groups, provide guidelines for variable selection and model building when modelling the propensity score, and review different methods of propensity score adjustment. We conclude that propensity analyses may help in evaluating the comparability of patients in observational studies and may account for more potential confounding factors than conventional covariate adjustment approaches. However, bias due to unmeasured confounding cannot be corrected for.
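As a concrete illustration of the workflow described above (not the article's own analysis), the sketch below estimates a propensity score with logistic regression on simulated baseline covariates and then applies inverse-probability-of-treatment weighting, one of several propensity score adjustment methods. All covariates, effect sizes, and variable names are invented for the example.

```python
# Minimal sketch (assumed workflow, not from the article): propensity score by
# logistic regression, then inverse-probability-of-treatment weighting (IPTW).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Simulated baseline covariates that drive both treatment assignment and
# outcome, so the naive group comparison is confounded.
age = rng.normal(60, 10, n)
comorbidity = rng.normal(2, 1, n)
logit_treat = -8 + 0.1 * age + 0.8 * comorbidity
treated = rng.random(n) < 1 / (1 + np.exp(-logit_treat))

# Outcome: true treatment effect is -2, plus covariate effects and noise.
outcome = 0.3 * age + 1.5 * comorbidity - 2.0 * treated + rng.normal(0, 3, n)

X = np.column_stack([age, comorbidity])

# Propensity score: estimated P(treated | baseline covariates)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Inverse-probability-of-treatment weights
w = np.where(treated, 1 / ps, 1 / (1 - ps))

naive = outcome[treated].mean() - outcome[~treated].mean()
iptw = (np.average(outcome[treated], weights=w[treated])
        - np.average(outcome[~treated], weights=w[~treated]))
print(f"naive difference: {naive:.2f}, IPTW-adjusted: {iptw:.2f} (true effect: -2.0)")
```

Note that the weighting only adjusts for the covariates entered into the propensity model; as the abstract states, bias from unmeasured confounders remains uncorrected.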