82 results for "Return of results"
Abstract:
Making sense of rapidly evolving evidence on genetic associations is crucial to making genuine advances in human genomics and the eventual integration of this information into the practice of medicine and public health. Assessment of the strengths and weaknesses of this evidence, and hence the ability to synthesize it, has been limited by inadequate reporting of results. The STrengthening the REporting of Genetic Association studies (STREGA) initiative builds on the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) Statement and provides additions to 12 of the 22 items on the STROBE checklist. The additions concern population stratification, genotyping errors, modelling haplotype variation, Hardy-Weinberg equilibrium, replication, selection of participants, rationale for choice of genes and variants, treatment effects in studying quantitative traits, statistical methods, relatedness, reporting of descriptive and outcome data, and issues of data volume that are important to consider in genetic association studies. The STREGA recommendations do not prescribe or dictate how a genetic association study should be designed but seek to enhance the transparency of its reporting, regardless of choices made during design, conduct, or analysis.
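For illustration only (STREGA itself prescribes reporting, not a particular test): a minimal sketch of the standard one-degree-of-freedom chi-square check for Hardy-Weinberg equilibrium on biallelic genotype counts, using made-up numbers.

```python
# Minimal sketch of a chi-square test for Hardy-Weinberg equilibrium
# on biallelic genotype counts (illustrative; not prescribed by STREGA).

def hwe_chi_square(n_aa, n_ab, n_bb):
    """Return the 1-df chi-square statistic for departure from HWE."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # allele frequency of allele A
    expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) * (1 - p)]
    observed = [n_aa, n_ab, n_bb]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical genotype counts; 3.84 is the 5% critical value for 1 df.
stat = hwe_chi_square(180, 95, 25)
print(f"chi-square = {stat:.2f}; departure from HWE: {stat > 3.84}")
```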
Abstract:
Cell death is essential for a plethora of physiological processes, and its deregulation characterizes numerous human diseases. Thus, the in-depth investigation of cell death and its mechanisms constitutes a formidable challenge for fundamental and applied biomedical research, and has tremendous implications for the development of novel therapeutic strategies. It is, therefore, of utmost importance to standardize the experimental procedures that identify dying and dead cells in cell cultures and/or in tissues, from model organisms and/or humans, in healthy and/or pathological scenarios. Thus far, dozens of methods have been proposed to quantify cell death-related parameters. However, no guidelines exist regarding their use and interpretation, and nobody has thoroughly annotated the experimental settings for which each of these techniques is most appropriate. Here, we provide a nonexhaustive comparison of methods to detect cell death with apoptotic or nonapoptotic morphologies, together with their advantages and pitfalls. These guidelines are intended for investigators who study cell death, as well as for reviewers who need to constructively critique scientific reports that deal with cellular demise. Given the difficulties in determining the exact number of cells that have passed the point of no return of the signaling cascades leading to cell death, we emphasize the importance of performing multiple, methodologically unrelated assays to quantify dying and dead cells.
Abstract:
High-density spatial and temporal sampling of EEG data enhances the quality of results of electrophysiological experiments. Because EEG sources typically produce widespread electric fields (see Chapter 3) and operate at frequencies well below the sampling rate, increasing the number of electrodes and time samples will not necessarily increase the number of observed processes, but mainly increases the accuracy of the representation of these processes. This is notably the case when inverse solutions are computed. As a consequence, increasing the sampling in space and time increases the redundancy of the data (in space, because electrodes are correlated due to volume conduction, and in time, because neighboring time points are correlated), while the degrees of freedom of the data change only little. This has to be taken into account when statistical inferences are to be made from the data. However, in many ERP studies, the intrinsic correlation structure of the data has been disregarded. Often, some electrodes or groups of electrodes are a priori selected as the analysis entity and considered as repeated (within-subject) measures that are analyzed using standard univariate statistics. The increased spatial resolution obtained with more electrodes is thus poorly represented by the resulting statistics. In addition, the assumptions made (e.g., in terms of what constitutes a repeated measure) are not supported by what we know about the properties of EEG data. From the point of view of physics (see Chapter 3), the natural “atomic” analysis entity of EEG and ERP data is the scalp electric field.
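To make the field-as-analysis-entity idea concrete, here is a minimal sketch, assuming a synthetic electrodes-by-samples array, of Global Field Power (GFP), the spatial standard deviation across all electrodes at each time point and a common scalar summary of the momentary scalp field.

```python
import numpy as np

# Sketch: summarize the momentary scalp electric field with Global Field
# Power (GFP), the spatial standard deviation across all electrodes at
# each time point, instead of analyzing a priori selected electrodes.
rng = np.random.default_rng(0)
n_electrodes, n_samples = 64, 500                  # hypothetical montage/epoch
eeg = rng.normal(size=(n_electrodes, n_samples))   # stand-in for real data

mean_across_electrodes = eeg.mean(axis=0)          # average reference
gfp = np.sqrt(((eeg - mean_across_electrodes) ** 2).mean(axis=0))

print(gfp.shape)   # (500,): one field-strength value per time point
```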
Abstract:
OBJECTIVE: To review trial design issues related to control groups. DESIGN: Review of the literature with specific reference to critical care trials. MAIN RESULTS AND CONCLUSIONS: Performing randomized controlled trials in the critical care setting presents specific problems: studies include patients with rapidly lethal conditions, the majority of intensive care patients suffer from syndromes rather than from well-definable diseases, the severity of such syndromes cannot be precisely assessed, and the treatment consists of interacting therapies. Interactions between physiology, pathophysiology, and therapies are at best marginally understood and may have a major impact on study design and interpretation of results. Selection of the right control group is crucial for the interpretation and clinical implementation of results. Studies comparing new interventions with current ones, or comparing different levels of current treatments, face the problem of having to define "usual care." Usual care controls without any constraints typically include substantial heterogeneity. Constraints on the usual therapy may help to reduce some variation. Inclusion of unrestricted usual care groups may help to enhance safety. Practice misalignment is a novel problem in which patients receive a treatment that is the direct opposite of usual care; it occurs when fixed-dose interventions are used in situations where care is normally titrated. Practice misalignment should be considered in the design and interpretation of studies on titrated therapies.
Abstract:
OBJECTIVES To identify factors associated with discrepant outcome reporting in randomized drug trials. STUDY DESIGN AND SETTING Cohort study of protocols submitted to a Swiss ethics committee 1988-1998: 227 protocols and amendments were compared with 333 matching articles published during 1990-2008. Discrepant reporting was defined as addition, omission, or reclassification of outcomes. RESULTS Overall, 870 of 2,966 unique outcomes were reported discrepantly (29.3%). Among protocol-defined primary outcomes, 6.9% were not reported (19 of 274), whereas 10.4% of reported outcomes (30 of 288) were not defined in the protocol. Corresponding percentages for secondary outcomes were 19.0% (284 of 1,495) and 14.1% (334 of 2,375). Discrepant reporting was more likely if P values were <0.05 compared with P ≥ 0.05 [adjusted odds ratio (aOR): 1.38; 95% confidence interval (CI): 1.07, 1.78], more likely for efficacy compared with harm outcomes (aOR: 2.99; 95% CI: 2.08, 4.30) and more likely for composite than for single outcomes (aOR: 1.48; 95% CI: 1.00, 2.20). Cardiology (aOR: 2.34; 95% CI: 1.44, 3.79) and infectious diseases (aOR: 1.77; 95% CI: 1.01, 3.13) had more discrepancies compared with all specialties combined. CONCLUSION Discrepant reporting was associated with statistical significance of results, type of outcome, and specialty area. Trial protocols should be made freely available, and the publications should describe and justify any changes made to protocol-defined outcomes.
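As a hedged aside, the study's aORs come from adjusted models whose covariates are not listed here; this sketch only shows how an unadjusted odds ratio and its Woolf-type 95% confidence interval would be computed from a 2x2 table of discrepant versus concordant outcomes, with hypothetical counts.

```python
import math

# Sketch: unadjusted odds ratio with a Woolf (log-based) 95% CI from a
# 2x2 table; the paper's adjusted ORs come from multivariable models.
def odds_ratio_ci(a, b, c, d):
    """a/b: discrepant/concordant with P<0.05; c/d: same with P>=0.05."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts for illustration only (not from the study).
print("OR %.2f (95%% CI %.2f, %.2f)" % odds_ratio_ci(300, 700, 250, 800))
```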
Abstract:
We evaluated three molecular methods for identification of Francisella strains: pulsed-field gel electrophoresis (PFGE), amplified fragment length polymorphism (AFLP) analysis, and 16S rRNA gene sequencing. The analysis was performed with 54 Francisella tularensis subsp. holarctica, 5 F. tularensis subsp. tularensis, 2 F. tularensis subsp. novicida, and 1 F. philomiragia strains. On the basis of the combined results obtained with the restriction enzymes XhoI and BamHI, PFGE revealed seven pulsotypes, which allowed us to discriminate the strains to the subspecies level and even among some isolates of F. tularensis subsp. holarctica. The AFLP technique produced some degree of discrimination among F. tularensis subsp. holarctica strains (one primary cluster with three major subclusters and minor variations within subclusters) when EcoRI-C and MseI-A, EcoRI-T and MseI-T, EcoRI-A and MseI-C, and EcoRI-0 and MseI-CA were used as primers. The degree of similarity among the strains was about 94%. The percent similarities of the AFLP profiles of this subspecies compared to those of F. tularensis subsp. tularensis, F. tularensis subsp. novicida, and F. philomiragia were less than 90%, about 72%, and less than 24%, respectively, thus permitting easy differentiation of this subspecies. 16S rRNA gene sequencing revealed 100% similarity among all F. tularensis subsp. holarctica isolates compared in this study. These results suggest that, although limited genetic heterogeneity among F. tularensis subsp. holarctica isolates was observed, PFGE and AFLP analysis appear to be promising tools for the diagnosis of infections caused by different subspecies of F. tularensis and suitable techniques for the differentiation of individual strains.
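For context, fingerprint similarity percentages such as those above are typically derived from a band-matching coefficient; the abstract does not state which one was used, so the following sketch assumes the commonly used Dice coefficient on binary band-presence profiles with hypothetical data.

```python
# Sketch: Dice similarity between two binary AFLP band-presence profiles.
# The coefficient actually used in the study is not stated in the abstract;
# Dice is a common choice for fingerprint band matching.
def dice_similarity(profile_a, profile_b):
    shared = sum(1 for x, y in zip(profile_a, profile_b) if x and y)
    return 2 * shared / (sum(profile_a) + sum(profile_b))

# Hypothetical presence/absence calls for a handful of band positions.
strain_1 = [1, 1, 0, 1, 1, 0, 1, 1]
strain_2 = [1, 1, 0, 1, 0, 0, 1, 1]
print(f"similarity = {dice_similarity(strain_1, strain_2):.0%}")
```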
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org) and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches, and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations in the DEM, avoids over-channelization, and so produces more realistic extents. The choice of datasets and algorithms is left to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is strictly needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower-quality DEMs with 25 m resolution.
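To illustrate the kind of spreading rule involved, here is a minimal sketch of Holmgren's (1994) multiple-flow-direction weighting on a 3x3 DEM window; the exponent x controls flow divergence, and Flow-R's published modification (raising the central cell by a height dh to damp small DEM artifacts) is omitted here for brevity.

```python
import math

# Sketch of Holmgren's (1994) multiple-flow-direction rule: flow from the
# central cell is split among downslope neighbors in proportion to
# tan(slope)^x. Flow-R's modified version additionally raises the central
# cell by a parameter dh; that refinement is omitted here.
def holmgren_weights(window, cellsize=10.0, x=4.0):
    """window: 3x3 list of elevations; returns flow proportions per neighbor."""
    zc = window[1][1]
    weights = {}
    for i in range(3):
        for j in range(3):
            if (i, j) == (1, 1):
                continue
            dist = cellsize * (math.sqrt(2) if i != 1 and j != 1 else 1.0)
            tan_slope = (zc - window[i][j]) / dist
            if tan_slope > 0:                    # downslope neighbors only
                weights[(i, j)] = tan_slope ** x
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()} if total else {}

dem_window = [[102.0, 101.0, 103.0],
              [100.5, 100.0, 102.5],
              [ 99.0,  98.5, 101.0]]
print(holmgren_weights(dem_window))   # most flow goes to the steepest cells
```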
Abstract:
Background: The design of Virtual Patients (VPs) is essential. So far, no validated evaluation instruments for VP design have been published. Summary of work: We examined three sources of validity evidence for an instrument, to be filled out by students, aimed at measuring the quality of VPs with a special emphasis on fostering clinical reasoning: (1) Content was examined based on the theory of clinical reasoning and an international VP expert team. (2) Response process was explored in think-aloud pilot studies with students and through content analysis of the free-text questions accompanying each item of the instrument. (3) Internal structure was assessed by confirmatory factor analysis (CFA) using 2547 student evaluations, and reliability was examined using generalizability analysis. Summary of results: Content validity was supported by the theory underlying Gruppen and Frohna's clinical reasoning model, on which the instrument is based, and by an international VP expert team. The pilot study and the analysis of free-text comments supported the validity of the instrument. The CFA indicated that a three-factor model comprising 6 items showed a good fit with the data. Alpha coefficients per factor were 0.74-0.82. The findings of the generalizability studies indicated that 40-200 student responses are needed to obtain reliable data on one VP. Conclusions: The described instrument has the potential to provide faculty with reliable and valid information about VP design. Take-home messages: We present a short instrument that can help in evaluating the design of VPs.
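For reference, per-factor alpha coefficients such as those reported follow the standard Cronbach formula; here is a minimal sketch on a synthetic students-by-items ratings matrix (the actual item data are not shown in the abstract).

```python
import numpy as np

# Sketch: Cronbach's alpha for one factor of an instrument, computed from
# a (students x items) matrix of ratings. Data here are synthetic.
def cronbach_alpha(ratings):
    k = ratings.shape[1]                          # items in the factor
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
base = rng.normal(size=(200, 1))                  # shared trait per student
items = base + 0.8 * rng.normal(size=(200, 2))    # two correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```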
Abstract:
Gender-fair language, such as word pairs that include both women and men, has a substantial impact on mental representations, as a large body of studies has shown. When the masculine form is used exclusively as a generic, women are mentally represented significantly less than men; word pairs, by contrast, lead to a higher cognitive inclusion of women. Surprisingly little research has been conducted on how the perception of professional groups is affected by gender-fair language. Providing evidence from an Italian-Austrian cross-cultural study with over 400 participants, we argue that gender-fair language affects the perception of professional groups in terms of perceived gender-typicality, the number of women and men assumed for a profession, social status, and average income. Results hint at a pervasive trade-off: on the one hand, gender-fair language seems to shift mental representations in favor of women, and professions are perceived as rather gender-neutral; on the other hand, professional groups are assigned lower salary and social status when word pairs are used. Implications of the results are discussed.
Abstract:
The prevalence of gastric mucosal lesions in the thoroughbred racehorse has been the subject of numerous studies. The frequency of gastric ulcer disease in adult horses of other sport disciplines is less well investigated. Recent data show that gastric mucosal lesions in non-thoroughbred racehorses occur considerably more frequently than previously thought. Prevalences of up to 93% in endurance horses, up to 87% in standardbreds, 40% in western horses, 63% in show-jumping horses, 71% in broodmares, and 53% in leisure horses have been reported. Since the introduction of gastroscopy in equine medicine in the 1990s, numerous scoring systems to describe the number, severity, and localisation of the lesions have been used. Unfortunately, no standardized scoring system is generally accepted to date. A direct comparison of results from different studies is therefore difficult. Comparison and interpretation of data are further hampered by the heterogeneity of the study populations, which consist of horses of different age groups, breeds, and exercise intensities.
Abstract:
Computational network analysis provides new methods to analyze the human connectome. Brain structural networks can be characterized by global and local metrics that have recently given promising insights for the diagnosis and further understanding of neurological, psychiatric, and neurodegenerative disorders. In order to ensure the validity of results in clinical settings, the precision and repeatability of the networks and the associated metrics must be evaluated. In the present study, nineteen healthy subjects underwent two consecutive measurements, enabling us to test the reproducibility of the brain network and its global and local metrics. As it is known that network topology depends on network density, the effects of setting a common density threshold for all networks were also assessed. Results showed good to excellent repeatability for global metrics, whereas repeatability for local metrics was more variable, and some metrics were found to have locally poor repeatability. Moreover, between-subject differences were slightly inflated when the density was not fixed. At the global level, these findings confirm previous results on the validity of global network metrics as clinical biomarkers. However, the new results in our work indicate that the remaining variability at the local level, as well as the effect of methodological characteristics on network topology, should be considered in the analysis of brain structural networks and especially in network comparisons.
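As an illustration of the density issue, here is a minimal sketch, using a synthetic connectivity matrix and networkx metrics, of thresholding a weighted structural network to a fixed edge density before computing global metrics.

```python
import numpy as np
import networkx as nx

# Sketch: fix the edge density of a structural network before comparing
# global metrics, since network topology depends on density. The
# connectivity matrix here is synthetic.
def threshold_to_density(conn, density=0.2):
    n = conn.shape[0]
    iu = np.triu_indices(n, k=1)                 # upper-triangle edges
    k = int(round(density * len(iu[0])))         # number of edges to keep
    cutoff = np.sort(conn[iu])[::-1][k - 1]      # k-th strongest weight
    adj = (conn >= cutoff) & ~np.eye(n, dtype=bool)
    return nx.from_numpy_array(adj.astype(int))

rng = np.random.default_rng(2)
w = rng.random((90, 90))
conn = (w + w.T) / 2                             # symmetric connectivity

g = threshold_to_density(conn, density=0.2)
print(f"density = {nx.density(g):.2f}")
print(f"global efficiency = {nx.global_efficiency(g):.2f}")
print(f"average clustering = {nx.average_clustering(g):.2f}")
```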
Abstract:
The destruction of tropical forests continues to accelerate at an alarming rate, contributing an important fraction of overall greenhouse gas emissions. In recent years, much hope has been vested in the emerging REDD+ framework under the UN Framework Convention on Climate Change (UNFCCC), which aims at creating an international incentive system to reduce emissions from deforestation and forest degradation. This paper argues that in the absence of an international consensus on the design of results-based payments, “bottom-up” initiatives should take the lead and explore new avenues. It suggests that a call for tender for REDD+ credits might assist both in leveraging private investments and in spending scarce public funds in a cost-efficient manner. The paper discusses the pros and cons of results-based approaches, provides an overview of the goals and principles that govern public procurement, and discusses their relevance for the purchase of REDD+ credits, in particular within the ambit of the European Union.