57 results for Clinical Epidemiology
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Advances in laboratory techniques have led to a rapidly increasing use of biomarkers in epidemiological studies. Biomarkers of internal dose, early biological change, susceptibility and clinical outcomes are used as proxies for investigating the interactions between external and/or endogenous agents and body components or processes. The need for improved reporting of scientific research led to influential recommendation statements such as the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) statement. The STROBE initiative, established in 2004, aimed to provide guidance on how to report observational research. Its guidelines provide a user-friendly checklist of 22 items to be reported in epidemiological studies, with items specific to the three main study designs: cohort studies, case-control studies and cross-sectional studies. The present STrengthening the Reporting of OBservational studies in Epidemiology - Molecular Epidemiology (STROBE-ME) initiative builds on the STROBE statement, implementing 9 existing items of STROBE and providing 17 additional items to the 22-item STROBE checklist. The additions relate to the use of biomarkers in epidemiological studies and concern the collection, handling and storage of biological samples; laboratory methods, validity and reliability of biomarkers; specificities of study design; and ethical considerations. The STROBE-ME recommendations are intended to complement the STROBE recommendations.
Abstract:
Much of biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalizability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover three main study designs: cohort, case-control, and cross-sectional studies. We convened a 2-day workshop in September 2004, with methodologists, researchers, and journal editors to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE Statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles. Eighteen items are common to all three study designs and four are specific for cohort, case-control, or cross-sectional studies. A detailed Explanation and Elaboration document is published separately and is freely available on the web sites of PLoS Medicine, Annals of Internal Medicine, and Epidemiology. We hope that the STROBE Statement will contribute to improving the quality of reporting of observational studies.
Abstract:
Making sense of rapidly evolving evidence on genetic associations is crucial to making genuine advances in human genomics and to the eventual integration of this information into the practice of medicine and public health. Assessment of the strengths and weaknesses of this evidence, and hence the ability to synthesize it, has been limited by inadequate reporting of results. The STrengthening the REporting of Genetic Association studies (STREGA) initiative builds on the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement and provides additions to 12 of the 22 items on the STROBE checklist. The additions concern population stratification, genotyping errors, modeling haplotype variation, Hardy-Weinberg equilibrium, replication, selection of participants, rationale for choice of genes and variants, treatment effects in studying quantitative traits, statistical methods, relatedness, reporting of descriptive and outcome data, and the volume of data, all issues that are important to consider in genetic association studies. The STREGA recommendations do not prescribe or dictate how a genetic association study should be designed, but seek to enhance the transparency of its reporting, regardless of choices made during design, conduct, or analysis.
Abstract:
OBJECTIVES Although the use of an adjudication committee (AC) for outcomes is recommended in randomized controlled trials, there are limited data on the process of adjudication. We therefore aimed to assess whether the reporting of the adjudication process in venous thromboembolism (VTE) trials meets existing quality standards and which characteristics of trials influence the use of an AC. STUDY DESIGN AND SETTING We systematically searched MEDLINE and the Cochrane Library from January 1, 2003, to June 1, 2012, for randomized controlled trials on VTE. We abstracted information about the characteristics and quality of trials and the reporting of adjudication processes. We used a stepwise backward logistic regression model to identify trial characteristics independently associated with the use of an AC. RESULTS We included 161 trials. Of these, 68.9% (111 of 161) reported the use of an AC. Overall, 99.1% (110 of 111) of trials with an AC used independent or blinded ACs, 14.4% (16 of 111) reported how the adjudication decision was reached within the AC, and 4.5% (5 of 111) reported whether the reliability of adjudication was assessed. In multivariate analyses, multicenter trials [odds ratio (OR), 8.6; 95% confidence interval (CI): 2.7, 27.8], use of a data safety monitoring board (OR, 3.7; 95% CI: 1.2, 11.6), and VTE as the primary outcome (OR, 5.7; 95% CI: 1.7, 19.4) were associated with the use of an AC. Trials without random allocation concealment (OR, 0.3; 95% CI: 0.1, 0.8) and open-label trials (OR, 0.3; 95% CI: 0.1, 1.0) were less likely to report an AC. CONCLUSION Recommended processes of adjudication are underreported and lack standardization in VTE-related clinical trials. The use of an AC varies substantially by trial characteristics.
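The odds ratios and confidence intervals reported in this abstract can be illustrated with a minimal sketch of how an OR and its Woolf (log-scale) 95% CI are computed from a 2×2 table; the counts below are hypothetical, not those of the review:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf (log-scale) 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: multicenter trials reporting an AC vs. not
or_, lo, hi = odds_ratio_ci(90, 30, 21, 20)
```

A multivariable (stepwise) logistic regression, as used in the study, adjusts such ORs for the other trial characteristics simultaneously; this unadjusted calculation shows only where a single crude OR and CI come from.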
Abstract:
OBJECTIVE To describe a novel CONsolidated Standards of Reporting Trials (CONSORT) adherence strategy implemented by the American Journal of Orthodontics and Dentofacial Orthopedics (AJO-DO) and to report its impact on the completeness of reporting of published trials. STUDY DESIGN AND SETTING The AJO-DO CONSORT adherence strategy, initiated in June 2011, involves active assessment of randomized clinical trial (RCT) reporting during the editorial process. The completeness of reporting of CONSORT items was compared between trials submitted and published during the implementation period (July 2011 to September 2013) and trials published between August 2007 and July 2009. RESULTS Of the 42 RCTs submitted (July 2011 to September 2013), 23 were considered for publication and assessed for completeness of reporting; seven of these were eventually published. For all RCTs published between 2007 and 2009 (n = 20), completeness of reporting by CONSORT item ranged from 0% to 100% (median = 40%; interquartile range = 60%). All trials published in 2011-2013 reported 33 of the 37 CONSORT (sub)items. Four CONSORT 2010 checklist items remained problematic even after implementation of the adherence strategy: changes to methods after the trial commenced (3b), changes to outcomes after the trial commenced (6b), interim analyses (7b), and trial stopping (14b), which are typically reported only when applicable. CONCLUSION Trials published following implementation of the AJO-DO CONSORT adherence strategy reported more CONSORT items completely than those published or submitted previously.
Abstract:
OBJECTIVES To compare the methodological quality of systematic reviews (SRs) published in high- and low-impact-factor (IF) Core Clinical Journals. In addition, we aimed to record the implementation of aspects of reporting, including the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram, reasons for study exclusion, and use of recommendation frameworks for interventions such as Grading of Recommendations Assessment, Development and Evaluation (GRADE). STUDY DESIGN AND SETTING We searched PubMed for systematic reviews published in Core Clinical Journals between July 1 and December 31, 2012. We evaluated methodological quality using the Assessment of Multiple Systematic Reviews (AMSTAR) tool. RESULTS Over the 6-month period, 327 interventional systematic reviews were identified, with a mean AMSTAR score of 63.3% (standard deviation, 17.1%) when converted to a percentage scale. We identified deficiencies in relation to a number of quality criteria, including delineation of excluded studies and assessment of publication bias. We found that SRs published in higher-impact journals were undertaken more rigorously, with higher percentage AMSTAR scores (per IF unit: β = 0.68%; 95% confidence interval: 0.32, 1.04; P < 0.001), a discrepancy likely to be particularly relevant when differences in IF are large. CONCLUSION The methodological quality of SRs appears to be better in higher-impact journals. The overall quality of SRs published in many Core Clinical Journals remains suboptimal.
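The per-IF-unit coefficient reported above is a regression slope. As a minimal sketch of where such a slope comes from, an ordinary least-squares slope over hypothetical journal data (the β, CI, and P value in the abstract come from the authors' own model, which this does not reproduce):

```python
def slope_per_unit(x, y):
    """Least-squares slope: expected change in y per one-unit change in x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Hypothetical journals: impact factor vs. percentage AMSTAR score
impact_factor = [2.0, 5.0, 10.0, 30.0]
amstar_pct    = [55.0, 60.0, 62.0, 75.0]
beta = slope_per_unit(impact_factor, amstar_pct)  # AMSTAR % per IF unit
```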
Abstract:
OBJECTIVES Respondent-driven sampling (RDS) is a new data collection methodology used to estimate characteristics of hard-to-reach groups, such as the HIV prevalence in drug users. Many national public health systems and international organizations rely on RDS data. However, RDS reporting quality and available reporting guidelines are inadequate. We carried out a systematic review of RDS studies and present Strengthening the Reporting of Observational Studies in Epidemiology for RDS Studies (STROBE-RDS), a checklist of essential items to present in RDS publications, justified by an explanation and elaboration document. STUDY DESIGN AND SETTING We searched the MEDLINE (1970-2013), EMBASE (1974-2013), and Global Health (1910-2013) databases to assess the number and geographical distribution of published RDS studies. STROBE-RDS was developed based on STROBE guidelines, following Guidance for Developers of Health Research Reporting Guidelines. RESULTS RDS has been used in over 460 studies from 69 countries, including the USA (151 studies), China (70), and India (32). STROBE-RDS includes modifications to 12 of the 22 items on the STROBE checklist. The two key areas that required modification concerned the selection of participants and statistical analysis of the sample. CONCLUSION STROBE-RDS seeks to enhance the transparency and utility of research using RDS. If widely adopted, STROBE-RDS should improve global infectious diseases public health decision making.
Abstract:
OBJECTIVES To investigate the frequency of interim analyses, stopping rules, and data safety and monitoring boards (DSMBs) in protocols of randomized controlled trials (RCTs); to examine these features across different reasons for trial discontinuation; and to identify discrepancies in reporting between protocols and publications. STUDY DESIGN AND SETTING We used data from a cohort of RCT protocols approved between 2000 and 2003 by six research ethics committees in Switzerland, Germany, and Canada. RESULTS Of 894 RCT protocols, 289 (32.3%) prespecified interim analyses, 153 (17.1%) stopping rules, and 257 (28.7%) DSMBs. Overall, 249 of 894 RCTs (27.9%) were prematurely discontinued, mostly for reasons such as poor recruitment, administrative reasons, or unexpected harm. Forty-six of 249 RCTs (18.4%) were discontinued for early benefit or futility; of those, 37 (80.4%) were stopped outside a formal interim analysis or stopping rule. Of 515 published RCTs, there were discrepancies between protocols and publications regarding interim analyses (21.1%), stopping rules (14.4%), and DSMBs (19.6%). CONCLUSION Two-thirds of RCT protocols did not consider interim analyses, stopping rules, or DSMBs. Most RCTs discontinued for early benefit or futility were stopped without a prespecified mechanism. When assessing trial manuscripts, journals should require access to the protocol.
Abstract:
Overwhelming evidence shows that the quality of reporting of randomised controlled trials (RCTs) is not optimal. Without transparent reporting, readers cannot judge the reliability and validity of trial findings, nor extract information for systematic reviews. Recent methodological analyses indicate that inadequate reporting and design are associated with biased estimates of treatment effects. Such systematic error is seriously damaging to RCTs, which are considered the gold standard for evaluating interventions because of their ability to minimise or avoid bias. A group of scientists and editors developed the CONSORT (Consolidated Standards of Reporting Trials) statement to improve the quality of reporting of RCTs. It was first published in 1996 and updated in 2001. The statement consists of a checklist and flow diagram that authors can use for reporting an RCT. Many leading medical journals and major international editorial groups have endorsed the CONSORT statement. The statement facilitates critical appraisal and interpretation of RCTs. During the 2001 CONSORT revision, it became clear that explanation and elaboration of the principles underlying the CONSORT statement would help investigators and others to write or appraise trial reports. A CONSORT explanation and elaboration article was published in 2001 alongside the 2001 version of the CONSORT statement. After an expert meeting in January 2007, the CONSORT statement was further revised and is published as the CONSORT 2010 Statement. This update improves the wording and clarity of the previous checklist and incorporates recommendations related to topics that have only recently received recognition, such as selective outcome reporting bias. This explanation and elaboration document, intended to enhance the use, understanding, and dissemination of the CONSORT statement, has also been extensively revised. It presents the meaning and rationale for each new and updated checklist item, providing examples of good reporting and, where possible, references to relevant empirical studies. Several examples of flow diagrams are included. The CONSORT 2010 Statement, this revised explanation and elaboration document, and the associated website (www.consort-statement.org) should be helpful resources to improve reporting of randomised trials.
Abstract:
OBJECTIVE To examine the registration of noninferiority trials, with a focus on the reporting of study design and noninferiority margins. STUDY DESIGN AND SETTING Cross-sectional study of registry records of noninferiority trials published from 2005 to 2009 and of records of noninferiority trials in the International Standard Randomized Controlled Trial Number (ISRCTN) or ClinicalTrials.gov trial registries. The main outcome was the proportion of records that reported the noninferiority design and margin. RESULTS We analyzed 87 registry records of published noninferiority trials and 149 registry records describing noninferiority trials. Thirty-five (40%) of the 87 records from published trials described the trial as a noninferiority trial; only two (2%) reported the noninferiority margin. Reporting of the noninferiority design was more frequent in the ISRCTN registry (13 of 18 records, 72%) than in ClinicalTrials.gov (22 of 69 records, 32%; P = 0.002). Among the 149 records identified in the registries, 13 (9%) reported the noninferiority margin. Only one of the industry-sponsored trials, compared with 11 of the publicly funded trials, reported the margin (P = 0.001). CONCLUSION Most registry records of noninferiority trials do not mention the noninferiority design and do not include the noninferiority margin. The registration of noninferiority trials is unsatisfactory and must be improved.
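The between-registry comparison above (13 of 18 vs. 22 of 69) can be illustrated with a simple two-proportion z-test using the counts stated in the abstract; this is only a sketch of one standard way to obtain such a P value, not necessarily the test the authors used:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test comparing proportions x1/n1 and x2/n2
    using the pooled-proportion standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided P value from the standard normal CDF
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval

# Counts from the abstract: ISRCTN 13/18 vs. ClinicalTrials.gov 22/69
z, p = two_proportion_z(13, 18, 22, 69)
```

The resulting P value is consistent with the P = 0.002 reported in the abstract.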
Abstract:
Evaluation and validation of the psychometric properties of the eight-item modified Medical Outcomes Study Social Support Survey (mMOS-SS).
Abstract:
For continuous outcomes measured using instruments with an established minimally important difference (MID), pooled estimates can be usefully reported in MID units. Approaches suggested thus far omit studies that used instruments without an established MID. We describe an approach that addresses this limitation.
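As a minimal sketch of the general idea of pooling in MID units: each study's mean difference and standard error are divided by that study's instrument MID, and the rescaled effects are combined with inverse-variance weights. The numbers are hypothetical, and this fixed-effect sketch illustrates only the rescaling step, not the authors' extension to instruments without an established MID:

```python
import math

# Each study: (mean difference, its standard error, instrument's MID).
# Hypothetical numbers for three studies using different instruments.
studies = [
    (4.0, 1.2, 8.0),
    (6.0, 2.0, 10.0),
    (1.5, 0.6, 5.0),
]

def pool_in_mid_units(studies):
    """Fixed-effect inverse-variance pooling of effects expressed in MID units."""
    weights, effects = [], []
    for md, se, mid in studies:
        eff = md / mid            # effect rescaled to MID units
        se_mid = se / mid         # SE rescales by the same factor
        weights.append(1.0 / se_mid ** 2)
        effects.append(eff)
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, se_pooled

pooled, se_pooled = pool_in_mid_units(studies)
```

A pooled value of, say, 0.4 would be read as "on average, the intervention effect is 0.4 of a minimally important difference".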
Abstract:
Meta-analysis of predictive values is usually discouraged because these values are directly affected by disease prevalence, but sensitivity and specificity sometimes show substantial heterogeneity as well. We propose a bivariate random-effects logitnormal model for the meta-analysis of the positive predictive value (PPV) and negative predictive value (NPV) of diagnostic tests.
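As a minimal sketch of the within-study level of such a model: each study contributes logit-transformed PPV and NPV together with their approximate (delta-method) variances, which the bivariate random-effects model then combines while allowing for between-study correlation. The counts below are hypothetical, and the full bivariate fit (jointly modeling the two logits with a between-study covariance matrix) would require a mixed-model routine not shown here:

```python
import math

def logit_and_var(events, total):
    """Logit-transformed proportion and its approximate variance
    (1/events + 1/non-events), the within-study inputs of a
    bivariate logit-normal meta-analysis."""
    p = events / total
    logit = math.log(p / (1 - p))
    var = 1.0 / events + 1.0 / (total - events)
    return logit, var

# Hypothetical study: 80 true positives among 100 test-positives (PPV),
# 180 true negatives among 200 test-negatives (NPV)
ppv_logit, ppv_var = logit_and_var(80, 100)
npv_logit, npv_var = logit_and_var(180, 200)
```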