4 results for VIRREY CEVALLOS
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Objectives: Appropriate reporting is central to the application of findings from research to clinical practice. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations consist of a checklist of 22 items that provide guidance on the reporting of cohort, case-control and cross-sectional studies, in order to facilitate critical appraisal and interpretation of results. STROBE was published in October 2007 in several journals including The Lancet, BMJ, Annals of Internal Medicine and PLoS Medicine. Within the framework of the revision of the STROBE recommendations, the authors examined the context and circumstances in which the STROBE statement was used in the past.
Design: The authors searched the Web of Science database in August 2010 for articles which cited STROBE and examined a random sample of 100 articles using a standardised, piloted data extraction form. The use of STROBE in observational studies and systematic reviews (including meta-analyses) was classified as appropriate or inappropriate. The use of STROBE to guide the reporting of observational studies was considered appropriate. Inappropriate uses included the use of STROBE as a tool to assess the methodological quality of studies or as a guideline on how to design and conduct studies.
Results: The authors identified 640 articles that cited STROBE. In the random sample of 100 articles, about half were observational studies (32%) or systematic reviews (19%). Comments, editorials and letters accounted for 15%, methodological articles for 8%, and recommendations and narrative reviews for 26% of articles. Of the 32 observational studies, 26 (81%) made appropriate use of STROBE, and three uses (10%) were considered inappropriate. Among 19 systematic reviews, 10 (53%) used STROBE inappropriately as a tool to assess study quality.
Conclusions: The STROBE reporting recommendations are frequently used inappropriately in systematic reviews and meta-analyses as an instrument to assess the methodological quality of observational studies.
Abstract:
OBJECTIVES: To compare noninferiority margins defined in study protocols and trial registry records with the margins reported in subsequent publications.
STUDY DESIGN AND SETTING: Comparison of protocols of noninferiority trials submitted to ethics committees in Switzerland and The Netherlands between 2001 and 2005 with the corresponding publications and registry records. We searched MEDLINE via PubMed, the Cochrane Controlled Trials Register (Cochrane Library issue 01/2012) and Google Scholar in September 2013 to identify published reports, and the International Clinical Trials Registry Platform of the World Health Organization in March 2013 to identify registry records. Two readers recorded the noninferiority margin and other data using a standardized data-abstraction form.
RESULTS: The margin was identical in study protocol and publication in 43 (80%) of 54 protocol-publication pairs. In the remaining pairs, reporting was inconsistent (five pairs, 9%), or the noninferiority margin was either not reported in the publication (five pairs, 9%) or not defined in the study protocol (one pair). The confidence interval or exact P-value required to judge whether the result was compatible with noninferior, inferior or superior efficacy was reported in 43 (80%) publications. Complete and consistent reporting of both the noninferiority margin and the confidence interval (or exact P-value) was present in 39 (72%) protocol-publication pairs. Twenty-nine trials (54%) were registered in trial registries, but only one registry record included the noninferiority margin.
CONCLUSION: The reporting of noninferiority margins was incomplete and inconsistent with study protocols in a substantial proportion of published trials, and margins were rarely reported in trial registries.
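As a brief illustration of the judgment the abstract refers to: a new treatment is usually declared noninferior when the entire confidence interval for the difference in efficacy (new minus reference) lies above the negative of the prespecified margin. The sketch below uses invented counts, an assumed absolute margin of 0.10 and a simple Wald interval; none of these numbers come from the reviewed trials.

```python
# Hypothetical sketch of a noninferiority judgment from a confidence interval
# and a prespecified absolute margin; all numbers are invented for illustration.
from math import sqrt

def wald_ci_diff(x_new, n_new, x_ref, n_ref, z=1.96):
    """Approximate 95% Wald CI for the difference in success proportions (new - reference)."""
    p_new, p_ref = x_new / n_new, x_ref / n_ref
    diff = p_new - p_ref
    se = sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    return diff - z * se, diff + z * se

margin = 0.10                         # assumed prespecified noninferiority margin (absolute difference)
lower, upper = wald_ci_diff(166, 200, 168, 200)

# The new treatment is judged noninferior only if the whole CI lies above -margin.
if lower > -margin:
    print(f"noninferior: 95% CI ({lower:.3f}, {upper:.3f}) lies above {-margin}")
else:
    print(f"noninferiority not shown: 95% CI ({lower:.3f}, {upper:.3f}) extends below {-margin}")
```

This is why the abstract treats a missing margin or a missing confidence interval (or exact P-value) as incomplete reporting: without both, the comparison above cannot be reproduced by the reader.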
Abstract:
BACKGROUND/AIMS: Several countries are working to adapt clinical trial regulations so that the approval process is aligned with the level of risk for trial participants. The optimal framework for categorizing clinical trials according to risk remains unclear, however. In January 2014, Switzerland became the first European country to adopt a risk-based categorization procedure. We assessed how accurately and consistently clinical trials are categorized using two different approaches: an approach based on the criteria set forth in the new law (concept) or an intuitive approach (ad hoc).
METHODS: This was a randomized controlled trial with a method-comparison study nested in each arm. We used clinical trial protocols approved by eight Swiss ethics committees between 2010 and 2011. Protocols were randomly assigned to be categorized into one of three risk categories using either the concept or the ad hoc approach. Each protocol was independently categorized by the trial's sponsor, a group of experts and the approving ethics committee. The primary outcome was the difference in categorization agreement between the expert group and sponsors across arms. Linear weighted kappa was used to quantify agreement, with the difference between kappas as the primary effect measure.
RESULTS: We included 142 of 231 protocols in the final analysis (concept = 78; ad hoc = 64). Raw agreement between the expert group and sponsors was 0.74 in the concept arm and 0.78 in the ad hoc arm. Chance-corrected agreement was higher in the ad hoc arm (kappa: 0.34; 95% confidence interval = 0.10-0.58) than in the concept arm (0.27; 0.06-0.50), but the difference was not significant (p = 0.67).
LIMITATIONS: The main limitation was the large number of protocols excluded from the analysis, mostly because they did not meet the new law's definition of a clinical trial.
CONCLUSION: A structured risk-categorization approach was not better than an ad hoc approach. Laws introducing risk-based approaches should provide guidelines, examples and templates to ensure correct application.
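As a note on the agreement statistic used above: linear weighted kappa penalizes disagreements in proportion to how many ordered risk categories apart two ratings fall, after correcting for chance agreement. The sketch below uses invented ratings and scikit-learn's cohen_kappa_score; the coding of the categories and the data are assumptions for illustration, not the trial's data.

```python
# Minimal sketch of linear weighted kappa for two raters assigning protocols
# to three ordered risk categories; the ratings below are invented.
from sklearn.metrics import cohen_kappa_score

# Risk categories coded 1 (lowest risk) to 3 (highest risk), one entry per protocol.
expert_group = [1, 2, 2, 3, 1, 2, 3, 1, 2, 3]
sponsor      = [1, 2, 3, 3, 1, 1, 3, 2, 2, 3]

raw_agreement = sum(e == s for e, s in zip(expert_group, sponsor)) / len(sponsor)
kappa = cohen_kappa_score(expert_group, sponsor, weights="linear")

print(f"raw agreement:         {raw_agreement:.2f}")
print(f"linear weighted kappa: {kappa:.2f}")
# The trial's primary effect measure was the difference between the kappas
# obtained this way in the concept and ad hoc arms.
```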