Abstract:
OBJECTIVES A widespread assessment of the reporting of RCT abstracts published in dental journals is lacking. Our aim was to investigate the quality of reporting of abstracts published in leading dental specialty journals using, as a guide, the CONSORT for abstracts checklist. METHODS Electronic and supplementary hand searching were undertaken to identify RCTs published in seven dental specialty journals. The quality of abstract reporting was evaluated using a modified checklist based on the CONSORT for abstracts checklist. Descriptive statistics followed by univariate and multivariate analyses were conducted. RESULTS 228 RCT abstracts were identified. Reporting of interventions, objectives and conclusions within abstracts was adequate. Inadequately reported items included: title, participants, outcomes, random number generation, numbers randomized and effect size estimate. Randomization restrictions, allocation concealment, blinding, numbers analyzed, confidence intervals, intention-to-treat analysis, harms, registration and funding were rarely described. CONCLUSIONS The mean overall reporting quality score was suboptimal at 62.5% (95% CI: 61.9, 63.0). Significantly better abstract reporting was noted in certain specialty journals and in multicenter trials.
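As a minimal Python sketch only (the per-abstract scores below are hypothetical, since the underlying data are not given here), the kind of summary reported in the conclusions, a mean quality score with a normal-approximation 95% confidence interval, can be computed as follows:

import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(62.5, 4.0, size=228)         # hypothetical per-abstract quality scores (%)

mean = scores.mean()
sem = scores.std(ddof=1) / np.sqrt(len(scores))  # standard error of the mean
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"mean = {mean:.1f}%  95% CI: {ci_low:.1f}, {ci_high:.1f}")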
Abstract:
OBJECTIVES In dental research, multiple site observations within patients, or observations taken at various time intervals, are commonplace. These clustered observations are not independent, and statistical analysis should be amended accordingly. This study aimed to assess whether adjustment for clustering effects during statistical analysis was undertaken in five dental specialty journals. METHODS Thirty recent consecutive issues of Orthodontics (OJ), Periodontology (PJ), Endodontology (EJ), Maxillofacial (MJ) and Paediatric Dentistry (PDJ) journals were hand searched. Articles requiring adjustment for clustering effects were identified and the statistical techniques used were scrutinized. RESULTS Of 559 studies considered to have inherent clustering effects, adjustment was made in the statistical analysis in 223 (39.1%). Studies published in the Periodontology specialty accounted for clustering effects in the statistical analysis more often than articles published in the other journals (OJ vs. PJ: OR=0.21, 95% CI: 0.12, 0.37, p<0.001; MJ vs. PJ: OR=0.02, 95% CI: 0.00, 0.07, p<0.001; PDJ vs. PJ: OR=0.14, 95% CI: 0.07, 0.28, p<0.001; EJ vs. PJ: OR=0.11, 95% CI: 0.06, 0.22, p<0.001). A positive correlation was found between the prevalence of clustering effects in individual specialty journals and correct statistical handling of clustering (r=0.89). CONCLUSIONS The majority (60.9%) of the studies examined in the five dental specialty journals failed to account for clustering effects in statistical analysis where indicated, raising the possibility of inappropriately small p-values and the risk of inappropriate inferences.
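The statistical issue can be illustrated with a minimal Python sketch; the data and variable names (patient, treatment, pocket_depth) are invented for illustration and are not the study's data. A model that treats within-patient site observations as independent is compared with a GEE that uses the patient as the clustering unit:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_patients, sites = 50, 4
patient = np.repeat(np.arange(n_patients), sites)
treatment = np.repeat(rng.integers(0, 2, n_patients), sites)        # allocated per patient
patient_effect = np.repeat(rng.normal(0, 1.0, n_patients), sites)   # shared within-patient effect
pocket_depth = 3.0 + 0.4 * treatment + patient_effect + rng.normal(0, 0.5, patient.size)
df = pd.DataFrame({"patient": patient, "treatment": treatment, "pocket_depth": pocket_depth})

naive = smf.ols("pocket_depth ~ treatment", data=df).fit()          # ignores clustering
gee = smf.gee("pocket_depth ~ treatment", groups="patient", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()        # adjusts for clustering
print(naive.pvalues["treatment"], gee.pvalues["treatment"])         # naive p-value is typically smaller

Mixed-effects models (e.g. smf.mixedlm with a random intercept per patient) are another common way to handle the same structure.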
Abstract:
The aims of this study were to assess and compare the methodological quality of Cochrane and non-Cochrane systematic reviews (SRs) published in leading orthodontic journals and the Cochrane Database of Systematic Reviews (CDSR) using AMSTAR, and to compare the prevalence of meta-analysis in both review types. A literature search, consisting of hand-searching five major orthodontic journals [American Journal of Orthodontics and Dentofacial Orthopedics, Angle Orthodontist, European Journal of Orthodontics, Journal of Orthodontics and Orthodontics and Craniofacial Research (February 2002 to July 2011)] and the Cochrane Database of Systematic Reviews (January 2000 to July 2011), was undertaken to identify SRs. Methodological quality of the included reviews was gauged using the AMSTAR tool, which comprises 11 key methodological criteria with a score of 0 or 1 given for each criterion. A cumulative grade was given for each paper overall (0-11); an overall score of 4 or less represented poor methodological quality, 5-8 was considered fair and 9 or greater was deemed good. In total, 109 SRs were identified in the five major journals and on the CDSR. Of these, 26 (23.9%) were in the CDSR. The mean overall AMSTAR score was 6.2, with 21.1% of reviews satisfying 9 or more of the 11 criteria; a similar prevalence of poor reviews (22%) was also noted. Multiple linear regression indicated that reviews published in the CDSR (P < 0.01) and reviews involving meta-analysis (β = 0.50; 95% confidence interval: 0.72, 2.07; P < 0.001) showed greater concordance with AMSTAR.
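For illustration only, the grading bands described above translate directly into a small Python helper; the function name is ours, not part of AMSTAR:

def amstar_grade(score: int) -> str:
    """Map a cumulative AMSTAR score (0-11) to the quality band used above."""
    if not 0 <= score <= 11:
        raise ValueError("AMSTAR score must lie between 0 and 11")
    if score <= 4:
        return "poor"
    if score <= 8:
        return "fair"
    return "good"

print(amstar_grade(6))   # 'fair' -- the band containing the mean overall score of 6.2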
Abstract:
by I. M. Jost
Abstract:
RATIONALE In biomedical journals, authors sometimes use the standard error of the mean (SEM) for data description, which has been called inappropriate or incorrect. OBJECTIVE To assess the frequency of incorrect use of the SEM in articles in three selected cardiovascular journals. METHODS AND RESULTS All original journal articles published in 2012 in Cardiovascular Research, Circulation: Heart Failure and Circulation Research were assessed by two assessors for inappropriate use of the SEM when providing descriptive information of empirical data. We also assessed whether the authors stated in the methods section that the SEM would be used for data description. Of 441 articles included in this survey, 64% (282 articles) contained at least one instance of incorrect use of the SEM, with two journals having a prevalence above 70% and "Circulation: Heart Failure" having the lowest value (27%). In 81% of articles with incorrect use of the SEM, the authors had explicitly stated that they used the SEM for data description, and in 89% SEM bars were also used instead of 95% confidence intervals. Basic science studies had a 7.4-fold higher level of inappropriate SEM use (74%) than clinical studies (10%). LIMITATIONS The selection of the three cardiovascular journals was based on a subjective initial impression of observing inappropriate SEM use. The observed results are not representative of all cardiovascular journals. CONCLUSION In three selected cardiovascular journals we found a high level of inappropriate SEM use, together with explicit methods statements that the SEM would be used for data description, especially in basic science studies. To improve this situation, these and other journals should provide clear instructions to authors on how to report descriptive information of empirical data.
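A minimal Python sketch (hypothetical measurements) of the distinction at issue: the SD describes the spread of the data, whereas the SEM = SD / sqrt(n) describes the precision of the sample mean and therefore shrinks as the sample grows, which is why SEM bars understate variability when used for data description.

import numpy as np

rng = np.random.default_rng(2)
values = rng.normal(100.0, 15.0, size=30)   # hypothetical measurements

sd = values.std(ddof=1)
sem = sd / np.sqrt(len(values))
print(f"SD  = {sd:.1f}  (spread of the data; appropriate for description)")
print(f"SEM = {sem:.1f}  (precision of the mean; e.g. 95% CI = mean +/- 1.96*SEM)")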
Abstract:
PURPOSE Confidence intervals (CIs) are integral to the interpretation of the precision and clinical relevance of research findings. The aim of this study was to ascertain the frequency of reporting of CIs in leading prosthodontic and dental implantology journals and to explore possible factors associated with improved reporting. MATERIALS AND METHODS Thirty issues of nine journals in prosthodontics and implant dentistry were accessed, covering the years 2005 to 2012: The Journal of Prosthetic Dentistry, Journal of Oral Rehabilitation, The International Journal of Prosthodontics, The International Journal of Periodontics & Restorative Dentistry, Clinical Oral Implants Research, Clinical Implant Dentistry and Related Research, The International Journal of Oral & Maxillofacial Implants, Implant Dentistry, and Journal of Dentistry. Articles were screened and the reporting of CIs and P values recorded. Other information, including study design, region of authorship, involvement of methodologists, and ethical approval, was also obtained. Univariable and multivariable logistic regression analyses were used to identify characteristics associated with reporting of CIs. RESULTS Interrater agreement for the data extraction performed was excellent (kappa = 0.88; 95% CI: 0.87 to 0.89). CI reporting was limited, with a mean reporting rate of 14% across journals. CI reporting was associated with journal type, study design, and involvement of a methodologist or statistician. CONCLUSIONS Reporting of CIs in implant dentistry and prosthodontic journals requires improvement. Improved reporting will aid appraisal of the clinical relevance of research findings by providing a range of values within which the effect size lies, thus giving the end user the opportunity to interpret the results in relation to clinical practice.
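The interrater agreement statistic quoted above is Cohen's kappa; as a minimal sketch with hypothetical screening decisions (1 = CI reported, 0 = not reported):

from sklearn.metrics import cohen_kappa_score

rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]    # hypothetical decisions, rater A
rater_b = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]    # hypothetical decisions, rater B
print(cohen_kappa_score(rater_a, rater_b))  # chance-corrected agreement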
Abstract:
OBJECTIVES To compare the methodological quality of systematic reviews (SRs) published in high- and low-impact factor (IF) Core Clinical Journals. In addition, we aimed to record the implementation of aspects of reporting, including Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) flow diagram, reasons for study exclusion, and use of recommendations for interventions such as Grading of Recommendations Assessment, Development and Evaluation (GRADE). STUDY DESIGN AND SETTING We searched PubMed for systematic reviews published in Core Clinical Journals between July 1 and December 31, 2012. We evaluated the methodological quality using the Assessment of Multiple Systematic Reviews (AMSTAR) tool. RESULTS Over the 6-month period, 327 interventional systematic reviews were identified with a mean AMSTAR score of 63.3% (standard deviation, 17.1%), when converted to a percentage scale. We identified deficiencies in relation to a number of quality criteria including delineation of excluded studies and assessment of publication bias. We found that SRs published in higher impact journals were undertaken more rigorously with higher percentage AMSTAR scores (per IF unit: β = 0.68%; 95% confidence interval: 0.32, 1.04; P < 0.001), a discrepancy likely to be particularly relevant when differences in IF are large. CONCLUSION Methodological quality of SRs appears to be better in higher impact journals. The overall quality of SRs published in many Core Clinical Journals remains suboptimal.
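A minimal Python sketch of the analysis pattern described in the results, regressing percentage AMSTAR scores on journal impact factor with ordinary least squares; the data and the effect size below are simulated, not the study's:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
impact_factor = rng.uniform(1.0, 30.0, size=327)
amstar_pct = 100 * np.clip(5 + 0.07 * impact_factor + rng.normal(0, 1.8, 327), 0, 11) / 11

df = pd.DataFrame({"impact_factor": impact_factor, "amstar_pct": amstar_pct})
fit = smf.ols("amstar_pct ~ impact_factor", data=df).fit()
print(fit.params["impact_factor"])          # estimated change in % AMSTAR score per IF unit
print(fit.conf_int().loc["impact_factor"])  # its 95% confidence interval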