946 results for Journal impact factor
Abstract:
Background: Universities worldwide are seeking objective measures for assessing their faculty members' research output, both for evaluation and to attain prestige. Despite concerns, the impact factor (IF) of the journals in which faculty publish has been widely adopted. Research objective: The study aims to explore conditions created within five countries as a result of policies requiring, or not requiring, faculty to publish in high-IF journals, and the extent to which these facilitated or hindered the development of nursing science. Design: The design was a multiple case study of Brazil, Taiwan, and Thailand (with IF policies, Group A) and the United Kingdom and the United States (no IF policies, Group B). Key informants from each country were identified to assist in subject recruitment. Methods: A questionnaire was developed for data collection. The study was approved by a human subjects review committee. Five faculty members of senior rank from each country participated. All communication occurred electronically. Findings: Group A and Group B countries differed in who used the policy and the purposes for which it was used. There were both similarities and differences across the five countries with respect to hurdles, scholarly behaviour, publishing locally versus internationally, views of their science, and steps taken to internationalize their journals. Conclusions: Among Group A countries, Taiwan seemed most successful in developing its scholarship. Group B countries have continued their scientific progress without such policies. IF policies were not necessary motivators of scholarship; factors such as the availability of qualified nurse scientists and the resource base of the country may be the critical factors supporting science development.
Abstract:
The journal impact factor is not comparable among fields of science because of systematic differences in publication and citation behaviour across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential of the journal's topic, and we employ this citation potential in the normalization of the journal impact factor. An empirical application to a set of 224 journals from four different fields shows that our normalization, using the citation potential of the journal's topic, reduces the between-group variance relative to the within-group variance in a higher proportion than the other indicators analysed.
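The proposed normalization can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `source_normalized_if`, the journal values, and the numbers are all hypothetical.

```python
def source_normalized_if(raw_if, citation_potential, reference_potential=1.0):
    """Normalize a journal impact factor by the citation potential of its topic.

    citation_potential: aggregate impact factor of the journals citing this one,
    used as a proxy for field-specific citation behaviour.
    """
    return raw_if * reference_potential / citation_potential

# Hypothetical journals from two fields with different citation densities.
math_if = source_normalized_if(raw_if=1.2, citation_potential=0.8)
biomed_if = source_normalized_if(raw_if=4.8, citation_potential=3.2)

# After normalization the two journals, incomparable on raw IF,
# land at the same field-relative value.
print(round(math_if, 2), round(biomed_if, 2))  # 1.5 1.5
```

Dividing by the citing-side citation potential is what makes this a *source* normalization: the correction is derived from the citing journals rather than from a fixed list of field categories.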
Abstract:
Objectives: Publication bias may affect the validity of evidence-based medical decisions. The aim of this study was to assess whether research outcomes affect the dissemination of clinical trial findings, in terms of publication rate, time to publication, and the impact factor of the publishing journal. Methods and Findings: All drug-evaluating clinical trials submitted to and approved by a general hospital ethics committee between 1997 and 2004 were prospectively followed to analyze their fate and publication. Published articles were identified by searching PubMed and other electronic databases. Clinical study final reports submitted to the ethics committee, final report synopses available online, and meeting abstracts were also considered as sources of study results. Study outcomes were classified as positive (when statistical significance favoring the experimental drug was achieved), negative (when no statistical significance was achieved or it favored the control drug), and descriptive (for non-controlled studies). Time to publication was defined as the time from study closure to publication. A survival analysis was performed using a Cox regression model to analyze time to publication. Journal impact factors of the identified publications were recorded. The publication rate was 48.4% (380/785). Study results were identified for 68.9% of all completed clinical trials (541/785). The publication rate was 84.9% (180/212) for studies with results classified as positive and 68.9% (128/186) for studies with results classified as negative (p < 0.001). Median time to publication was 2.09 years (95% CI 1.61-2.56) for studies with results classified as positive and 3.21 years (95% CI 2.69-3.70) for studies with results classified as negative (hazard ratio 1.99, 95% CI 1.55-2.55). No differences were found in publication impact factor between positive-result studies (median 6.308, interquartile range 3.141-28.409) and negative-result studies (median 8.266, interquartile range 4.135-17.157).
Conclusions: Clinical trials with positive outcomes have significantly higher publication rates and shorter times to publication than those with negative results. However, no differences were found in terms of journal impact factor.
Abstract:
We present a critical analysis of the generalized use of the "impact factor". Using the Kruskal-Wallis test, we show that distinct disciplines cannot be compared by impact factor without adjustment. After assigning the median journal of each discipline the value one (1.000), the impact factor of every other journal was rescaled proportionally (by the rule of three). The adjusted values were homogeneous, thus permitting comparison among distinct disciplines.
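The median adjustment described above amounts to a simple proportional rescaling. Below is a minimal sketch with hypothetical impact-factor values, not the study's data:

```python
import statistics

def median_adjusted_ifs(impact_factors):
    """Rescale impact factors within one discipline so the median journal is 1.000.

    Each journal's adjusted value is raw_if / discipline_median
    (the 'rule of three', i.e. a simple proportion).
    """
    med = statistics.median(impact_factors)
    return [round(f / med, 3) for f in impact_factors]

# Hypothetical disciplines with very different raw impact-factor levels.
ecology = [0.5, 1.0, 2.0]
medicine = [2.5, 5.0, 10.0]

# After adjustment, both disciplines share the same scale.
print(median_adjusted_ifs(ecology))   # [0.5, 1.0, 2.0]
print(median_adjusted_ifs(medicine))  # [0.5, 1.0, 2.0]
```

Because each discipline is divided by its own median, a journal's adjusted value expresses how it stands relative to the typical journal of its field, which is what makes cross-discipline comparison meaningful.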
Abstract:
Journal impact factors have become an important criterion to judge the quality of scientific publications over the years, influencing the evaluation of institutions and individual researchers worldwide. However, they are also subject to a number of criticisms. Here we point out that the calculation of a journal’s impact factor is mainly based on the date of publication of its articles in print form, despite the fact that most journals now make their articles available online before that date. We analyze 61 neuroscience journals and show that delays between online and print publication of articles increased steadily over the last decade. Importantly, such a practice varies widely among journals, as some of them have no delays, while for others this period is longer than a year. Using a modified impact factor based on online rather than print publication dates, we demonstrate that online-to-print delays can artificially raise a journal’s impact factor, and that this inflation is greater for longer publication lags. We also show that correcting the effect of publication delay on impact factors changes journal rankings based on this metric. We thus suggest that indexing of articles in citation databases and calculation of citation metrics should be based on the date of an article’s online appearance, rather than on that of its publication in print.
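The effect of online-to-print delays on the metric can be sketched as follows. This is a minimal illustration assuming the standard two-year impact-factor window; the function `impact_factor`, the articles, dates, and citation counts are hypothetical, not the study's data:

```python
from datetime import date

def impact_factor(articles, citations, year, date_key):
    """Citations received in `year` by articles dated in the two preceding
    years, divided by the number of such articles. `date_key` selects which
    date (print or online) assigns an article to a year."""
    window = {year - 1, year - 2}
    eligible = {a["id"] for a in articles if date_key(a).year in window}
    cites = sum(1 for c in citations if c["year"] == year and c["cited"] in eligible)
    return cites / len(eligible)

articles = [
    # Long online-to-print delay: online in 2011, in print only in 2013.
    {"id": "a1", "online": date(2011, 6, 1), "print": date(2013, 2, 1)},
    # Short delay: online and in print in 2013.
    {"id": "a2", "online": date(2013, 3, 1), "print": date(2013, 5, 1)},
]
# a1 had years to accumulate readership before its print date.
citations = [{"year": 2014, "cited": "a1"}] * 4 + [{"year": 2014, "cited": "a2"}]

print_if = impact_factor(articles, citations, 2014, lambda a: a["print"])
online_if = impact_factor(articles, citations, 2014, lambda a: a["online"])
print(print_if, online_if)  # 2.5 1.0
```

With print dating, the long-delayed article enters the 2014 window carrying citations it accumulated since 2011, inflating the metric; dating by online appearance removes that head start, which is the correction the abstract argues for.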
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The publication of scientific articles in journals with a high impact factor is one of the most valued parameters in a researcher's career. Although various institutions compute different indices of impact, the only official, internationally recognized index is the Impact Factor (IF) of Thomson Reuters®, published annually in the Journal Citation Reports (JCR). In this regard, it is important to note that RENHyD is currently under observation by Thomson Reuters®, having been included in a new information resource from this provider, the Emerging Sources Citation Index (ESCI), a new index within the Web of Science™ Core Collection. We believe that everyone involved with the journal (readers and researchers) should know how they can help it grow and improve its impact: the only strategy is to cite articles published in RENHyD when grounding new work published in other journals that already have an impact factor. The use of our publications is the strategy that will help the journal achieve an impact factor.