937 results for Polish Impact Factor


Relevance:

100.00%

Publisher:

Abstract:

The present contribution explores the impact of the QUALIS metric system for academic evaluation, implemented by CAPES (Coordination for the Improvement of Higher Education Personnel), upon Brazilian Zoological research. The QUALIS system is based on the grouping and ranking of scientific journals according to their Impact Factor (IF). We examined two main points implied by this system, namely: 1) its reliability as a guideline for authors; 2) whether Zoology possesses the same publication profile as Botany and Oceanography, the three fields of knowledge grouped by CAPES under the subarea "BOZ" for purposes of evaluation. Additionally, we tested CAPES' recent suggestion that the area of Ecology would represent a fourth field of research compatible with the former three. Our results indicate that this system of classification is inappropriate as a guideline for publication improvement, with approximately one third of the journals changing their strata between years. We also demonstrate that the citation profile of Zoology is distinct from those of Botany and Oceanography. Finally, we show that Ecology has an IF that is significantly different from those of Botany, Oceanography, and Zoology, and that grouping these fields together would be particularly detrimental to Zoology. We conclude that the use of only one parameter of analysis for the stratification of journals, i.e., the Impact Factor calculated for a comparatively small number of journals, fails to evaluate accurately the pattern of publication present in Zoology, Botany, and Oceanography. While such a simplified procedure might appeal to our sense of objectivity, it dismisses any real attempt to evaluate with clarity the merit embedded in at least three very distinct aspects of scientific practice, namely: productivity, quality, and specificity.

Relevance:

100.00%

Publisher:

Abstract:

Background: Universities worldwide are seeking objective measures for the assessment of their faculties' research output, both to evaluate them and to attain prestige. Despite concerns, the impact factors (IF) of journals where faculty publish have been adopted. Research objective: The study aims to explore conditions created within five countries as a result of policies requiring or not requiring faculty to publish in high-IF journals, and the extent to which these facilitated or hindered the development of nursing science. Design: The design was a multiple case study of Brazil, Taiwan, and Thailand (with IF policies, Group A), and the United Kingdom and the United States (no IF policies, Group B). Key informants from each country were identified to assist in subject recruitment. Methods: A questionnaire was developed for data collection. The study was approved by a human subject review committee. Five faculty members of senior rank from each country participated. All communication occurred electronically. Findings: Group A and Group B countries differed on who used the policy and the purposes for which it was used. There were both similarities and differences across the five countries with respect to hurdles, scholarly behaviour, publishing locally vs. internationally, views of their science, and steps taken to internationalize their journals. Conclusions: Among Group A countries, Taiwan seemed most successful in developing its scholarship. Group B countries have continued their scientific progress without such policies. IF policies were not necessary motivators of scholarship; factors such as qualified nurse scientists and the resource base in the country may be critical in supporting science development.

Relevance:

100.00%

Publisher:

Abstract:

Objectives: Publication bias may affect the validity of evidence-based medical decisions. The aim of this study is to assess whether research outcomes affect the dissemination of clinical trial findings, in terms of publication rate, time to publication, and the impact factor of the publishing journal. Methods and Findings: All drug-evaluating clinical trials submitted to and approved by a general hospital ethics committee between 1997 and 2004 were prospectively followed to analyze their fate and publication. Published articles were identified by searching PubMed and other electronic databases. Clinical study final reports submitted to the ethics committee, final report synopses available online, and meeting abstracts were also considered as sources of study results. Study outcomes were classified as positive (when statistical significance favoring the experimental drug was achieved), negative (when no statistical significance was achieved or it favored the control drug), and descriptive (for non-controlled studies). Time to publication was defined as the time from study closure to publication. A survival analysis was performed using a Cox regression model to analyze time to publication. Journal impact factors of identified publications were recorded. The publication rate was 48.4% (380/785). Study results were identified for 68.9% of all completed clinical trials (541/785). The publication rate was 84.9% (180/212) for studies with results classified as positive and 68.9% (128/186) for studies with results classified as negative (p<0.001). Median time to publication was 2.09 years (95% CI 1.61-2.56) for studies with results classified as positive and 3.21 years (95% CI 2.69-3.70) for studies with results classified as negative (hazard ratio 1.99, 95% CI 1.55-2.55). No differences were found in publication impact factor between positive-result studies (median 6.308, interquartile range 3.141-28.409) and negative-result studies (median 8.266, interquartile range 4.135-17.157).
Conclusions: Clinical trials with positive outcomes have significantly higher rates and shorter times to publication than those with negative results. However, no differences have been found in terms of impact factor.
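The publication rates reported in this abstract are simple proportions and can be reproduced directly; the sketch below uses only the trial counts given in the abstract itself (the helper name `publication_rate` is ours, not the authors'):

```python
def publication_rate(published, total):
    """Percentage of trials in a group that reached journal publication,
    rounded to one decimal place as in the abstract."""
    return round(100 * published / total, 1)

# Counts taken from the abstract above
overall = publication_rate(380, 785)       # all completed trials -> 48.4
with_results = publication_rate(541, 785)  # trials with identifiable results -> 68.9
positive = publication_rate(180, 212)      # positive-outcome trials -> 84.9
```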

Relevance:

100.00%

Publisher:

Abstract:

We present a critical analysis of the generalized use of the "impact factor". By means of the Kruskal-Wallis test, we show that it is not possible to compare distinct disciplines using the impact factor without adjustment. After assigning the median journal of each discipline a value of one (1.000), the adjusted impact factor of every other journal was calculated by simple proportion (the rule of three). The adjusted values were homogeneous, thus permitting comparison among distinct disciplines.
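The median-anchoring adjustment described above can be sketched in a few lines of Python. This is a minimal illustration of the rule-of-three rescaling, assuming the adjustment is applied within one discipline; all journal names and impact factor values are hypothetical:

```python
from statistics import median

def adjust_impact_factors(ifs):
    """Rescale impact factors so the median journal scores 1.000.

    Each adjusted value follows the rule of three:
    raw_if : median_if = adjusted : 1.000, i.e. adjusted = raw_if / median_if.
    """
    m = median(ifs.values())
    return {journal: round(raw / m, 3) for journal, raw in ifs.items()}

# Hypothetical impact factors for journals within one discipline
raw = {"J. Alpha": 0.8, "J. Beta": 1.6, "J. Gamma": 3.2}
adjusted = adjust_impact_factors(raw)
# The median journal (J. Beta) is rescaled to 1.000; the others
# become multiples of the discipline's median, comparable across fields.
```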

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

The journal impact factor is not comparable among fields of science because of systematic differences in publication and citation behaviour across disciplines. In this work, a source normalization of the journal impact factor is proposed. We use the aggregate impact factor of the citing journals as a measure of the citation potential of the journal's topic, and we employ this citation potential to normalize the journal impact factor. An empirical application to a set of 224 journals from four different fields shows that our normalization, using the citation potential of the journal's topic, reduces the between-group variance relative to the within-group variance by a higher proportion than the other indicators analysed.
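The idea of source normalization can be sketched as follows. This is an illustrative simplification, not the authors' exact indicator: here the citation potential is taken as the mean impact factor of the citing journals, and all figures are hypothetical:

```python
def source_normalized_if(journal_if, citing_journal_ifs):
    """Divide a journal's impact factor by the citation potential of its
    topic, here approximated as the mean impact factor of the journals
    that cite it (a citing-side, i.e. source, normalization)."""
    citation_potential = sum(citing_journal_ifs) / len(citing_journal_ifs)
    return journal_if / citation_potential

# A mathematics journal: low raw IF, but cited from a low-IF environment
math_nif = source_normalized_if(1.2, [0.9, 1.1, 1.0])
# A biomedical journal: high raw IF, but cited from a high-IF environment
bio_nif = source_normalized_if(6.0, [4.5, 5.5, 5.0])
# Both normalize to roughly the same value, making the two fields comparable.
```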