326 results for "rankings"
Abstract:
The speed at which new scientific papers are published has increased dramatically, while tracking the most recent high-impact publications has become increasingly cumbersome. In order to support learners and researchers in retrieving relevant articles and identifying the most central researchers within a domain, we propose a novel 2-mode multilayered graph derived from Cohesion Network Analysis (CNA). The resulting extended CNA graph integrates both authors and papers, as well as three principal link types: coauthorship, co-citation, and semantic similarity among the contents of the papers. Our rankings do not rely on the number of published documents, but on their global impact based on links between authors, citations, and semantic relatedness to similar articles. As a preliminary validation, we built a network based on the 2013 LAK dataset in order to reveal the most central authors within the emerging Learning Analytics domain.
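The 2-mode graph described above can be pictured with a minimal sketch. All node names, link types and weights below are invented, and the weighted-degree centrality at the end is only a crude stand-in for the impact scoring the authors actually compute.

```python
from collections import defaultdict

# Typed, weighted adjacency list for a 2-mode graph mixing
# author nodes and paper nodes (all data here is made up).
edges = defaultdict(list)  # node -> [(neighbour, link_type, weight)]

def add_edge(u, v, link_type, weight=1.0):
    edges[u].append((v, link_type, weight))
    edges[v].append((u, link_type, weight))

add_edge("author:Smith", "author:Jones", "coauthorship")
add_edge("paper:P1", "paper:P2", "co-citation")
add_edge("paper:P1", "paper:P3", "semantic-similarity", 0.82)
add_edge("author:Smith", "paper:P1", "authorship")  # the 2-mode link

# Weighted degree as a crude proxy for a node's centrality.
centrality = {n: sum(w for _, _, w in nbrs) for n, nbrs in edges.items()}
print(max(centrality, key=centrality.get))
```

Because the edge list carries a link type on every entry, each of the three layers can also be traversed in isolation before scores are combined.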
Abstract:
How can applications be deployed on the cloud to achieve maximum performance? This question is challenging to address given the wide variety of cloud Virtual Machines (VMs) with different performance capabilities. The research reported in this paper addresses this question by proposing a six-step benchmarking methodology in which a user provides a set of weights that indicate how important memory, local communication, computation and storage related operations are to an application. The user can provide either four abstract weights or eight fine-grained weights, based on their knowledge of the application. The weights, along with benchmarking data collected from the cloud, are used to generate two rankings: one based only on the performance of the VMs, and another that takes both performance and cost into account. The rankings are validated on three case study applications using two validation techniques. The case studies on a set of experimental VMs highlight that maximum performance can be achieved with the three top-ranked VMs, and that maximum performance in a cost-effective manner is achieved by at least one of the top three ranked VMs produced by the methodology.
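The two-ranking step can be sketched as follows. The VM names, benchmark scores and hourly prices are invented; in the actual methodology the raw benchmark values would first be normalised per metric before the user's weights are applied.

```python
# Abstract importance weights over the four attribute groups
# (hypothetical values chosen by a user who cares most about
# memory and computation).
weights = {"memory": 0.4, "local_comm": 0.1, "computation": 0.4, "storage": 0.1}

# Illustrative normalised benchmark scores (higher = better) and
# hourly cost per VM -- all numbers are made up.
vms = {
    "vm.small":  {"memory": 0.5, "local_comm": 0.6, "computation": 0.4, "storage": 0.7, "cost": 0.05},
    "vm.medium": {"memory": 0.7, "local_comm": 0.7, "computation": 0.7, "storage": 0.6, "cost": 0.12},
    "vm.large":  {"memory": 0.9, "local_comm": 0.8, "computation": 0.9, "storage": 0.8, "cost": 0.30},
}

def performance(vm):
    # Weighted sum of the benchmark scores.
    return sum(weights[k] * vm[k] for k in weights)

# Ranking 1: performance only. Ranking 2: performance per unit cost.
perf_rank = sorted(vms, key=lambda n: performance(vms[n]), reverse=True)
value_rank = sorted(vms, key=lambda n: performance(vms[n]) / vms[n]["cost"], reverse=True)
print(perf_rank)
print(value_rank)
```

With these toy numbers the largest VM tops the performance ranking while the cheapest tops the cost-aware one, which is exactly the tension the two rankings are meant to expose.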
Abstract:
Aiming to compare the recruitment and selection policies in the expatriation programmes of Brazilian companies, considering the number of expatriate professionals, this research focused on studying the theories related to the topic, international human resource management, as well as examining in depth the policies and practices of Brazilian companies that run expatriation programmes. To this end, a descriptive study was conducted, based on a cross-sectional survey of the characteristics of the population. The population of this research was thus limited to the Rankings of Brazilian Transnationals for 2011 and 2012, compiled by Fundação Dom Cabral (FDC). Of the 52 companies that make up the ranking in question, 25 took part in the research, which was carried out through on-site interviews. The information collected was processed with the aid of qualitative analysis software (Atlas TI), and the findings show that Brazilian companies not only have recruitment and selection policies in their expatriation programmes, but are also at different stages in applying those policies.
Abstract:
Journal impact factors have become an important criterion to judge the quality of scientific publications over the years, influencing the evaluation of institutions and individual researchers worldwide. However, they are also subject to a number of criticisms. Here we point out that the calculation of a journal’s impact factor is mainly based on the date of publication of its articles in print form, despite the fact that most journals now make their articles available online before that date. We analyze 61 neuroscience journals and show that delays between online and print publication of articles increased steadily over the last decade. Importantly, such a practice varies widely among journals, as some of them have no delays, while for others this period is longer than a year. Using a modified impact factor based on online rather than print publication dates, we demonstrate that online-to-print delays can artificially raise a journal’s impact factor, and that this inflation is greater for longer publication lags. We also show that correcting the effect of publication delay on impact factors changes journal rankings based on this metric. We thus suggest that indexing of articles in citation databases and calculation of citation metrics should be based on the date of an article’s online appearance, rather than on that of its publication in print.
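The correction the authors propose amounts to recomputing the standard two-year impact factor with online rather than print dates. A toy sketch with invented numbers (not the paper's data) shows how a long online-to-print lag inflates the print-based figure:

```python
# Hypothetical journal: each tuple is
# (online_year, print_year, citations_received_in_2015).
articles = [
    (2012, 2014, 12),  # long online-to-print delay: cited early and often
    (2013, 2014, 6),
    (2014, 2014, 4),
]

def impact_factor(items, census_year, date_index):
    # Two-year impact factor: citations in census_year to items dated
    # in the two preceding years, divided by the number of such items.
    window = [a for a in items if a[date_index] in (census_year - 1, census_year - 2)]
    return sum(a[2] for a in window) / len(window)

print_if = impact_factor(articles, 2015, date_index=1)   # all three items count
online_if = impact_factor(articles, 2015, date_index=0)  # the 2012-online item drops out
print(print_if, online_if)
```

Under print dating the heavily cited early-online article stays inside the citation window, raising the figure; dating by online appearance removes that inflation, which is the effect the abstract describes.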
Abstract:
An effective strategy is critical for the successful development of e-Government. The leading nations in the e-Government rankings include Sweden, Norway, Denmark and Finland. Their leading role makes them interesting to study when looking for the reasons behind successful e-Government. The purpose of this research paper is to describe the e-Government development strategies of the Nordic countries, which rank highly on the international stage, and in particular to study the foci of these strategies. The approach is a document study: the e-Government development strategies of Sweden, Denmark, Norway and Finland were examined using an inductive qualitative content analysis. The results show that the major focus of Nordic e-Government strategies is on public sector reforms. Other focus areas include economic reforms and, to a lesser extent, e-Democracy efforts. Sweden, Finland and Norway have set ambitious policy goals in order to achieve global leadership in e-Government development. In response to the question posed by this paper's title, we can say that Nordic e-Government strategies, except for Norway's, focus more on reforming public sector services than on economic reforms; e-Democracy reforms are hardly focused on at all. Practical implications: public sector policy makers can relate their policy foci to those of some of the more successful e-Government countries in the world. Research implications/originality: beyond its findings, this paper also provides a means of identifying the actual foci of a country's e-Government policy.
Abstract:
Recommendation systems aim to help users make decisions more efficiently. The most widely used method in recommendation systems is collaborative filtering, in which a critical step is to analyze a user's preferences and recommend products or services based on similarity with other users' ratings. However, collaborative filtering is less usable when facing the "cold start" problem, i.e. when few ratings or comments have been given to products or services. To tackle this problem, we propose an improved method that combines collaborative filtering and data classification. We use hotel recommendation data to test the proposed method. The accuracy of the recommendation is determined by the rankings. Evaluations of the accuracy of the Top-3 and Top-10 recommendation lists are conducted using 10-fold cross-validation and ROC curves. The results show that, under the cold start condition, the Top-3 hotel recommendation list produced by the combined method outperforms the Top-10 list in most cases.
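As background, the similarity step at the heart of collaborative filtering can be sketched in a few lines. The ratings below are invented, and this plain user-based variant deliberately omits the classification component the paper combines it with.

```python
from math import sqrt

# Hypothetical user-hotel ratings on a 1-5 scale.
ratings = {
    "alice": {"hotel_a": 5, "hotel_b": 3},
    "bob":   {"hotel_a": 4, "hotel_b": 2, "hotel_c": 5},
    "carol": {"hotel_a": 1, "hotel_b": 5, "hotel_c": 2},
}

def cosine(u, v):
    # Cosine similarity over the items both users have rated.
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = sqrt(sum(u[i] ** 2 for i in common)) * sqrt(sum(v[i] ** 2 for i in common))
    return num / den

def predict(user, item):
    # Similarity-weighted average of other users' ratings for the item.
    num = den = 0.0
    for other, r in ratings.items():
        if other != user and item in r:
            s = cosine(ratings[user], ratings[other])
            num += s * r[item]
            den += s
    return num / den if den else None

print(round(predict("alice", "hotel_c"), 2))
```

Alice's tastes track Bob's far more than Carol's, so the prediction for hotel_c lands close to Bob's rating; in a cold-start situation `predict` would have nothing to weight, which is where the paper's classification step takes over.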
Abstract:
Only those aggregate sources which have been sampled or tested within the last ten years are listed. This listing additionally ranks sources in accordance with a skid resistance classification as defined herein for aggregates used in asphalt construction. The rankings are based on the ledges used in the past for asphalt aggregates. Upon request, new sources or different combinations of beds within an existing source will be evaluated as to their skid resistance classification. This ranking refers only to the skid resistant properties and does not waive the normal quality requirements for the particular type of aggregate indicated in the contract documents.
Abstract:
The purpose of this study is to explore the link between decentralization and the impact of natural disasters through empirical analysis. It addresses the importance of the role of local government in disaster response through different means of decentralization. By studying data available for 50 countries, it develops knowledge of the role of national government in setting policy that allows flexibility and decision making at a local level, and of how this devolution of power influences the outcome of disasters. The study uses Aaron Schneider's definition and rankings of decentralization, the EM-DAT database to identify the number of people affected by disasters on average per year, as well as World Bank indicators and the Human Development Index (HDI), to model the role of local decentralization in mitigating disasters. With a multivariate regression it examines the number of affected people as explained by fiscal, administrative and political decentralization, government expenses, percentage of urbanization, total population, population density, the HDI and the overall Logistics Performance Indicator (LPI). The main results are that total population, the overall LPI and fiscal decentralization are all significant in relation to the number of people affected by disasters for the countries and period studied. These findings have implications for government policy, indicating that fiscal decentralization, by allowing local governments to control a larger proportion of a country's revenues and expenditures, plays a role in reducing the number of people affected by disasters. This can be explained by the fact that local governments understand their own needs better in both disaster prevention and response, which helps in taking the proper decisions to mitigate the number of people affected in a disaster. The reduced involvement of national government might also play a role in shortening reaction times when facing a disaster.
The main conclusion of this study is that fiscal control by local governments can help reduce the number of people affected by disasters.
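The modelling step can be illustrated with a minimal ordinary-least-squares sketch. The two predictors and all numbers are invented toy data (the study itself regresses on many more variables), and the solver is a plain normal-equations implementation, not the statistical software a real analysis would use.

```python
# Minimal OLS sketch: solve the normal equations X'X b = X'y
# with Gauss-Jordan elimination, in pure Python.

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def solve(a, b):
    # Gauss-Jordan elimination with partial pivoting on [a | b].
    n = len(a)
    m = [row[:] + [b[i][0]] for i, row in enumerate(a)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(n):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Columns: intercept, fiscal decentralisation score, population (millions).
X = [[1, 0.2, 10], [1, 0.5, 12], [1, 0.7, 30], [1, 0.9, 8], [1, 0.4, 50]]
# People affected, generated here from an exactly linear relation so the
# fit recovers the coefficients [100, -500, 50].
y = [[500], [450], [1250], [50], [2400]]

beta = solve(matmul(transpose(X), X), matmul(transpose(X), y))
print(beta)
```

The negative coefficient on the fiscal-decentralisation column mirrors the study's finding: holding population constant, more fiscal decentralisation is associated with fewer people affected.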
Abstract:
Undergraduate project presented to Universidade Fernando Pessoa as part of the requirements for the licentiate degree in Criminology.
Abstract:
This text is part of a study of the new environment in which universities operate, one that puts them in crisis and pushes them to change. The review was conducted from an organisational and structural perspective. This article outlines the main features currently putting universities in crisis (globalisation, commodification, massification, competition and rankings, new competitors, the creation of common spaces or convergence processes, and the trends and requirements that international bodies impose on them). Throughout these pages we argue that experts, reports and institutions increasingly demand that higher education institutions move towards collaboration with one another, creating and working in inter-university networks that allow them to face the crisis successfully, strengthening and complementing each other.
Abstract:
As climate change continues to impact socio-ecological systems, tools that assist conservation managers to understand vulnerability and target adaptations are essential. Quantitative assessments of vulnerability are rare because available frameworks are complex and lack guidance for dealing with data limitations and integrating across scales and disciplines. This paper describes a semi-quantitative method for assessing vulnerability to climate change that integrates socio-ecological factors to address management objectives and support decision-making. The method applies a framework first adopted by the Intergovernmental Panel on Climate Change and uses a structured 10-step process. The scores for each framework element are normalized and multiplied to produce a vulnerability score and then the assessed components are ranked from high to low vulnerability. Sensitivity analyses determine which indicators most influence the analysis and the resultant decision-making process so data quality for these indicators can be reviewed to increase robustness. Prioritisation of components for conservation considers other economic, social and cultural values with vulnerability rankings to target actions that reduce vulnerability to climate change by decreasing exposure or sensitivity and/or increasing adaptive capacity. This framework provides practical decision-support and has been applied to marine ecosystems and fisheries, with two case applications provided as examples: (1) food security in Pacific Island nations under climate-driven fish declines, and (2) fisheries in the Gulf of Carpentaria, northern Australia. The step-wise process outlined here is broadly applicable and can be undertaken with minimal resources using existing data, thereby having great potential to inform adaptive natural resource management in diverse locations.
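The scoring step described above (normalise each framework element, multiply, rank from high to low) can be sketched as follows. The components, scores and the inversion of adaptive capacity are illustrative assumptions, not values from the paper's case studies.

```python
# Hypothetical assessed components with raw scores on a 0-10 scale.
components = {
    "reef_fishery":  {"exposure": 8, "sensitivity": 6, "adaptive_capacity": 2},
    "coastal_crops": {"exposure": 5, "sensitivity": 4, "adaptive_capacity": 6},
    "tourism":       {"exposure": 3, "sensitivity": 7, "adaptive_capacity": 8},
}

def normalise(x, lo=0, hi=10):
    # Rescale a raw score into [0, 1].
    return (x - lo) / (hi - lo)

def vulnerability(c):
    # Multiply normalised elements; higher adaptive capacity lowers
    # vulnerability, so it enters inverted (an assumption made here).
    return (normalise(c["exposure"])
            * normalise(c["sensitivity"])
            * (1 - normalise(c["adaptive_capacity"])))

# Rank components from high to low vulnerability.
ranked = sorted(components, key=lambda n: vulnerability(components[n]), reverse=True)
print(ranked)
```

The multiplicative combination means a component scores as highly vulnerable only when it is simultaneously exposed, sensitive and short on adaptive capacity, which is the intent of the framework's three-element design.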
Abstract:
Background: Gene expression studies are a prerequisite for understanding the biological function of genes. Because of its high sensitivity and ease of use, quantitative PCR (qPCR) has become the gold standard for gene expression quantification. To normalise qPCR measurements between samples, the most prominent technique is the use of stably expressed endogenous control genes, the so-called reference genes. However, recent studies show there is no universal reference gene for all biological questions. Roses are important ornamental plants for which there has been no evaluation of useful reference genes for gene expression studies. Results: We used three different algorithms (BestKeeper, geNorm and NormFinder) to validate the expression stability of nine candidate reference genes in different rose tissues from three different genotypes of Rosa hybrida and in leaves treated with various stress factors. The candidate genes comprised the classical "housekeeping genes" (Actin, EF-1α, GAPDH, Tubulin and Ubiquitin) and genes shown to be stably expressed in studies in Arabidopsis (PP2A, SAND, TIP and UBC). The programs identified no single gene that showed stable expression under all of the conditions tested, and the individual rankings of the genes differed between the algorithms. Nevertheless, the new candidate genes, specifically PP2A and UBC, were ranked higher than the traditional reference genes. In general, Tubulin showed the most variable expression and should be avoided as a reference gene. Conclusions: Reference genes evaluated as suitable in experiments with Arabidopsis thaliana were stably expressed in roses under various experimental conditions. In most cases, these genes outperformed conventional reference genes such as EF-1α and Tubulin. We identified PP2A, SAND and UBC as suitable reference genes, which in different combinations may be used for normalisation in expression analyses via qPCR for different rose tissues and stress treatments.
However, the vast genetic variation found within the genus Rosa, including differences in ploidy levels, might also influence the expression stability of reference genes, so future research should also consider different genotypes and ploidy levels.
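As a rough intuition for how such stability rankings work (a simplification, not the actual BestKeeper, geNorm or NormFinder algorithms): rank candidates by the variability of their expression across samples, lower variation meaning a more stable reference. All expression values below are invented.

```python
from statistics import mean, stdev

# Hypothetical expression measurements across six samples per gene.
expression = {
    "PP2A":    [20.1, 20.3, 20.2, 20.0, 20.2, 20.1],
    "UBC":     [22.0, 22.4, 22.1, 22.2, 22.3, 22.1],
    "Tubulin": [18.5, 21.0, 19.2, 22.3, 18.9, 20.7],
}

def cv(values):
    # Coefficient of variation: spread relative to the mean.
    return stdev(values) / mean(values)

# Most stable candidate first.
ranking = sorted(expression, key=lambda g: cv(expression[g]))
print(ranking)
```

With these toy numbers Tubulin's large sample-to-sample swings push it to the bottom of the ranking, echoing the abstract's conclusion that it should be avoided as a reference gene.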
Abstract:
1. A first contribution of our work is to propose an original theoretical, analytical and conceptual framework for approaching the notion of publication quality in the humanities and social sciences (HSS) and in communication sciences in a way that is both holistic and dynamic, insofar as quality is the object of multiple descriptions and judgements issued by a diversity of stakeholders, within and outside academia. To do so, we consider quality in its different constitutive dimensions (holistic approach) while situating it within ongoing trends in scientific publishing (dynamic approach) and taking into account quality as it is prescribed, desired and put into practice by the various stakeholders (researchers and prescribing bodies, at the political and managerial levels). By systematically crossing these three approaches (the multidimensional approach, the relation to prescriptions and wishes, and the study of ongoing trends), it becomes possible to assess the impact, on the quality of the very activity of publishing and on that of the different generic and specific types of published objects, of the various trends in scientific publishing, i.e. trends towards massification, internationalisation, "exoterisation" (opening up to the outside world, beyond peers), "managerialisation" (the use of publications in the management of research and researchers, in particular in evaluation settings), commercialisation and "onlining" (posting online, on the Internet), as well as of the managerial and political prescriptions that initiate, stimulate or extend these trends to varying degrees.
2. Applying this threefold approach to the HSS and, more specifically, to the case of communication sciences, we show how most of the trends discussed here, and the political and managerial prescriptions associated with them, end up valuing chiefly, when research and researchers are evaluated, the publication of a large number of articles in leading international scholarly journals, addressed first and foremost to peers, while devaluing publications that are open to more local audiences, written in the vernacular, or devoted to solving societal problems. In particular, under the trend towards the "managerialisation" of publications, the article in a leading international scholarly journal, together with the citations it receives from peers alone, is set up as a first-rank performance indicator, thereby "fixing" researchers' research and publication practices. This "fixing" is all the more marked where bibliometric indicators are integrated, at the national level, into performance-based public research funding schemes, and where, at the international level, indicators play a dominant role in establishing university rankings and benchmarks of national and regional research systems. At the same time, political prescriptions are also issued, mainly at the European level, with a view to implementing, within the European Research Area and, to a lesser extent, the European Higher Education Area, a knowledge economy that is globally competitive and, more specifically, a "mode 2" of knowledge production; these prescriptions stress the importance of better valorising the results of interdisciplinary and cooperative research among extra-academic stakeholders.
3. The result is a paradoxical relation between the trend towards the exoterisation of research and publications and the prescriptions for the managerialisation of publications, as well as between the prescriptions that respectively underpin them. Yet the survey we conducted among the members of three international scholarly societies in communication sciences shows how thoroughly researchers in this discipline have by now internalised the quality criteria promoted by the political and managerial prescriptions supporting the establishment of a new "publication culture", at the crossroads of the trends towards the massification, internationalisation and managerialisation of publications. Nevertheless, in-depth interviews with communication researchers active in French- and Dutch-speaking Belgium reveal that they hold a fundamentally ambivalent attitude towards the "publish or perish" culture and towards prescriptions that over-value leading international scholarly journals in the evaluation of research and researchers. On the one hand, the researchers we interviewed consider that the new publication culture plays a beneficial role in the professionalisation of communication sciences and in the development of a genuinely scientific culture in the discipline; accordingly, most of them develop strategies aimed at aligning their publication practices with the prescriptions. On the other hand, several respondents nonetheless regret the reductive character of the over-valuation of leading international scholarly journals in evaluation, and wish evaluators would take a greater diversity of publication types into account. In order to reconcile "prescribed quality" and "desired quality" in the quality of their actual publishing activity and of the objects actually published ("real quality"), these researchers therefore sometimes "tinker" with the prescriptions. Moreover, most respondents (more so in the Fédération Wallonie-Bruxelles than in Flanders, where public research funding is already partly based on bibliometric and journal-level indicators) regret the lack of explicitness in the formulation of the prescriptions, which regularly take the form of indirect and/or implicit "scripts" rather than norms and rules in the strict sense, as well as the absence of a minimum quantitative threshold to be reached.
4. It therefore seems to us, from a more normative standpoint, that the systematic deposit of the various types of publication produced by HSS and communication researchers in institutional digital repositories (Green Open Access) would help resolve the paradox of the prescriptions concerning "prescribed quality", as well as the ambivalence of researchers' perceptions concerning "desired quality". Indeed, depositing publications in institutional repositories opens up unprecedented opportunities to renew the scholarly conversation that forms around published objects within the argumentative community (Kommunikationsgemeinschaft) of peers, notably through open peer review and the possibility of commenting ad libitum on publications disseminated in Open Access, but also by making research results easily accessible and reusable by extra-academic stakeholders. The opportunities linked to depositing publications in (Green) Open Access repositories, in terms of both their epistemic and their pragmatic quality, will be all the more fruitful where deposit in institutional repositories is combined with the researcher's use of suitable generic or dedicated participatory Web tools (wikis, blogs, micro-blogs, social networks, social bookmarking and bibliographic list-sharing tools). Moreover, digital repositories now function as "transparency tools" capable of giving greater visibility to diversified research outputs and publication types. In the evaluation of research and researchers, recourse to institutional repositories (provided a mandate requires the deposit of all works produced by the institution's researchers) would allow evaluators to base their judgement on a wider and more representative range of publication types and forms of communication in the HSS and in communication sciences. Furthermore, thanks to Open Access dissemination, combined with the use of a variety of participatory Web tools, it becomes more feasible to subject the various types of publication archived and published in open access to performance indicators that are themselves diversified (bibliometric, but also "webometric" and "altmetric"), based on articles rather than journals and better suited to the diversity of their impacts, both within and beyond the circle of peers.
5. Hence, (Green) Open Access ultimately appears to us to hold considerable potential for integrating HSS and communication research and researchers into the construction, beyond a knowledge economy, of a genuine knowledge society, and into the processes of techno-industrial, social and intellectual innovation that underpin it.
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2016.