999 results for Usage Statistics
Abstract:
OpenAIREplus builds on the outcomes of the OpenAIRE project, which implements the EC Open Access (OA) pilot. Capitalizing on the OpenAIRE infrastructure, built for managing FP7 and ERC funded articles, and the associated supporting mechanism of the European Helpdesk System, OpenAIREplus will “develop an open access, participatory infrastructure for scientific information”. It will significantly expand its base of harvested publications to also include all OA publications indexed by the DRIVER infrastructure (more than 270 validated institutional repositories) and any other repository containing “peer-reviewed literature” that complies with certain standards. It will also generically harvest and index the metadata of scientific datasets in selected diverse OA thematic data repositories. It will support the concept of linked publications by deploying novel services for “linking peer-reviewed literature and associated data sets and collections”, from link discovery based on diverse forms of mining (textual, usage, etc.) to storage, visual representation, and on-line exploration. It will offer user-level services to experts and “non-scientists” alike, as well as programming interfaces for “providers of value-added services” to build applications on its content. Deposited articles and data will be openly accessible through an enhanced version of the OpenAIRE portal, together with any available relevant information on associated project funding and usage statistics. OpenAIREplus will retain its European footprint, engaging people and scientific repositories in almost all 27 EU member states and beyond. The technical work will be complemented by a suite of studies and associated research efforts that will partly proceed in collaboration with “different European initiatives” and investigate issues of “intellectual property rights, efficient financing models, and standards”.
Abstract:
The main objective is to demonstrate how usage data from new media can be used to assess areas where students need more help in creating their ETDs. After attending this session, attendees will be able to use usage data from new media, in conjunction with traditional assessment data, to identify strengths and weaknesses in ETD training and resources. The burgeoning ETD program at Florida International University (FIU) has provided many opportunities to experiment with assessment strategies and new media. The usage statistics from YouTube and the ETD LibGuide revealed areas of strength and weakness in the training resources and the overall ETD training initiative. With the ability to assess these materials, they have been updated to better meet student needs. In addition to these assessment tools, there are opportunities to connect these statistics with data from a common error checklist, student feedback from ETD workshops, and final ETD submission surveys to create a full-fledged outcome-based assessment program for the ETD initiative.
Abstract:
Access to the Internet has grown exponentially in Latin America over the past decade. The International Telecommunications Union (ITU) estimates that in 2009 there were 144.5 million Internet users in South America, 6.4 million in Central America, and 8.2 million in the Caribbean, or a total of 159.2 million users in all of Latin America.1 At that time, ITU reported an estimated 31 million Internet users in Mexico, which would bring the overall number of users in Latin America to 190.2 million people. More recent estimates published by Internet World Stats place Internet access currently at an estimated 204.6 million out of a total population of 592.5 million in the region (this figure includes Mexico).2 According to those figures, 34.5 per cent of the Latin American population now enjoys Internet access. In recent years, universal access policies contributed to the vast increase in digital literacy and Internet use in Argentina, Brazil, Chile, Colombia, and Costa Rica. Whereas Costa Rica was the first country in the region to adopt a policy of universal access, the most expansive and successful digital inclusion programs in the region have taken hold in Brazil and Chile. These two countries have allocated considerable resources to the promotion of digital literacy and Internet access among low-income and poor populations; in both cases, civil society groups significantly assisted in the promotion of inclusion at the grassroots level. Digital literacy and Internet access have come to represent, particularly in the area of education, a welcome complementary resource for populations chronically underserved in nations with a long-standing record of inadequate public social services. Digital inclusion is vastly expanding throughout the region, thanks to stabilizing economies, increasingly affordable technology, and the rapid growth in the supply of cellular mobile telephony.
A recent study by the global advertising agency Razorfish revealed significant shifts in the demographics of digital inclusion in the major economies of South America, where Web access is rapidly increasing amid the lower middle class and the working poor.3 Several researchers have suggested that Internet access will bring about greater civic participation and engagement, although skeptics remain unsure this could happen in Latin America. Yet there have been some recent instances of political mobilization facilitated through the use of the Web and social media applications, starting in Chile when “smart mobs” nationwide demonstrated against former Chilean President Michelle Bachelet when she failed to enact education reforms in May 2006. The Internet has also been used by marginalized groups and by guerrilla groups to highlight their stories. In sum, Internet access in Latin America is no longer a medium restricted to the elite. It is rather a public sphere upon which civil society has staked its claim. Some of the examples noted in this study point toward a developing trend whereby civil society, through online grassroots movements, is able to effectively pressure public officials, instill transparency, and demand accountability in government. Access to the Internet has also made it possible for voices on the margins to participate in the conversation in a way that was never previously feasible. 1 International Telecommunications Union [ITU], “Information Technology Public & Report,” accessed May 15, 2011, http://www.itu.int/. 2 Internet World Stats, “Internet Usage Statistics for the Americas,” accessed March 24, 2011, http://www.internetworldstats.com/stats2.htm. 3 J. Crump, “The finch and the fox,” London, UK (2010), http://www.slideshare.net/razorfishmarketing/the-finch-and-the-fox.
Abstract:
This paper presents the Accurate Google Cloud Simulator (AGOCS), a novel high-fidelity Cloud workload simulator based on parsing real workload traces, which can be conveniently used on a desktop machine for day-to-day research. Our simulation is based on real-world workload traces from a Google cluster with 12.5K nodes over a period of a calendar month. The framework is able to reveal very precise and detailed parameters of the executed jobs, tasks and nodes, as well as to provide actual resource usage statistics. The system has been implemented in the Scala language with a focus on parallel execution and an easy-to-extend design. The paper presents the detailed structure of the AGOCS framework and discusses our main design decisions, whilst also suggesting alternative and possibly performance-enhancing future approaches. The framework is available via the Open Source GitHub repository.
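The trace-parsing core the abstract describes can be illustrated with a minimal sketch: read workload records and aggregate per-job resource usage. The CSV schema and field names (`job_id`, `cpu`, `mem`) below are invented for illustration only; the real Google cluster trace is a multi-file format with a much richer schema, and AGOCS itself is written in Scala, not Python.

```python
import csv
import io
from collections import defaultdict

# Hypothetical, simplified one-file trace; not the real Google trace schema.
SAMPLE_TRACE = """timestamp,job_id,task_id,cpu,mem
0,j1,t1,0.25,0.10
0,j1,t2,0.30,0.20
300,j2,t1,0.50,0.40
"""

def aggregate_usage(trace_text):
    """Sum per-job CPU and memory usage and count tasks from a CSV trace."""
    usage = defaultdict(lambda: {"cpu": 0.0, "mem": 0.0, "tasks": 0})
    for row in csv.DictReader(io.StringIO(trace_text)):
        job = usage[row["job_id"]]
        job["cpu"] += float(row["cpu"])
        job["mem"] += float(row["mem"])
        job["tasks"] += 1
    return dict(usage)

stats = aggregate_usage(SAMPLE_TRACE)
```

In the real system, a pass like this would run in parallel over many trace files; here a single in-memory string keeps the sketch self-contained.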
Abstract:
The workshop took place on 16-17 January in Utrecht, with seventy experts from eight European countries in attendance. The workshop was structured in six sessions: usage statistics; research paper metadata; exchanging information; author identification; Open Archives Initiative; and eTheses. Following the workshop, the discussion groups were asked to continue their collaboration and to produce a report for circulation to all participants. The results can be downloaded below. The recommendations contained in the reports above have been reviewed by the Knowledge Exchange partner organisations and formed the basis for new proposals and the next steps in Knowledge Exchange work with institutional repositories. Institutional Repository Workshop - Next steps: During April and May 2007 Knowledge Exchange had expert reviewers from the partner organisations go through the workshop strand reports and make their recommendations about the best way to move forward, to set priorities, and to find possibilities for furthering the institutional repository cause. The KE partner representatives reviewed the reviews and consulted with their partner organisation management to get an indication of support and funding for the latest ideas and proposals, as follows. Pragmatic interoperability: During a review meeting at JISC offices in London on 31 May, the expert reviewers and the KE partner representatives agreed that ‘pragmatic interoperability’ is the primary area of interest. It was also agreed that the most relevant and beneficial choice for a Knowledge Exchange approach would be to aim for CRIS-OAR interoperability as a step towards integrated services. Within this context, interlinked joint projects could be undertaken by the partner organisations in the areas that most interested them.
Interlinked projects: The proposed Knowledge Exchange activities involve interlinked joint projects on metadata, persistent author identifiers, and eTheses, which are intended to connect to and build on projects such as ISPI, Jisc NAMES and the Digital Author Identifier (DAI) developed by SURF. It is important to stress that the projects are not intended to overlap with, but rather to supplement, the DRIVER 2 (EU project) approaches. Focus on CRIS and OAR: It is believed that a focus on practical interoperability between Current Research Information Systems and Open Access Repository systems will be of genuine benefit to the research scientist, research administrator and librarian communities in the Knowledge Exchange countries, accommodating the specific needs of each group. Timing: June 2007, draft proposal written by KE Working Group members; July 2007, final proposal sent to partner organisations by the KE Group; August 2007, decision by the Knowledge Exchange partner organisations.
Abstract:
On 24 September 2010 Knowledge Exchange organised a workshop in Glasgow focusing on whether usage statistics can be used as a basis for managerial decisions on licences. Examples were presented of projects in which usage statistics are used to define strategies, and usage portals developed in the UK and Germany were demonstrated. During the afternoon a session took place on the sharing of statistical information regarding e-journals, where questions regarding the relevance of international comparisons, privacy and non-disclosure were discussed. This workshop follows on from earlier Knowledge Exchange workshops on usage statistics and their outcomes, including the briefing paper Combined Usage Statistics as the Basis for Research Intelligence.
Abstract:
Peer-reviewed
Abstract:
The main objective of this degree project is to implement an Application Availability Monitoring (AAM) system named Softek EnView for Fujitsu Services. The aim of implementing the AAM system is to proactively identify end user performance problems, such as application and site performance, before the actual end users experience them. No matter how well applications and sites are designed and no matter how well they meet business requirements, they are useless to the end users if the performance is slow and/or unreliable. It is important for the customers to find out whether the end user problems are caused by the network or by application malfunction. Softek EnView is comprised of the following components: Robot, Monitor, Reporter, Collector and Repository. The implemented system, however, is designed to use only some of these EnView elements: Robot, Reporter and Repository. Robots can be placed at any key user location and are dedicated to customers, which means that as the number of customers increases, the number of Robots increases with it. To make the AAM system ideal for the company to use, it was integrated with Fujitsu Services’ centralised monitoring system, BMC PATROL Enterprise Manager (PEM). That was the reason for deciding to drop the EnView Monitor element. After the system was fully implemented, the AAM system was ready for production. Transactions were (and are) written and deployed on Robots to simulate typical end user actions. These transactions are configured to run at certain intervals, which are defined together with customers. While they are run against customers’ applications automatically, transactions collect availability and response time data all the time. In case of a failure in a transaction, the Robot immediately quits the transaction and writes detailed information to a log file about what went wrong and which element failed while going through the application.
An alert is then generated by a BMC PATROL Agent based on this data and is sent to the BMC PEM. Fujitsu Services’ monitoring room receives the alert and reacts to it according to the ITIL incident management process, alerting system specialists on critical incidents to resolve problems. Based on the data gathered by the Robots, weekly reports, which contain detailed statistics and trend analyses of the ongoing quality of IT services, are provided for the customers.
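The transaction pattern described above, scripted steps that abort and log on the first failure while recording response time, can be sketched in general terms. EnView and BMC PATROL are proprietary products, so the function and step names below are invented and do not correspond to any real EnView API; this is only a minimal illustration of the robot's control flow.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("robot")

def run_transaction(name, steps):
    """Run a scripted synthetic transaction step by step.

    On the first failing step, log which element failed and quit the
    transaction, mirroring the Robot behaviour described in the abstract.
    Returns (success, response_time_seconds)."""
    start = time.monotonic()
    for step_name, step in steps:
        try:
            step()
        except Exception as exc:
            log.error("transaction %s failed at step %s: %s",
                      name, step_name, exc)
            return False, time.monotonic() - start
    return True, time.monotonic() - start

# Hypothetical steps standing in for "open login page", "submit form", etc.
ok, rt = run_transaction("login", [("open", lambda: None),
                                   ("submit", lambda: None)])
```

In a real deployment the failure branch would also raise an alert toward the central monitor rather than only writing a log entry.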
Abstract:
Measures have been developed to understand tendencies in the distribution of economic activity. The merits of these measures lie in the convenience of data collection and processing. In this interim report, investigating the ability of such measures to determine the geographical spread of economic activities, we summarize their merits and limitations, and make clear that we must apply caution in their usage. As a first trial in assessing areal data, this project focuses on administrative areas, not on point data or input-output data. Firm-level data is not within the scope of this article. The rest of this article is organized as follows. In Section 2, we touch on the limitations and problems associated with the measures and areal data. Specific measures are introduced in Section 3 and applied in Section 4. The conclusion summarizes the findings and discusses future work.
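The abstract does not name the specific measures (they appear only in Section 3 of the report), but a widely used example of a concentration measure over administrative areas is the Herfindahl index. The sketch below, with invented employment figures, is illustrative only and is not taken from the report.

```python
def herfindahl(values):
    """Herfindahl index of areal concentration: the sum of squared shares.

    Ranges from 1/n (activity spread evenly over n areas) up to 1.0
    (all activity concentrated in a single area)."""
    total = sum(values)
    return sum((v / total) ** 2 for v in values)

# Hypothetical employment counts for four administrative areas:
even = herfindahl([25, 25, 25, 25])        # evenly spread -> 0.25
concentrated = herfindahl([100, 0, 0, 0])  # one area only -> 1.0
```

Note the convenience the abstract mentions: the index needs only one aggregate figure per administrative area, which is also the source of its limitations (sensitivity to how area boundaries are drawn).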
Abstract:
Mode of access: Internet.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Objectives: To compare the recognized defined daily dose per 100 bed-days (DDD/100 bed-days) measure with the defined daily dose per finished consultant episode (DDD/FCE) in a group of hospitals with a variety of medicines management strategies, and to compare antibiotic usage using the above indicators in hospitals with and without electronic prescribing systems. Methods: Twelve hospitals were used in the study. Nine hospitals were selected and split into three cohorts (three high-scoring, three medium-scoring and three low-scoring) by their 2001 medicines management self-assessment scores (MMAS). An additional cohort of three electronic prescribing hospitals was included for comparison. MMAS were compared to antibiotic management scores (AMS) developed from a questionnaire relating specifically to the control of antibiotics. FCEs and occupied bed-days were obtained from published statistics, and statistical analyses of the DDD/100 bed-days and DDD/FCE were carried out using SPSS. Results: The DDD/100 bed-days varied from 81.33 to 189.37, whilst the DDD/FCE varied from 2.88 to 7.43. The two indicators showed a high degree of correlation, with r = 0.74. MMAS ranged from 9 to 22 (possible range 0-23) and the AMS from 2 to 13 (possible range 0-22). The two scores showed a high degree of correlation, with r = 0.74. No correlation was established between either indicator and either score. Conclusions: The WHO indicator for medicines utilization, DDD/100 bed-days, exhibited the same level of conformity as the DDD/FCE, indicating that the DDD/FCE is a useful additional indicator for identifying hospitals which require further study. The MMAS can be assumed to be an accurate guide to antibiotic medicines management controls. No relationship was found between a high degree of medicines management control and the quantity of antibiotics prescribed. © The British Society for Antimicrobial Chemotherapy; 2004 all rights reserved.
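The two indicators compared in the study are simple ratios, which can be sketched as follows. The hospital figures in the example are hypothetical, chosen only to illustrate the arithmetic, and are not taken from the study's data.

```python
def ddd_per_100_bed_days(total_ddd, occupied_bed_days):
    """WHO-style usage indicator: defined daily doses per 100 occupied bed-days."""
    return total_ddd * 100 / occupied_bed_days

def ddd_per_fce(total_ddd, finished_consultant_episodes):
    """Alternative indicator: defined daily doses per finished consultant episode."""
    return total_ddd / finished_consultant_episodes

# Hypothetical annual figures for one hospital:
rate_bd = ddd_per_100_bed_days(5000, 4000)  # -> 125.0 DDD/100 bed-days
rate_fce = ddd_per_fce(5000, 1200)          # ~ 4.17 DDD/FCE
```

Both indicators share the same numerator (total DDDs dispensed) and differ only in the activity denominator, which is why a high correlation between them, as the study reports, is plausible.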
Abstract:
The consumption of dietary supplements is highest among athletes, and it can represent a potential health risk for consumers. The aim of this study was to determine the prevalence of dietary supplement consumption among road runners. We interviewed 817 volunteers from four road races in the Brazilian running calendar. The sample consisted of 671 male and 146 female runners with a mean age of 37.9 ± 12.4 years. Of the sample, 28.33% reported having used some type of dietary supplement. The main motivations for this consumption were to increase stamina and improve performance. The probability of consuming dietary supplements increased 4.67 times when the runners were guided by coaches. The consumption of supplements was strongly correlated (r = 0.97) with weekly running distance, and also highly correlated (r = 0.86) with the number of years the sport had been practiced. The longer the runner had practiced the sport, the higher the training volume and the greater the intake of supplements. The five most frequently cited reasons for consumption were: energy enhancement (29.5%), performance improvement (17.1%), increased endurance (10.3%), nutrient replacement (11.1%), and avoidance of fatigue (10.3%). About 30% of the consumers declared more than one reason for taking dietary supplements. The most consumed supplements were: carbohydrates (52.17%), vitamins (28.70%), and proteins (13.48%). Supplement consumption by road runners in Brazil appeared to be guided by the energy-boosting properties of the supplements, the influence of coaches, and the experience of the user. The amount of supplement intake seemed to be lower among road runners than for athletes of other sports. We recommend that coaches and nutritionists emphasise that a balanced diet can meet the needs of physically active people.
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine directly genotyped marker information with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near zero cost approach that allows the comparison and combination of data generated in different studies. Several reports have stated that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10⁻⁵ for type 2 diabetes mellitus and compared them with results obtained based on empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant in 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in a specific MAF (minor allele frequency) range, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false positive association signals.
The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
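The pairwise comparison underlying the study can be sketched for a single marker: compute an allelic association statistic once from empirical allele counts and once from imputed counts, then flag the marker as discordant if the two cross the significance threshold differently. The allele counts, the imputed shift, and the threshold (chi-square for P < 0.001 rather than the study's P < 10⁻⁵) below are all invented for illustration; the study's actual pipeline is not described in the abstract.

```python
def allelic_chi2(case_minor, case_major, ctrl_minor, ctrl_major):
    """Pearson chi-square statistic for a 2x2 allele-count table (1 df)."""
    a, b, c, d = case_minor, case_major, ctrl_minor, ctrl_major
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical allele counts for one marker:
empirical = allelic_chi2(10, 20, 20, 10)  # from direct genotyping
imputed = allelic_chi2(12, 18, 18, 12)    # imputation shifted the counts

SIG = 10.83  # chi-square critical value for P < 0.001 at 1 df (illustrative)
discordant = (empirical > SIG) != (imputed > SIG)
```

Run over every marker, the fraction of `discordant` flags is the quantity the study reports as 35 of 73 associated markers.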
Abstract:
The existence of juxtaposed regions of distinct cultures, in spite of the fact that people's beliefs tend to become more similar to each other's as the individuals interact repeatedly, is a puzzling phenomenon in the social sciences. Here we study an extreme version of the frequency-dependent bias model of social influence, in which an individual adopts the opinion shared by the majority of the members of its extended neighborhood, which includes the individual itself. This is a variant of the majority-vote model in which the individual retains its opinion in case there is a tie among the neighbors' opinions. We assume that the individuals are fixed at the sites of a square lattice of linear size L and that they interact with their nearest neighbors only. Within a mean-field framework, we derive the equations of motion for the density of individuals adopting a particular opinion in the single-site and pair approximations. Although the single-site approximation predicts a single opinion domain that takes over the entire lattice, the pair approximation yields a qualitatively correct picture, with the coexistence of different opinion domains and a strong dependence on the initial conditions. Extensive Monte Carlo simulations indicate the existence of a rich distribution of opinion domains or clusters, the number of which grows with L² whereas the size of the largest cluster grows with ln L². The analysis of the sizes of the opinion domains shows that they obey a power-law distribution for not too large sizes but are exponentially distributed in the limit of very large clusters. In addition, similarly to another well-known social influence model, Axelrod's model, we found that these opinion domains are unstable to the effect of a thermal-like noise.
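The update rule described above, majority of the extended neighborhood including the individual itself, is equivalent for binary opinions to: follow a strict majority of the four nearest neighbors, and keep one's own opinion on a 2-2 tie (the individual's own vote breaks the tie in its favor). A minimal synchronous-update sketch follows; the lattice size, boundary condition, and initial configuration are illustrative only, and the paper's asynchronous Monte Carlo dynamics and noise term are omitted.

```python
def step(lattice):
    """One synchronous update of the tie-retaining majority rule on a
    periodic square lattice of 0/1 opinions (nearest neighbors only)."""
    n = len(lattice)
    new = [row[:] for row in lattice]
    for i in range(n):
        for j in range(n):
            # Count 1-opinions among the four von Neumann neighbors.
            ones = (lattice[(i - 1) % n][j] + lattice[(i + 1) % n][j]
                    + lattice[i][(j - 1) % n] + lattice[i][(j + 1) % n])
            if ones > 2:
                new[i][j] = 1
            elif ones < 2:
                new[i][j] = 0
            # ones == 2: a 2-2 tie; the individual's own vote decides,
            # so it retains its current opinion.
    return new

# A lone dissenter on a 3x3 lattice is absorbed in a single step:
lat = step([[0, 0, 0], [0, 1, 0], [0, 0, 0]])  # -> all zeros
```

Opinion domains arise because a 2-2 tie freezes the site, so flat interfaces between domains can persist indefinitely under this deterministic rule.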