892 results for national knowledge capital repository


Relevance: 30.00%

Abstract:

Following the workshop on new developments in daily licensing practice in November 2011, fourteen representatives from national consortia (from Denmark, Germany, the Netherlands and the UK) and publishers (Elsevier, SAGE and Springer) met in Copenhagen on 9 March 2012 to discuss provisions in licences to accommodate new developments. The one-day workshop aimed to: present background and ideas regarding the provisions the KE Licensing Expert Group had developed; introduce and explain the provisions the invited publishers currently use; ascertain agreement on the wording for long-term preservation, continuous access and course packs; give insight and more clarity about the use of open access provisions in licences; discuss a roadmap for inclusion of the provisions in the publishers' licences; and result in a report disseminating the outcome of the meeting.

Participants of the workshop were: United Kingdom: Lorraine Estelle (Jisc Collections); Denmark: Lotte Eivor Jørgensen (DEFF), Lone Madsen (University of Southern Denmark), Anne Sandfær (DEFF/Knowledge Exchange); Germany: Hildegard Schaeffler (Bavarian State Library), Markus Brammer (TIB); The Netherlands: Wilma Mossink (SURF), Nol Verhagen (University of Amsterdam), Marc Dupuis (SURF/Knowledge Exchange); Publishers: Alicia Wise (Elsevier), Yvonne Campfens (Springer), Bettina Goerner (Springer), Leo Walford (Sage); Knowledge Exchange: Keith Russell.

The main outcome of the workshop was that it would be valuable to have a standard set of clauses which could be used in negotiations; this would make concluding licences much easier and more efficient. The comments on the model provisions the Licensing Expert Group had drafted will be taken into account and the provisions will be reformulated. Data and text mining is a new development, and demand for access to allow for it is growing. It would be easier if there were a simpler way to access materials so they could be more easily mined. However, there are still outstanding questions on how the authors of articles that have been mined can be properly attributed.

Relevance: 30.00%

Abstract:

Knowledge Exchange has funded the translation of a recommended practice of the National Information Standards Organization (NISO) called SERU: Shared Electronic Resource Understanding. The SERU wording offers publishers and libraries the opportunity to save both the time and the costs associated with a negotiated and signed licence agreement by agreeing to operate within a framework of shared understanding and good faith. The statements in the document provide a set of common understandings for publishers and libraries to reference as an alternative to a formal licence when conducting business. The SERU wording has been translated into three languages of the Knowledge Exchange partners; German organisations are recommended to make use of the English wording.

Relevance: 30.00%

Abstract:

In this brief, an explanation is given of why exceptions in copyright legislation are of great importance to the free flow of knowledge, which is essential to education and research in the European Union. At present, freedom of access to knowledge for EU citizens is trapped in a complex web of national laws and local licensing arrangements. Current EU copyright law enables neither the vision of a "Europe of knowledge" in the Bologna Process nor that of a "unified" European Research Area to be realised. To address this, exceptions and limitations harmonised to fit best practice are required, allowing content to move digitally across Member States in support of education, research and libraries. Support for open content licensing by the European Parliament would strengthen authors' rights, meet the needs of researchers, teachers and learners, and enable the free flow of knowledge in support of the "fifth freedom".

Relevance: 30.00%

Abstract:

At the Berlin 7 conference in Paris on 3 December 2009, Knowledge Exchange provided a workshop on the practical challenges to be addressed in moving to Open Access. Presentations were given by John Houghton and Alma Swan discussing the outcomes of studies on the costs and benefits of Open Access for institutions and for society as a whole. These were followed by presentations by two funding agencies on the results of financing publication costs at both an institutional and a national level in Germany. The results of the Springer deal in the Netherlands were also presented. The third section focused on the results of implementing mandates, both by funding bodies and by institutions.

Relevance: 30.00%

Abstract:

In June 2009 a study commissioned by Knowledge Exchange and written by Professor John Houghton of Victoria University, Australia, was completed. The report on the study was titled "Open Access – What are the economic benefits? A comparison of the United Kingdom, Netherlands and Denmark" and was based on the findings of studies in which John Houghton had modelled the costs and benefits of Open Access in three countries. These studies had been undertaken in the UK by JISC, in the Netherlands by SURF and in Denmark by DEFF. In the three national studies the costs and benefits of scholarly communication were compared on the basis of three different publication models. The modelling revealed that the greatest advantage would be offered by the Open Access model, in which the research institution or the party financing the research pays for publication and the article is then freely accessible. Adopting this model could lead to annual savings of around EUR 70 million in Denmark, EUR 133 million in the Netherlands and EUR 480 million in the UK. The report concludes that the advantages would not just be in the long term; in the transitional phase too, more open access to research results would have positive effects. In this case the benefits would also outweigh the costs.

Relevance: 30.00%

Abstract:

Student Digital Experience Tracker case study: Royal National College for the Blind, describing their experience of the Tracker pilot.

Relevance: 30.00%

Abstract:

In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs (structured knowledge bases that describe entities, their attributes and the relationships between them) are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, owing to noise and ambiguity in the underlying data and to errors made by the extraction system, together with the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied on a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
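The convex inference idea behind KGI can be sketched in miniature: give each candidate fact a soft truth value in [0, 1], add squared hinge penalties that pull facts toward their extractor confidences and enforce an ontological mutual-exclusion constraint, and minimise the resulting convex objective by projected gradient descent. This is only a toy illustration of the hinge-loss style of objective, not the dissertation's actual system; all fact names, confidences, and weights below are hypothetical.

```python
# Toy sketch of knowledge-graph-identification-style inference.
# Each candidate fact gets a soft truth value in [0, 1]. Squared hinge
# penalties (in the spirit of hinge-loss Markov random fields) pull each
# fact toward its extractor confidence and enforce a mutual-exclusion
# rule x_a + x_b <= 1. The objective is convex, so projected gradient
# descent finds the optimum. All facts and weights here are hypothetical.

def hinge(z):
    """Hinge function max(0, z)."""
    return z if z > 0.0 else 0.0

def infer(evidence, mutex_pairs, w_ev=1.0, w_mx=5.0, steps=2000, lr=0.02):
    """Minimise a sum of squared hinge losses over soft truth values.

    evidence:    {fact: extractor confidence in [0, 1]}
    mutex_pairs: [(fact_a, fact_b)] pairs that cannot both be true
    """
    x = {fact: 0.5 for fact in evidence}  # initial soft truth values
    for _ in range(steps):
        grad = {fact: 0.0 for fact in x}
        # Evidence terms: w_ev * hinge(conf - x_f)^2 pushes x_f up
        # toward the extractor's confidence.
        for fact, conf in evidence.items():
            grad[fact] += -2.0 * w_ev * hinge(conf - x[fact])
        # Mutual-exclusion terms: w_mx * hinge(x_a + x_b - 1)^2
        # penalises both facts being (softly) true at once.
        for a, b in mutex_pairs:
            g = 2.0 * w_mx * hinge(x[a] + x[b] - 1.0)
            grad[a] += g
            grad[b] += g
        # Projected gradient step: clip back into [0, 1].
        for fact in x:
            x[fact] = min(1.0, max(0.0, x[fact] - lr * grad[fact]))
    return x

# Two noisy extractions assign conflicting types to the same entity;
# inference keeps the stronger fact and suppresses the weaker one.
truth = infer(
    evidence={"Springer_isa_publisher": 0.9, "Springer_isa_person": 0.4},
    mutex_pairs=[("Springer_isa_publisher", "Springer_isa_person")],
)
print(truth)
```

With the assumed weights, the mutual-exclusion penalty dominates the evidence terms, so the two conflicting truth values are driven to sum to roughly one, with the higher-confidence fact retained. The real system replaces this toy loop with large-scale convex inference over millions of such ground terms.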