923 results for Knowledge Access
Abstract:
The workshop took place on 16-17 January in Utrecht, with seventy experts from eight European countries in attendance. The workshop was structured in six sessions: usage statistics, research paper metadata, exchanging information, author identification, Open Archives Initiative, and eTheses. Following the workshop, the discussion groups were asked to continue their collaboration and to produce a report for circulation to all participants. The results can be downloaded below. The recommendations contained in the reports have been reviewed by the Knowledge Exchange partner organisations and formed the basis for new proposals and the next steps in Knowledge Exchange work with institutional repositories.
Institutional Repository Workshop - Next steps
During April and May 2007 Knowledge Exchange had expert reviewers from the partner organisations go through the workshop strand reports and make recommendations about the best way to move forward, set priorities, and find possibilities for furthering the institutional repository cause. The KE partner representatives reviewed the reviews and consulted with their partner organisation management to get an indication of support and funding for the latest ideas and proposals, as follows:
Pragmatic interoperability
During a review meeting at JISC offices in London on 31 May, the expert reviewers and the KE partner representatives agreed that 'pragmatic interoperability' is the primary area of interest. It was also agreed that the most relevant and beneficial choice for a Knowledge Exchange approach would be to aim for CRIS-OAR interoperability as a step towards integrated services. Within this context, interlinked joint projects could be undertaken by the partner organisations in the areas that most interested them.
Interlinked projects
The proposed Knowledge Exchange activities involve interlinked joint projects on metadata, persistent author identifiers, and eTheses, which are intended to connect to and build on projects such as ISPI, Jisc NAMES and the Digital Author Identifier (DAI) developed by SURF. It is important to stress that the projects are not intended to overlap with, but rather to supplement, the DRIVER 2 (EU project) approaches.
Focus on CRIS and OAR
It is believed that a focus on practical interoperability between Current Research Information Systems (CRIS) and Open Access Repository (OAR) systems will be of genuine benefit to the research scientist, research administrator and librarian communities in the Knowledge Exchange countries, accommodating the specific needs of each group.
Timing
June 2007: draft proposal written by KE Working Group members
July 2007: final proposal sent to partner organisations by the KE Group
August 2007: decision by Knowledge Exchange partner organisations
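The CRIS-OAR interoperability aim described above essentially requires mapping research-information records onto repository metadata. The following is a minimal illustrative sketch only; the record structure and field names are hypothetical simplifications, not the CERIF or qualified Dublin Core standards themselves.

```python
# Illustrative sketch: mapping a simplified CRIS publication record to
# Dublin Core style metadata for deposit in an open access repository.
# All field names here are assumptions for illustration.

def cris_to_dublin_core(cris_record):
    """Map a simplified CRIS record to flat Dublin Core elements."""
    return {
        "dc:title": cris_record["title"],
        "dc:creator": [a["name"] for a in cris_record["authors"]],
        "dc:date": cris_record["year"],
        "dc:identifier": cris_record.get("doi", ""),
        "dc:type": "Text",
    }

record = {
    "title": "Example article",
    "authors": [{"name": "Doe, J.", "author_id": "DAI-0001"}],
    "year": "2007",
    "doi": "10.1234/example",
}
print(cris_to_dublin_core(record))
```

A persistent author identifier (such as the DAI carried in `author_id` above) is what would let the repository and the CRIS agree that two records describe the same researcher.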
Abstract:
Working Together to Promote Open Access Policy Alignment in Europe
Abstract:
The Knowledge Exchange Licensing Expert Group commissioned a study examining which licences or licence provisions are used by open access and hybrid publishers when making their publications available in open access. The study was intended to identify a 'best practice' licence model framework and to formulate recommendations for an OA licence structure, taking into account the commercial and non-commercial needs of authors as well as publishers. The study was undertaken by Maverick Outsource Services Ltd. It led to the following recommendations for an optimum licence in an open access journal:
1. The author retains copyright.
2. The author or rightsholder grants all users a free, irrevocable, worldwide, perpetual right of access to the work, as well as a licence to copy, use, distribute, transmit and display the work publicly, and to make and distribute derivative works in any digital medium for any reasonable purpose, subject to proper attribution of authorship.
3. A complete version of the work and all supplemental materials, including the permission stated under 2, in a suitable standard electronic format, is deposited immediately upon initial publication in at least one online repository.
4. The copyright holder can retain the right to restrict commercial use if they wish.
5. The copyright holder provides the publisher with permission to publish, subject to open accessibility of the work in online published form.
Abstract:
Following the workshop on new developments in daily licensing practice in November 2011, fourteen representatives from national consortia (from Denmark, Germany, the Netherlands and the UK) and publishers (Elsevier, SAGE and Springer) met in Copenhagen on 9 March 2012 to discuss provisions in licences to accommodate new developments. The one-day workshop aimed to:
- present the background and ideas behind the provisions the KE Licensing Expert Group developed;
- introduce and explain the provisions the invited publishers currently use;
- ascertain agreement on the wording for long-term preservation, continuous access and course packs;
- give insight and more clarity about the use of open access provisions in licences;
- discuss a roadmap for inclusion of the provisions in the publishers' licences;
- result in a report disseminating the outcome of the meeting.
Participants of the workshop were:
United Kingdom: Lorraine Estelle (Jisc Collections)
Denmark: Lotte Eivor Jørgensen (DEFF), Lone Madsen (Southern University of Denmark), Anne Sandfær (DEFF/Knowledge Exchange)
Germany: Hildegard Schaeffler (Bavarian State Library), Markus Brammer (TIB)
The Netherlands: Wilma Mossink (SURF), Nol Verhagen (University of Amsterdam), Marc Dupuis (SURF/Knowledge Exchange)
Publishers: Alicia Wise (Elsevier), Yvonne Campfens (Springer), Bettina Goerner (Springer), Leo Walford (Sage)
Knowledge Exchange: Keith Russell
The main outcome of the workshop was that it would be valuable to have a standard set of clauses which could be used in negotiations; this would make concluding licences much easier and more efficient. The comments on the model provisions the Licensing Expert Group had drafted will be taken into account, and the provisions will be reformulated. Data and text mining is a new development, and demand for access to allow for it is growing. It would be easier if there were a simpler way to access materials so they could be more easily mined.
However, there are still outstanding questions about how the authors of articles that have been mined can be properly attributed.
Abstract:
Knowledge Exchange examined different routes to achieving the vision of 'having a layer of scholarly and scientific content openly available in the internet'. One of these routes involves exploring new developments in the future of publishing. Work is being undertaken to investigate alternative business models which could contribute to the transition to open access. In this light KE commissioned a study investigating whether submission fees could play a role in a business model for open access journals. The general conclusion of the report, titled 'Submission fees - a tool in the transition to open access?' and written by Mark Ware, is that in certain cases there are benefits for publishers in switching to a model in which an author pays a fee when submitting an article. Journals with a high rejection rate in particular might be interested in combining submission fees with article processing charges in order to ease the transition to open access. In certain disciplines, notably economics and finance journals and some areas of the experimental life sciences, submission fees are already common. Overall there seems to be interest in the model, but the risks, particularly those involved in any transition, are seen by publishers to outweigh the perceived benefits. There is also a problem in that the advantages offered by submission fees are often general benefits that might improve the system but do not provide publishers and authors with direct incentives to change to open access. To support the transition, funders, institutions and publication funds could make it clear that submission fees would be an allowable cost; at present this is often unclear in their policies. Author acceptance of submission fees is critical to their success. It is an observable fact that authors will accept them in some circumstances, but author acceptance would require further study.
Based on the interviews and the modelling in the study, one model in particular is regarded as the most suitable way to meet the current requirements (i.e. to strengthen open access to research publications). In this model authors pay a submission fee plus an article processing charge, and the article is subsequently made available in open access. Both fees are set at levels that balance acceptability to the author community against securing a meaningful mix of revenues for the publisher.
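The arithmetic behind the high-rejection-rate argument can be sketched as follows. All fee levels and the rejection rate are illustrative assumptions, not figures from the Ware report.

```python
# Hypothetical worked example of the submission-fee-plus-APC model
# described above. The fee levels and rejection rate are invented
# for illustration only.

def revenue_per_published_article(submission_fee, apc, rejection_rate):
    """With a high rejection rate, each published article is backed by
    several submissions, so submission fees spread costs across all
    submitting authors rather than only the accepted ones."""
    submissions_per_acceptance = 1 / (1 - rejection_rate)
    return submission_fee * submissions_per_acceptance + apc

# A journal rejecting 80% of submissions handles five submissions
# for every accepted paper, so a modest submission fee can offset
# a substantial part of the article processing charge.
print(revenue_per_published_article(submission_fee=200, apc=1000, rejection_rate=0.8))
```

This is why the model is most attractive to highly selective journals: the same total revenue can be reached with a lower APC than under an APC-only model.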
Abstract:
At the Berlin 7 conference in Paris on 3 December 2009, Knowledge Exchange provided a workshop on the practical challenges to be addressed in moving to open access. Presentations were given by John Houghton and Alma Swan discussing the outcomes of studies on the costs and benefits of open access for institutions and for society as a whole. These were followed by presentations by two funding agencies on the results of financing publication costs at both an institutional and a national level in Germany. The results of the Springer deal in the Netherlands were also presented. The third section focused on the results of implementing mandates, both by funding bodies and by institutions.
Abstract:
In June 2009 a study commissioned by Knowledge Exchange and written by Professor John Houghton, Victoria University, Australia, was completed. The report on the study was titled "Open Access – What are the economic benefits? A comparison of the United Kingdom, Netherlands and Denmark." The report was based on the findings of studies in which John Houghton had modelled the costs and benefits of open access in three countries. These studies had been undertaken in the UK by JISC, in the Netherlands by SURF and in Denmark by DEFF. In the three national studies the costs and benefits of scholarly communication were compared on the basis of three different publication models. The modelling revealed that the greatest advantage would be offered by the open access model, in which the research institution or the party financing the research pays for publication and the article is then freely accessible. Adopting this model could lead to annual savings of around EUR 70 million in Denmark, EUR 133 million in the Netherlands and EUR 480 million in the UK. The report concludes that the advantages would not be confined to the long term; in the transitional phase too, more open access to research results would have positive effects, with the benefits again outweighing the costs.
Abstract:
Over the past few years many studies have been published on the costs and economic benefits of journal business models. Early studies considered only the costs incurred in publishing traditional journals made available for purchase on a subscription or licensing business model. As the open access business model became available, some studies also covered the cost of making research articles available in open access journals. More recent studies have taken a broader perspective, looking at the position of journal publishers in the market and their business models in the context of the economic benefits from research dissemination. This briefing paper also looks at the outcomes of the widely cited RIN study and various national studies conducted by John Houghton. All links provided in footnotes in this briefing paper are to studies available in open access.
Abstract:
Phase 4: Review of the conditions under which individual services and platforms can be sustained
On Tuesday 1 October 2013, in Bristol, United Kingdom, Knowledge Exchange brought together a group of international open access service providers to discuss the sustainability of their services. A number of recurring lessons learned were mentioned:
- Though project funding can be used to start up a service, it does not guarantee the continuation of the service, and it can be hard to establish the service as a viable entity standing on its own feet.
- Research funders should be aware that if they have policies or mandates for making research outputs available, they will eventually also be responsible for ongoing support for the underlying infrastructure.
- At present some services are used globally, but the costs are covered by only a limited geographic spread, sometimes just a number of institutions or a single country.
- Finding other funding sources can be challenging. Various routes were mentioned, including commercial partnerships, memberships, offering additional paid services or using a freemium model. There is no one model that will fit all.
- As more services turn to library sponsorship to sustain them, one strategy might be to bundle the requests and approach a group of research and infrastructure funders or institutions (and others) with a package, rather than each service going through the same resource-consuming process of soliciting funding. This would also allow the community to identify gaps, dependencies and overlap in the services.
The possibility of setting up an organisation to bundle the services was discussed, and a number of risks were identified.
Collection-Level Subject Access in Aggregations of Digital Collections: Metadata Application and Use
Abstract:
Problems in subject access to information organization systems have been under investigation for a long time. Focusing on item-level information discovery and access, researchers have identified a range of subject access problems, including the quality and application of metadata, as well as the complexity of user knowledge required for successful subject exploration. While aggregations of digital collections built in the United States and abroad generate collection-level metadata of varying granularity and richness, no research has yet focused on the role of collection-level metadata in user interaction with these aggregations. This dissertation research sought to bridge this gap by answering the question "How does collection-level metadata mediate scholarly subject access to aggregated digital collections?" This goal was achieved using three research methods:
• in-depth comparative content analysis of collection-level metadata in three large-scale aggregations of cultural heritage digital collections: Opening History, American Memory, and The European Library;
• transaction log analysis of user interactions with Opening History; and
• interview and observation data on academic historians interacting with two aggregations: Opening History and American Memory.
It was found that subject-based resource discovery is significantly influenced by collection-level metadata richness. This richness includes such components as: 1) description of a collection's subject matter with mutually complementary values in different metadata fields, and 2) a variety of collection properties/characteristics encoded in the free-text Description field; types and genres of objects in a digital collection, as well as topical, geographic and temporal coverage, are the most consistently represented collection characteristics in free-text Description fields. Analysis of user interactions with aggregations of digital collections yielded a number of interesting findings.
Item-level user interactions were found to occur more often than collection-level interactions. Collection browsing is initiated more often than search, with subject browsing (topical and geographic) used most often. The majority of collection search queries fall within FRBR Group 3 categories: object, concept, and place. Significantly more object, concept, and corporate body searches, and fewer individual person, event and class-of-persons searches, were observed in collection searches than in item searches. While collection search is most often satisfied by the Description and/or Subjects collection metadata fields, it would fail to retrieve a significant proportion of collection records without controlled-vocabulary subject metadata (Temporal Coverage, Geographic Coverage, Subjects, and Objects) and free-text metadata (the Description field). Observation data show that collection metadata records in the Opening History and American Memory aggregations are often viewed. Transaction log data show a high level of engagement with collection metadata records in Opening History, with total page views for collections more than four times greater than item page views. Scholars who were observed viewing collection records valued descriptive information on provenance, collection size, types of objects, subjects, geographic coverage, and temporal coverage. They also considered the structured display of collection metadata in Opening History more useful than the alternative approach taken by other aggregations, such as American Memory, which displays only the free-text Description field to the end user. The results extend our understanding of the value of collection-level subject metadata, particularly free-text metadata, for scholarly users of aggregations of digital collections. The analysis of the collection metadata created by three large-scale aggregations provides a better understanding of collection-level metadata application patterns and suggests best practices.
This dissertation is also the first empirical research contribution to test the FRBR model as a conceptual and analytic framework for studying collection-level subject access.
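The finding that collection search depends on both free-text and controlled-vocabulary fields can be sketched as a simple field-scanning retrieval function. The record structure below follows the field names the study reports (Description, Subjects, coverage fields), but is otherwise a simplified assumption, not the aggregations' actual schema.

```python
# Illustrative sketch of field-based collection retrieval: a query
# matches a collection record if the term appears in any of the
# subject-bearing collection-level metadata fields, whether free-text
# (Description) or controlled-vocabulary (Subjects, coverage fields).

def matches(query, collection):
    """Return True if the query term appears in any subject-bearing field."""
    fields = ("description", "subjects", "geographic_coverage",
              "temporal_coverage", "objects")
    q = query.lower()
    for field in fields:
        value = collection.get(field, "")
        if isinstance(value, list):          # controlled vocabularies are lists
            value = " ".join(value)
        if q in value.lower():
            return True
    return False

collection = {
    "title": "City photograph collection",          # hypothetical record
    "description": "Photographs documenting urban life in Chicago.",
    "subjects": ["Cities and towns", "Photography"],
    "geographic_coverage": "Illinois",
}
print(matches("chicago", collection))      # matched via free-text Description
print(matches("photography", collection))  # matched via controlled Subjects
```

Dropping either field group from `fields` would lose matches, which mirrors the study's finding that a significant proportion of collection records are retrievable only through one or the other.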
Abstract:
Leprosy is a chronic infectious disease caused by Mycobacterium leprae. It is known for its great disfiguring capacity and is considered an extremely serious public health problem worldwide. The state of Ceará ranks 13th in number of cases of leprosy in Brazil, and fourth in the Northeastern region, with an average of 2,149 new cases diagnosed every year. This study aimed to evaluate the knowledge of leprosy patients regarding treatment, and to assess the level of treatment adherence and its possible barriers. The study was conducted in the reference center for dermatology in Fortaleza, Ceará, from September 2010 to October 2010. The study data were collected by means of a structured interview, along with the Morisky-Green test, in order to assess treatment adherence and barriers to adherence. A total of 70 patients were interviewed, of whom 66 were new cases. The majority of patients were between 42 and 50 years old, and 37 (52.9%) were male. Most patients were clinically classified as presenting multibacillary leprosy (80%), and 78.6% of them were from Fortaleza, Brazil. The Morisky-Green test indicated that 62.9% of patients presented a low level of adherence (p < 0.005), despite claiming to be aware of the disease risks. However, it was observed that 57.1% of the patients reported no difficulty adhering to treatment, while 38.6% reported little difficulty. This study shows that, despite the patients claiming to be familiar with leprosy and its treatment, the Morisky-Green test clearly demonstrated that they were not actually aware of the principles of therapy, as evidenced by the low degree of treatment adherence.
Abstract:
Purpose – The purpose of this research is to show how the self-archiving of journal papers is a major step towards providing open access to research. However, the copyright transfer agreements (CTAs) that an author signs prior to publication often determine whether, and in what form, self-archiving is allowed. The SHERPA/RoMEO database enables easy access to publishers' policies in this area and uses a colour-coding scheme to classify publishers according to their self-archiving status. The database is currently being redeveloped and renamed the Copyright Knowledge Bank. It will still assign a colour to each publisher indicating whether pre-prints can be self-archived (yellow), post-prints can be self-archived (blue), both pre-prints and post-prints can be archived (green), or neither (white). The nature of CTAs means that these decisions are rarely as straightforward as they may seem, and this paper describes the thinking and considerations used in assigning these colours in the light of the underlying principles and definitions of open access.
Approach – Detailed analysis of a large number of CTAs led to the development of a controlled vocabulary of terms, which was carefully analysed to determine how these terms equate to the definition and "spirit" of open access.
Findings – The paper reports on how conditions outlined by publishers in their CTAs, such as how or where a paper can be self-archived, affect the assignment of a self-archiving colour to the publisher.
Value – The colour assignment is widely used by authors and repository administrators in determining whether academic papers can be self-archived. This paper provides a starting point for further discussion and development of publisher classification in the open access environment.
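The headline colour-coding rule described in the abstract can be expressed as a small classification function. This is only the basic mapping; as the paper stresses, real assignments also weigh the conditions publishers attach in their CTAs, which this sketch ignores.

```python
# Minimal sketch of the RoMEO colour-coding scheme described above:
# the colour depends on which versions of a paper may be self-archived.
# Real classifications consider CTA conditions beyond this mapping.

def romeo_colour(preprint_allowed, postprint_allowed):
    """Classify a publisher by its self-archiving permissions."""
    if preprint_allowed and postprint_allowed:
        return "green"   # both pre-print and post-print may be archived
    if postprint_allowed:
        return "blue"    # post-print only
    if preprint_allowed:
        return "yellow"  # pre-print only
    return "white"       # neither may be archived

print(romeo_colour(preprint_allowed=True, postprint_allowed=True))  # green
```

Authors and repository administrators can then check a single colour rather than re-reading each CTA.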
Abstract:
This dissertation research points out major challenges with current knowledge organization (KO) systems, such as subject gateways or web directories: (1) the current systems use traditional knowledge organization schemes based on controlled vocabulary, which is not well suited to web resources, and (2) information is organized by professionals rather than by users, which means it does not reflect users' intuitively and instantaneously expressed current needs. In order to explore users' needs, I examined social tags, which are user-generated uncontrolled vocabulary. As investment in professionally developed subject gateways and web directories diminishes (support for both BUBL and Intute, examined in this study, is being discontinued), understanding the characteristics of social tagging becomes even more critical. Several researchers have discussed social tagging behavior and its usefulness for classification or retrieval; however, further research is needed to investigate social tagging qualitatively and quantitatively in order to verify its quality and benefit. This research particularly examined the indexing consistency of social tagging in comparison to professional indexing, in order to examine the quality and efficacy of tagging. The data analysis was divided into three phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of tag attributes. Most indexing consistency studies have been conducted with a small number of professional indexers, and they have tended to exclude users. Furthermore, these studies have mainly focused on physical library collections. This dissertation research bridged these gaps by (1) extending the scope of resources to various web documents indexed by users and (2) employing the Information Retrieval (IR) Vector Space Model (VSM) based indexing consistency method, since it is suitable for dealing with a large number of indexers.
In the second phase, an analysis of tagging effectiveness, based on tagging exhaustivity and tag specificity, was conducted to ameliorate the drawbacks of consistency analysis based only on quantitative measures of vocabulary matching. Finally, to investigate tagging patterns and behaviors, a content analysis of tag attributes was conducted based on the FRBR model. The findings revealed that there was greater consistency among taggers across all subjects than between the two groups of professionals. Examination of the exhaustivity and specificity of social tags provided insights into particular characteristics of tagging behavior and its variation across subjects. To further investigate the quality of tags, a Latent Semantic Analysis (LSA) was conducted to determine to what extent tags are conceptually related to professionals' keywords; it was found that tags of higher specificity tended to have a higher semantic relatedness to professionals' keywords. This leads to the conclusion that a term's power as a differentiator is related to its semantic relatedness to documents. The findings on tag attributes identified important bibliographic attributes of tags beyond describing the subjects or topics of a document. The findings also showed that tags have essential attributes matching those defined in FRBR. Furthermore, in terms of specific subject areas, the findings identified for the first time that taggers exhibited different tagging behaviors, with distinctive features and tendencies, on web documents characterizing digital heterogeneous media resources.
These results lead to the conclusion that there should be increased awareness of diverse user needs by subject in order to improve metadata in practical applications. This dissertation research is the first necessary step towards utilizing social tagging in digital information organization by verifying the quality and efficacy of social tagging. It combined quantitative (statistical) and qualitative (content analysis using FRBR) approaches to the vocabulary analysis of tags, providing a more complete examination of tag quality. Through the detailed analysis of tag properties undertaken in this dissertation, we have a clearer understanding of the extent to which social tagging can be used to replace (and in some cases to improve upon) professional indexing.
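The VSM-based consistency measure mentioned in the abstract can be sketched as follows: each indexer's term assignments become a vector, and consistency is the cosine of the angle between two indexers' vectors. The exact term weighting used in the dissertation may differ; raw term counts are an assumption here, and the example terms are invented.

```python
# Sketch of vector-space-model indexing consistency: represent each
# indexer's assigned terms as a term-frequency vector and take the
# cosine similarity. Raw counts (no weighting) are an assumption.

import math
from collections import Counter

def cosine_consistency(terms_a, terms_b):
    """Cosine similarity between two indexers' term assignments."""
    va, vb = Counter(terms_a), Counter(terms_b)
    vocab = set(va) | set(vb)
    dot = sum(va[t] * vb[t] for t in vocab)
    norm_a = math.sqrt(sum(v * v for v in va.values()))
    norm_b = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

tagger = ["open access", "repositories", "metadata"]          # hypothetical tags
professional = ["open access", "metadata", "scholarly communication"]
print(round(cosine_consistency(tagger, professional), 3))
```

Because the measure works on whole vectors, it scales naturally to the large numbers of taggers per document that make pairwise exact-match consistency measures impractical.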
Abstract:
In database applications, access control security layers are mostly developed with tools provided by vendors of database management systems and deployed in the same servers containing the data to be protected. This solution entails several drawbacks. Among them we emphasize: 1) if policies are complex, their enforcement can lead to performance decay of database servers; 2) when modifications to the established policies imply modifications to the business logic (usually deployed at the client side), there is no other possibility than to modify the business logic in advance; and 3) malicious users can issue CRUD expressions systematically against the DBMS expecting to identify security gaps. In order to overcome these drawbacks, in this paper we propose an access control stack characterized by the following: most of the mechanisms are deployed at the client side; whenever security policies evolve, the security mechanisms are automatically updated at runtime; and client-side applications do not handle CRUD expressions directly. We also present an implementation of the proposed stack to prove its feasibility. This paper presents a new approach to enforcing access control in database applications, and in this way aims to contribute positively to the state of the art in the field.
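The client-side idea described in the abstract can be illustrated with a small sketch: the application calls typed methods on an access layer instead of issuing raw CRUD expressions, and each method checks the current policy first. The class, policy format and simulated data below are assumptions for illustration, not the paper's actual implementation.

```python
# Illustrative sketch of a client-side access control layer: the
# application never builds SQL itself; it calls typed methods that
# enforce the current policy before touching the (here simulated)
# database. The policy structure is a hypothetical simplification.

class OrdersAccess:
    """Client-side layer exposing vetted operations instead of raw CRUD."""

    def __init__(self, role, policy):
        self.role = role
        # policy maps roles to permitted actions; in the proposed stack
        # this would be refreshed automatically when policies evolve.
        self.policy = policy

    def _check(self, action):
        if action not in self.policy.get(self.role, set()):
            raise PermissionError(f"role '{self.role}' may not {action} orders")

    def get_order(self, order_id):
        self._check("select")
        # A real implementation would run a vetted, parameterised query here.
        return {"id": order_id, "status": "shipped"}

    def update_status(self, order_id, status):
        self._check("update")
        return True

policy = {"clerk": {"select"}, "admin": {"select", "update"}}
clerk = OrdersAccess("clerk", policy)
print(clerk.get_order(42)["status"])  # allowed: clerks may select
```

Because the application only ever sees `get_order` and `update_status`, a malicious client cannot probe the DBMS with arbitrary CRUD expressions, and a policy change only requires updating the policy data, not the business logic.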