26 results for research data management
Abstract:
On 23-24 September 2009 an international discussion workshop on "Main Drivers for Successful Re-Use of Research Data" was held in Berlin, prepared and organised by the Knowledge Exchange working group on Primary Research Data. The main focus of the workshop was on the benefits, challenges and obstacles of re-using data from a researcher's perspective. The use cases presented by researchers from a variety of disciplines were supplemented by two keynotes and selected presentations by specialists from infrastructure institutions, publishers, and funding bodies at national and European level.
Researchers' perspectives: The workshop provided a critical evaluation of the lessons learned on sharing and re-using research data from a researcher's perspective, and of the actions that might still be taken to improve successful re-use. Despite the individual differences characterising the diverse disciplines, it became clear that the important issues are comparable.
Combining forces to support re-use and sharing of data: Apart from several technical challenges, such as metadata exchange standards and quality assurance, it was obvious that the most important obstacles to re-using research data more efficiently are socially determined. It was agreed that, in order to overcome this problem, more effort should be made to raise awareness and combine forces to support the re-use and sharing of research data at all levels (researchers, institutions, publishers, funders, governments).
Abstract:
Spreadsheet list of discovered journal research data policies, gathered in 2015 as part of the JRPDR project. Research conducted for Jisc by Research Consulting and Spotlight Data. Underpins the project final report [http://repository.jisc.ac.uk/6264/]
Abstract:
Spreadsheet of discovered research data journal policies, collected in late 2015. Underpins the final report at: http://repository.jisc.ac.uk/6264/
Abstract:
A report of a joint ARMA, RLUK, RUGIT, SCONUL, UCISA and Jisc workshop that underpins the "Directions in Research Data Management" report. Presentations from the event can be found at: http://www.jisc.ac.uk/events/directions-for-research-data-management-in-uk-universities-06-nov-2014 A blog post about the event can be found at: http://researchdata.jiscinvolve.org/wp/2014/12/04/directions-in-research-data-management/
Abstract:
This report is the second Ithaka S+R / Jisc / RLUK survey of UK academics. It asks the UK research community about their views on resource discovery, their use of online and digital resources, their attitudes to research data management, and much more. It provides a powerful insight into how researchers view their own behaviour and the research environment in the UK today. It gives us pointers to how we can provide further support to researchers, and first indications of where resources would be best invested in the future.
Abstract:
This study was undertaken by UKOLN on behalf of the Joint Information Systems Committee (JISC) between April and September 2008. Application profiles are metadata schemata consisting of data elements drawn from one or more namespaces, optimised for a particular local application. They offer a way for particular communities to base the interoperability specifications they create and use for their digital material on established open standards. This offers the potential for digital materials to be accessed, used and curated effectively both within and beyond the communities in which they were created. The JISC recognised the need for a scoping study to investigate metadata application profile requirements for scientific data in relation to digital repositories, specifically concerning descriptive metadata to support resource discovery and other functions such as preservation. This followed on from the development of the Scholarly Works Application Profile (SWAP), undertaken within the JISC Digital Repositories Programme and led by Andy Powell (Eduserv Foundation) and Julie Allinson (RRT UKOLN) on behalf of the JISC.
Aims and Objectives
1. To assess whether a single metadata application profile for research data, or a small number thereof, would improve resource discovery or discovery-to-delivery in any useful or significant way.
2. If so, then to: (a) assess whether the development of such profile(s) is practical and, if so, how much effort it would take; (b) scope a community uptake strategy that is likely to be successful, identifying the main barriers and key stakeholders.
3. Otherwise, to investigate how best to improve cross-discipline, cross-community discovery-to-delivery for research data, and make recommendations to the JISC and others as appropriate.
Approach
The study used a broad conception of what constitutes scientific data, namely data gathered, collated, structured and analysed using a recognisably scientific method, with a bias towards quantitative methods.
The approach taken was to map out the landscape of existing data centres, repositories and associated projects, and to conduct a survey of the discovery-to-delivery metadata they use or have defined, alongside any insights they have gained from working with this metadata. This was followed up by a series of unstructured interviews, discussing use cases for a Scientific Data Application Profile and how widely a single profile might be applied. On the latter point, matters of granularity, the experimental/measurement contrast, the quantitative/qualitative contrast, the raw/derived data contrast, and the homogeneous/heterogeneous data collection contrast were discussed. The study report was loosely structured according to the Singapore Framework for Dublin Core Application Profiles, and in turn considered: the possible use cases for a Scientific Data Application Profile; existing domain models that could either be used or adapted for use within such a profile; and a comparison of existing metadata profiles and standards to identify candidate elements for inclusion in the description set profile for scientific data. The report also considered how the application profile might be implemented, its relationship to other application profiles, the alternatives to constructing a Scientific Data Application Profile, the development effort required, and what could be done to encourage uptake in the community. The conclusions of the study were validated through a reference group of stakeholders.
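The idea of an application profile — data elements drawn from several namespaces, with local constraints such as mandatory fields — can be illustrated with a minimal sketch. The record below mixes Dublin Core ("dc:") elements with a hypothetical domain namespace ("geo:"); the element names beyond Dublin Core and the validation rule are invented for illustration and do not come from any published profile.

```python
# A minimal, illustrative metadata record for a dataset, mixing elements
# from the Dublin Core namespace ("dc:") with a hypothetical
# domain-specific namespace ("geo:"), as an application profile might
# prescribe.
record = {
    "dc:title": "Sea surface temperature measurements, North Atlantic",
    "dc:creator": "Example Oceanographic Institute",
    "dc:date": "2008-06-01",
    "dc:type": "Dataset",
    "geo:boundingBox": "-40.0,45.0,-10.0,60.0",  # domain-specific element
}

def missing_elements(record, required=("dc:title", "dc:creator", "dc:date")):
    """Return the mandatory elements of this (hypothetical) profile
    that are absent from the record."""
    return [el for el in required if el not in record]

print(missing_elements(record))  # an empty list: all required elements present
```

A profile in this spirit lets each community add its own elements (here, the bounding box) while cross-community discovery services can still rely on the shared Dublin Core core.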
Abstract:
The report provides recommendations to policy makers in science and scholarly research regarding IPR policy to increase the impact of research and make the outcomes more available. The report argues that the impact of publicly-funded research outputs can be increased through a fairer balance between private and public interest in copyright legislation. This will allow for wider access to and easier re-use of published research reports. The common practice of authors being required to assign all rights to a publisher restricts the impact of research outputs and should be replaced by wider use of a non-exclusive licence. Full access and re-use rights to research data should be encouraged through use of a research-friendly licence.
Abstract:
This briefing paper offers insight into various open access business models, from institutional to subject repositories, and from open access journals to research data and monographs. This overview shows that there is considerable variety in business models within a common framework of public funding. Open access through institutional repositories requires funding from particular institutions to set up and maintain a repository, while subject repositories often require contributions from a number of institutions or funding agencies to maintain a subject repository hosted at one institution. Open access through publication in open access journals generally requires a mix of funding sources to meet the cost of publishing. Public or charitable research funding bodies may contribute part of the cost of publishing in an open access journal, but institutions also meet part of the cost, particularly when the author does not have a research grant from a research funding body.
Abstract:
In contrast to cost modelling activities, the pricing of services must be simple and transparent. Calculating, and thus knowing, price structures would not only help identify the level of detail required for the cost modelling of individual institutions, but would also help develop a "public" market for services, as well as clarify the division of tasks and the modelling of funding and revenue streams for data preservation at public institutions. This workshop built on the results of the workshop "The Costs and Benefits of Keeping Knowledge", which took place on 11 June 2012 in Copenhagen. This expert workshop aimed at:
• Identifying ways for data repositories to abstract from their complicated cost structures and arrive at one transparent pricing structure which can be aligned with available and plausible funding schemes. Those repositories will probably need a stable institutional funding stream for data management and preservation. Are there any estimates for this, absolute or as a percentage of overall cost? Part of the revenue will probably have to come through data management fees upon ingest. How could that be priced? Per dataset, per GB, or as a percentage of research cost? Will it be necessary to charge access prices, even though they contradict the open science paradigm?
• What are the price components for pricing individual services, and which prices are currently being paid, e.g. to commercial providers? What are the descriptions and conditions of the service(s) delivered and guaranteed?
• What types of risks are inherent in these pricing schemes?
• How can services and prices be defined in an all-inclusive and simple manner, so as to enable researchers to apply for a specific amount when asking for funding of data-intensive projects?
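The pricing alternatives raised above (per dataset or per GB upon ingest, versus a percentage of overall research cost) can be compared with simple arithmetic. The sketch below uses invented figures purely for illustration; none of the fees or rates are actual repository prices.

```python
# Two hypothetical "all-inclusive" deposit pricing schemes of the kind
# the workshop discussed. All figures are invented for illustration.

def price_per_volume(size_gb, ingest_fee=100.0, per_gb=5.0):
    """Fixed ingest fee plus a per-GB preservation charge."""
    return ingest_fee + per_gb * size_gb

def price_per_research_cost(research_cost, rate=0.015):
    """A flat percentage of the overall cost of the research project."""
    return rate * research_cost

# A 200 GB dataset from a project costing 100,000 (in any currency),
# priced under each scheme:
print(price_per_volume(200))             # 100 + 5 * 200 = 1100.0
print(price_per_research_cost(100_000))  # 1.5% of 100,000 = 1500.0
```

A transparent scheme of this shape would let researchers compute, and apply for, a specific amount at proposal time, which is precisely the simplicity the workshop asked for.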
Abstract:
Organised by Knowledge Exchange and the Nordbib programme, 11 June 2012, 8:30-12:30, Copenhagen, adjacent to the Nordbib conference 'Structural frameworks for open, digital research'. The Knowledge Exchange and the Nordbib programme organised a workshop on cost models for the preservation and management of digital collections. The rapid growth of the digital information which a wide range of institutions must preserve emphasises the need for robust cost modelling. Such models should enable these institutions to assess what resources are needed to sustain their digital preservation activities, and to compare different preservation solutions in order to select the most cost-efficient alternative. In order to justify the costs, institutions also need to describe the expected benefits of preserving digital information. This workshop provided an overview of existing models and demonstrated the functionality of some of the current cost tools. It considered the specific economic challenges of preserving research data and addressed the benefits of investing in the preservation of digital information. Finally, the workshop discussed international collaboration on cost models. The aim of the workshop was to facilitate understanding of the economics of data preservation and to discuss the value of developing an international benchmarking model for the costs and benefits of digital preservation. The workshop took place in the Danish Agency for Culture and was planned directly prior to the Nordbib conference 'Structural frameworks for open, digital research'.
Abstract:
Commissioned paper from Cameron Neylon (Curtin University) on citation practices for research data. Includes information on current (2016) global activity in the field, parallels with traditional citation, and recommendations.