914 results for Open Research Data


Relevance:

100.00%

Publisher:

Abstract:

Developing research data management infrastructure and services, and making research data more discoverable and accessible to the research community, is a key priority at the national, state and individual university levels. This paper discusses and reflects upon a collaborative project between Griffith University and the Queensland University of Technology to commission a Metadata Hub, or metadata aggregation service, based upon open source software components. It describes the role that metadata aggregation services play in modern research infrastructure and argues that this role is a critical one.
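A service of this kind typically harvests records from each partner's feed, maps them onto a common schema, and de-duplicates on a shared identifier. The sketch below illustrates that normalise-then-merge step only; the field names, feed names and merge rule are assumptions for illustration, not the project's actual design.

```python
# Hypothetical sketch of metadata aggregation: take records from several
# source feeds, normalise them to a minimal common schema, and keep the
# first occurrence of each identifier. All names are illustrative.

def normalise(record, source):
    """Map a source-specific record onto a minimal common schema."""
    return {
        "identifier": record["id"],
        "title": record.get("title", "").strip(),
        "source": source,
    }

def aggregate(feeds):
    """Merge normalised records from all feeds, de-duplicating on id."""
    merged = {}
    for source, records in feeds.items():
        for record in records:
            merged.setdefault(record["id"], normalise(record, source))
    return list(merged.values())

feeds = {
    "griffith": [{"id": "rec-1", "title": "Coral growth data "}],
    "qut": [{"id": "rec-1", "title": "Coral growth data"},
            {"id": "rec-2", "title": "Survey responses"}],
}
print(aggregate(feeds))
```

In practice such hubs are usually built on harvesting protocols such as OAI-PMH rather than in-memory dictionaries; the sketch only shows the aggregation logic.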

Relevance:

100.00%

Publisher:

Abstract:

INTRODUCTION: Queensland University of Technology (QUT) Library is partnering with High Performance Computing (HPC) services and the Division of Research and Commercialisation to develop and deliver a range of integrated research support services and systems designed to enhance the research capabilities of the University. Existing and developing research support services include support for publishing strategies (including open access), bibliographic citation and ranking services, research data management, use of online collaboration tools, online survey tools, quantitative and qualitative data analysis, and content management and storage solutions. In order to deliver timely and effective research referral and support services, it is imperative that library staff maintain their awareness of, and develop expertise in, new eResearch methods and technologies.

METHODS: In 2009/10 QUT Library initiated an online survey for support staff and researchers, and a series of focus groups for researchers, aimed at gaining a better understanding of current and future eResearch practices and skills. These would better inform the development of a research skills training program and of new research support services. The Library and HPC also implemented a program of seminars and workshops designed to introduce key library staff to a broad range of eResearch concepts and technologies. Feedback was obtained after each training session. A number of new services were implemented throughout 2009 and 2010.

RESULTS: Key findings of the survey and focus groups are related to the development of the staff development program. Feedback from program attendees is provided and evaluated. The staff development program is assessed in terms of its success in supporting the implementation of new research support services.

CONCLUSIONS: QUT Library has embarked on an ambitious awareness and skills development program to assist Library staff through a period of rapid change and broadening scope for the Library. Successes and challenges of the program are discussed. A number of recommendations are made, both in retrospect and looking forward to the future training needs of Library staff to support the University's research goals.

Relevance:

100.00%

Publisher:

Abstract:

Researchers are increasingly involved in data-intensive research projects that cut across geographic and disciplinary borders. Quality research now often involves virtual communities of researchers participating in large-scale web-based collaborations, opening their early-stage research to the research community in order to encourage broader participation and accelerate discoveries. The result of such large-scale collaborations has been the production of ever-increasing amounts of data. In short, we are in the midst of a data deluge. Accompanying these developments has been a growing recognition that if the benefits of enhanced access to research are to be realised, it will be necessary to develop the systems and services that enable data to be managed and secured. It has also become apparent that to achieve seamless access to data it is necessary not only to adopt appropriate technical standards, practices and architecture, but also to develop legal frameworks that facilitate access to and use of research data. This chapter provides an overview of the current research landscape in Australia as it relates to the collection, management and sharing of research data. The chapter then explains the Australian legal regimes relevant to data, including copyright, patent, privacy, confidentiality and contract law. Finally, this chapter proposes the infrastructure elements that are required for the proper management of legal interests, ownership rights and rights to access and use data collected or generated by research projects.

Relevance:

100.00%

Publisher:

Abstract:

Advances in information and communication technologies have brought about an information revolution, leading to fundamental changes in the way that information is collected or generated, shared and distributed. The importance of establishing systems in which research findings can be readily made available to, and used by, other researchers has long been recognized in international scientific collaborations. If the data access principles adopted by international scientific collaborations are to be effectively implemented, they must be supported by the national policies and laws in place in the countries in which participating researchers are operating.

Relevance:

100.00%

Publisher:

Abstract:

While undertaking the ANDS RDA Gold Standard Record Exemplars project, we discussed research data sharing with many QUT researchers. Our experiences provided rich insight into researcher attitudes towards their data and the sharing of such data. Generally, we found that traditional altruistic motivations for research data sharing did not inspire researchers; explanations of the more achievement-oriented benefits proved more compelling.

Relevance:

100.00%

Publisher:

Abstract:

The Queensland University of Technology (QUT) in Brisbane, Australia, is involved in a number of projects funded by the Australian National Data Service (ANDS). Currently, QUT is working on a project (Metadata Stores Project) that uses open source VIVO software to aid in the storage and management of metadata relating to data sets created/managed by the QUT research community. The registry (called QUT Research Data Finder) will support the sharing and reuse of research datasets, within and external to QUT. QUT uses VIVO for both the display and the editing of research metadata.
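VIVO represents metadata as RDF, i.e. subject-predicate-object triples queried by pattern matching. A minimal sketch of that idea follows; the URIs and properties are invented for illustration, not QUT's actual vocabulary.

```python
# Minimal sketch of triple-based metadata storage, the model used by RDF
# stores such as the one underlying VIVO. All URIs here are invented.

triples = [
    ("qut:dataset-42", "rdf:type", "vivo:Dataset"),
    ("qut:dataset-42", "rdfs:label", "Reef temperature logs"),
    ("qut:dataset-42", "vivo:relatedBy", "qut:project-7"),
]

def match(pattern):
    """Return triples matching an (s, p, o) pattern; None is a wildcard."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Every statement about dataset-42:
print(match(("qut:dataset-42", None, None)))
```

A real store would use SPARQL over a persistent graph, but the pattern-matching query model is the same.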

Relevance:

100.00%

Publisher:

Abstract:

In 2012 the existing eight disciplines of the Creative Industries Faculty, QUT, combined with the School of Design (formerly a component of the Faculty of Built Environment and Engineering) to create a super faculty that includes the following disciplines: Architecture, Creative Writing & Literary Studies, Dance, Drama, Fashion, Film & Television, Industrial Design, Interior Design, Journalism, Media & Communication, Landscape Architecture, Music & Sound, and Urban Design. The university's research training unit AIRS (Advanced Information Retrieval Skills) is a systematic introduction to research-level information literacies. It is currently being redesigned to reflect today's data-intensive research environment and to facilitate the capacity for life-long learning. Upon completion, participants are expected to be able to:
1. Demonstrate an understanding of the theory of advanced search and evaluative strategies to efficiently yield appropriate resources for original research.
2. Apply appropriate data management strategies to organise and use their information proficiently, ethically and legally.
3. Identify strategies to ensure best practice in the use of information sources, information technologies, information access tools and investigative methods.
All Creative Industries Faculty research students must complete this unit, into which CI Librarians teach discipline-specific material. The Library employs a team of research-specific experts as well as Liaison Librarians for each faculty.

Together they develop and deliver a generic research training program covering the following areas: Managing Research Data; QUT ePrints: New features for tracking your research impact; Tracking Research Impact; Research Students and the Library: Overview of Library Research Support Services; Technologies for Research Collaboration; Open Access Publishing; Greater Impact via Creative Commons Licence; CAMBIA: Navigating the patent literature; Uploading Publications to QUT ePrints Workshop; AIRS for supervisors; Finding Existing Research Data; Keeping up to date: Discovering and managing current awareness information; and Getting Published. In 2011 Creative Industries initiated a new faculty-specific research training program to promote capacity building for research within the Faculty, with workshops designed and developed with Faculty Research Leaders, the Office of Research and Liaison Librarians. "Show me the money", a session that assists staff to pursue alternative funding sources, was well attended and generated much discussion and interest. Drop-in support sessions for ePrints, EndNote referencing software and Tracking Research Impact for the Creative Industries were also popular options on the menu. Liaison Librarians continue to provide one-on-one consultations with individual researchers as requested; this service greatly assists Librarians in getting to know their researchers and monitoring their changing needs. The CI Faculty has enlisted two Research Leaders, one for each of its two Schools (Design, and Media, Entertainment & Creative Arts), whose role is to mentor newer research staff. Similarly, within the CI library liaison team one librarian is assigned the role of Research Coordinator, whose responsibility is to act as the primary liaison with the Assistant Dean (Research) and other key Faculty research managers, and who is the one most likely to attend Faculty committees and meetings relating to research support.

Relevance:

100.00%

Publisher:

Abstract:

Developing and maintaining a successful institutional repository for research publications requires a considerable investment by the institution. Most of the money is spent on developing the skill-sets of existing staff or hiring new staff with the necessary skills. The return on this investment can be magnified by using this valuable infrastructure to curate collections of other materials such as learning objects, student work, conference proceedings and institutional or local community heritage materials. When Queensland University of Technology (QUT) implemented its repository for research publications (QUT ePrints) over 11 years ago, it was one of the first institutional repositories to be established in Australia. Currently, the repository holds over 29,000 open access research publications, and the cumulative total number of full-text downloads for these documents now exceeds 16 million. The full-text deposit rate for recently-published peer reviewed papers (currently over 74%) shows how well the repository has been embraced by QUT researchers. The success of QUT ePrints has resulted in requests to accommodate a plethora of materials which are 'out of scope' for this repository. QUT Library saw this as an opportunity to use its repository infrastructure (software, technical know-how and policies) to develop and implement a metadata repository for its research datasets (QUT Research Data Finder), a repository for research-related software (QUT Software Finder), and to curate a number of digital collections of institutional and local community heritage materials (QUT Digital Collections). This poster describes the repositories and digital collections curated by QUT Library and outlines the value delivered to the institution, and the wider community, by these initiatives.

Relevance:

100.00%

Publisher:

Abstract:

This study was undertaken by UKOLN on behalf of the Joint Information Systems Committee (JISC) between April and September 2008. Application profiles are metadata schemata which consist of data elements drawn from one or more namespaces, optimized for a particular local application. They offer a way for particular communities to base the interoperability specifications they create and use for their digital material on established open standards. This offers the potential for digital materials to be accessed, used and curated effectively both within and beyond the communities in which they were created. The JISC recognized the need to undertake a scoping study to investigate metadata application profile requirements for scientific data in relation to digital repositories, specifically concerning descriptive metadata to support resource discovery and other functions such as preservation. This followed on from the development of the Scholarly Works Application Profile (SWAP), undertaken within the JISC Digital Repositories Programme and led by Andy Powell (Eduserv Foundation) and Julie Allinson (RRT UKOLN) on behalf of the JISC.

Aims and objectives:
1. To assess whether a single metadata application profile for research data, or a small number thereof, would improve resource discovery or discovery-to-delivery in any useful or significant way.
2. If so, then to: (a) assess whether the development of such profile(s) is practical and, if so, how much effort it would take; and (b) scope a community uptake strategy that is likely to be successful, identifying the main barriers and key stakeholders.
3. Otherwise, to investigate how best to improve cross-discipline, cross-community discovery-to-delivery for research data, and make recommendations to the JISC and others as appropriate.

Approach: The Study used a broad conception of what constitutes scientific data, namely data gathered, collated, structured and analysed using a recognizably scientific method, with a bias towards quantitative methods. The approach taken was to map out the landscape of existing data centres, repositories and associated projects, and to conduct a survey of the discovery-to-delivery metadata they use or have defined, alongside any insights they have gained from working with this metadata. This was followed up by a series of unstructured interviews discussing use cases for a Scientific Data Application Profile and how widely a single profile might be applied. On the latter point, matters of granularity, the experimental/measurement contrast, the quantitative/qualitative contrast, the raw/derived data contrast, and the homogeneous/heterogeneous data collection contrast were discussed. The Study report was loosely structured according to the Singapore Framework for Dublin Core Application Profiles, and in turn considered: the possible use cases for a Scientific Data Application Profile; existing domain models that could either be used or adapted for use within such a profile; and a comparison of existing metadata profiles and standards to identify candidate elements for inclusion in the description set profile for scientific data. The report also considered how the application profile might be implemented, its relationship to other application profiles, the alternatives to constructing a Scientific Data Application Profile, the development effort required, and what could be done to encourage uptake in the community. The conclusions of the Study were validated through a reference group of stakeholders.
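The core mechanics of an application profile can be sketched simply: a set of elements drawn from existing namespaces, each carrying local constraints, against which records are validated. The elements and constraints below are hypothetical; they are not those of SWAP or the proposed Scientific Data Application Profile.

```python
# Illustrative sketch of a metadata application profile: elements drawn
# from an existing namespace (here Dublin Core), each marked mandatory or
# optional for this local application. The choices are hypothetical.

profile = {
    "dc:title":   {"mandatory": True},
    "dc:creator": {"mandatory": True},
    "dc:date":    {"mandatory": False},
}

def validate(record):
    """Return the mandatory profile elements missing from a record."""
    return [el for el, rules in profile.items()
            if rules["mandatory"] and el not in record]

record = {"dc:title": "Tide gauge readings", "dc:date": "2008-09"}
print(validate(record))
```

Real profiles also constrain value vocabularies and cardinality, as the Singapore Framework's description set profiles do, but the validate-against-constraints pattern is the same.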

Relevance:

100.00%

Publisher:

Abstract:

Authority files serve to uniquely identify real-world 'things' or entities, such as documents, persons and organisations, and their properties, such as relations and features. Already important in the classical library world, authority files are indispensable for adequate information retrieval and analysis in the computer age. This is because, even more than humans, computers are poor at handling ambiguity. Through authority files, people tell computers which terms, names or numbers refer to the same thing or have the same meaning by giving equivalent notions the same identifier. Thus, authority files signpost the internet, where these identifiers are interlinked on the basis of relevance. When executing a query, computers are able to navigate from identifier to identifier by following these links and collect the queried information along these so-called 'crosswalks'. In this context, identifiers also go under the name of controlled access points. Identifiers become even more crucial now that massive data collections, such as library catalogues or research datasets, are releasing their previously contained data directly to the internet. This development is known as Linked Open Data. The corresponding name for this internet is the Web of Data, rather than the classical Web of Documents.
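The mechanism described above reduces to a mapping from variant forms to one controlled identifier. The sketch below illustrates that resolution step; the identifier scheme and name variants are invented for illustration.

```python
# Sketch of an authority file: variant names mapped to one controlled
# identifier so that equivalent notions resolve to the same entity.
# The identifier scheme and variants are invented.

authority = {
    "auth:0001": {"Dickens, Charles", "Charles Dickens", "C. Dickens"},
}

def resolve(name):
    """Return the controlled identifier for a known variant, or None."""
    for ident, variants in authority.items():
        if name in variants:
            return ident
    return None

print(resolve("C. Dickens"))
```

In production systems the identifier would itself be a dereferenceable URI, which is what lets computers follow links between authority files on the Web of Data.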

Relevance:

100.00%

Publisher:

Abstract:

The aim of this research, which focused on the Irish adult population, was to generate information for policymakers by applying statistical analyses and current technologies to oral health administrative and survey databases. Objectives included identifying socio-demographic influences on oral health and utilisation of dental services, comparing epidemiologically-estimated dental treatment need with treatment provided, and investigating the potential of a dental administrative database to provide information on utilisation of services and the volume and types of treatment provided over time. Information was extracted from the claims databases for the Dental Treatment Benefit Scheme (DTBS) for employed adults and the Dental Treatment Services Scheme (DTSS) for less-well-off adults, the National Surveys of Adult Oral Health, and the 2007 Survey of Lifestyle Attitudes and Nutrition in Ireland. Factors associated with utilisation and retention of natural teeth were analysed using count data models and logistic regression. The chi-square test and Student's t-test were used to compare epidemiologically-estimated need in a representative sample of adults with treatment provided. Differences were found in dental care utilisation and tooth retention by socio-economic status. An analysis of the five-year utilisation behaviour of a 2003 cohort of DTBS dental attendees revealed that age and being female were positively associated with visiting annually and with the number of treatments received. The number of adults using the DTBS increased, and the mean number of treatments per patient decreased, between 1997 and 2008. As a percentage of overall treatments, restorations, dentures and extractions decreased, while prophylaxis increased. Differences were found between epidemiologically-estimated treatment need and treatment provided for those using the DTBS and DTSS. This research confirms the utility of survey and administrative data to generate knowledge for policymakers. Public administrative databases have not been designed for research purposes, but they have the potential to provide a wealth of knowledge on treatments provided and utilisation patterns.
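As a rough illustration of the kind of comparison involved, a Pearson chi-square statistic measures the discrepancy between treatment counts provided and epidemiologically-estimated need. The counts and categories below are invented, not the study's data.

```python
# Illustrative Pearson chi-square statistic over paired category counts.
# The treatment categories and counts are invented for illustration.

def chi_square(observed, expected):
    """Sum of (O - E)^2 / E over matched categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

provided = [120, 80, 50]        # e.g. restorations, extractions, dentures
estimated_need = [100, 90, 60]  # invented epidemiological estimates
stat = chi_square(provided, estimated_need)
print(round(stat, 2))
```

The statistic would then be compared against a chi-square distribution with the appropriate degrees of freedom, which a statistics package normally handles.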

Relevance:

100.00%

Publisher:

Abstract:

Objectives: To assess whether open angle glaucoma (OAG) screening meets the UK National Screening Committee criteria, to compare screening strategies with case finding, to estimate test parameters, to model estimates of cost and cost-effectiveness, and to identify areas for future research.

Data sources: Major electronic databases were searched up to December 2005.

Review methods: Screening strategies were developed by wide consultation. Markov submodels were developed to represent screening strategies. Parameter estimates were determined by systematic reviews of epidemiology, economic evaluations of screening, and effectiveness (test accuracy, screening and treatment). Tailored, highly sensitive electronic searches were undertaken.

Results: Most potential screening tests reviewed had an estimated specificity of 85% or higher. No test was clearly the most accurate, with only a few, heterogeneous studies for each test. No randomised controlled trials (RCTs) of screening were identified. Based on two treatment RCTs, early treatment reduces the risk of progression. Extrapolating from this, and assuming accelerated progression with advancing disease severity, the mean time to blindness in at least one eye without treatment was approximately 23 years, compared to 35 years with treatment. Prevalence would have to be about 3-4% in 40-year-olds, with a screening interval of 10 years, to approach cost-effectiveness. It is predicted that screening might be cost-effective in a 50-year-old cohort at a prevalence of 4% with a 10-year screening interval. General population screening at any age thus appears not to be cost-effective. Selective screening of groups with higher prevalence (family history, black ethnicity) might be worthwhile, although this would only cover 6% of the population. Extension to include other at-risk cohorts (e.g. myopia and diabetes) would include 37% of the general population, but the prevalence is then too low for screening to be considered cost-effective. Screening using a test with initial automated classification, followed for test positives by assessment by a specialised optometrist, was more cost-effective than initial specialised optometric assessment. The cost-effectiveness of the screening programme was highly sensitive to the perspective on costs (NHS or societal). In the base-case model, the NHS costs of visual impairment were estimated at £669. If annual societal costs were £8800, then screening might be considered cost-effective for a 40-year-old cohort with 1% OAG prevalence, assuming a willingness to pay of £30,000 per quality-adjusted life-year. Of lesser importance were changes to estimates of attendance for sight tests, incidence of OAG, rate of progression and utility values for each stage of OAG severity. Cost-effectiveness was not particularly sensitive to the accuracy of screening tests within the ranges observed; however, a highly specific test is required to reduce the large number of false-positive referrals. The finding that population screening is unlikely to be cost-effective is based on an economic model whose parameter estimates have considerable uncertainty; in particular, if the rate of progression and/or the costs of visual impairment are higher than estimated, then screening could be cost-effective.

Conclusions: While population screening is not cost-effective, the targeted screening of high-risk groups may be. Procedures for identifying those at risk, for quality assuring the programme, and adequate service provision for those screened positive would all be needed. Glaucoma detection can be improved by increasing attendance for eye examination, and by improving the performance of current testing, either by refining practice or by adding a technology-based first assessment, the latter being the more cost-effective option. This has implications for any future organisational changes in community eye-care services. Further research should aim to develop and provide quality data to populate the economic model: by conducting a feasibility study of interventions to improve detection, by obtaining further data on the costs of blindness, risk of progression and health outcomes, and by conducting an RCT of interventions to improve the uptake of glaucoma testing. © Queen's Printer and Controller of HMSO 2007. All rights reserved.
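A Markov submodel of the kind mentioned under "Review methods" propagates a cohort through disease states year by year using a transition matrix. The states, transition probabilities and time horizon below are invented for illustration; they are not the study's estimates.

```python
# Minimal Markov cohort model sketch. Each year, the fraction of the
# cohort in each state is redistributed according to transition
# probabilities (each row sums to 1). All numbers are invented.

states = ["well", "OAG", "blind"]
transitions = {
    "well":  {"well": 0.995, "OAG": 0.005, "blind": 0.0},
    "OAG":   {"well": 0.0,   "OAG": 0.96,  "blind": 0.04},
    "blind": {"well": 0.0,   "OAG": 0.0,   "blind": 1.0},
}

def run(cohort, years):
    """Propagate a cohort distribution through the model for `years`."""
    for _ in range(years):
        nxt = {s: 0.0 for s in states}
        for s, frac in cohort.items():
            for t, p in transitions[s].items():
                nxt[t] += frac * p
        cohort = nxt
    return cohort

final = run({"well": 1.0, "OAG": 0.0, "blind": 0.0}, 10)
print({s: round(v, 4) for s, v in final.items()})
```

Cost-effectiveness analyses attach a cost and a utility to each state-year and discount them over the horizon; this sketch shows only the state-transition core.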

Relevance:

100.00%

Publisher:

Abstract:

To make full use of research data, the bioscience community needs to adopt technologies and reward mechanisms that support interoperability and promote the growth of an open 'data commoning' culture. Here we describe the prerequisites for data commoning and present an established and growing ecosystem of solutions using the shared 'Investigation-Study-Assay' framework to support that vision.
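The 'Investigation-Study-Assay' framework organises experimental metadata hierarchically: an Investigation contains Studies, each of which contains Assays. A minimal sketch of that containment structure follows; the fields shown are a simplification of the full ISA model.

```python
# Sketch of the Investigation-Study-Assay hierarchy. The full ISA model
# carries many more fields (protocols, factors, sample characteristics);
# only the containment structure is shown here.

from dataclasses import dataclass, field

@dataclass
class Assay:
    measurement: str

@dataclass
class Study:
    title: str
    assays: list = field(default_factory=list)

@dataclass
class Investigation:
    title: str
    studies: list = field(default_factory=list)

inv = Investigation("Metabolite profiling")
study = Study("Mouse liver study")
study.assays.append(Assay("mass spectrometry"))
inv.studies.append(study)

print(inv.studies[0].assays[0].measurement)
```

Sharing this one hierarchy across tools is what gives the ISA ecosystem its interoperability: any tool that understands the structure can consume another tool's output.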

Relevance:

100.00%

Publisher:

Abstract:

SOA (Service Oriented Architecture), workflow, the Semantic Web, and Grid computing are key enabling information technologies in the development of increasingly sophisticated e-Science infrastructures and application platforms. While the emergence of Cloud computing as a new computing paradigm has provided new directions and opportunities for e-Science infrastructure development, it also presents challenges. Scientific research is increasingly finding it difficult to handle "big data" using traditional data processing techniques. Such challenges demonstrate the need for a comprehensive analysis of how the above-mentioned informatics techniques can be used to develop appropriate e-Science infrastructure and platforms in the context of Cloud computing. This survey paper describes recent research advances in applying informatics techniques to facilitate scientific research, particularly from the Cloud computing perspective. Our particular contributions include identifying associated research challenges and opportunities, presenting lessons learned, and describing our future vision for applying Cloud computing to e-Science. We believe our research findings can help indicate future trends in e-Science, and can inform funding and research directions on how to employ computing technologies more appropriately in scientific research. We point out open research issues in the hope of sparking new development and innovation in the e-Science field.

Relevance:

100.00%

Publisher:

Abstract:

Objective: To outline the importance of the clarity of data analysis in the doing and reporting of interview-based qualitative research.

Approach: We explore the clear links between data analysis and evidence. We argue that transparency in the data analysis process is integral to determining the evidence that is generated. Data analysis must occur concurrently with data collection, and comprises an ongoing process of 'testing the fit' between the data collected and the analysis. We discuss four steps in the process of thematic data analysis: immersion, coding, categorising and generation of themes.

Conclusion: Rigorous and systematic analysis of qualitative data is integral to the production of high-quality research. Studies that give an explicit account of the data analysis process provide insights into how conclusions are reached while studies that explain themes anchored to data and theory produce the strongest evidence.
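The coding and categorising steps described above can be sketched as tagging data segments with codes and then grouping the codes under themes. The segments, codes and themes below are invented for illustration; real thematic analysis is iterative and interpretive rather than mechanical.

```python
# Sketch of the coding -> categorising steps of thematic analysis:
# interview segments are tagged with codes, and codes are grouped into
# themes. All segments, codes and themes here are invented.

coded_segments = [
    ("I never know where my files end up", "storage"),
    ("Sharing feels risky without clear rules", "trust"),
    ("Backups are someone else's job", "storage"),
]

themes = {
    "data management": {"storage"},
    "data sharing": {"trust"},
}

def categorise(segments):
    """Group coded segments under the theme that contains their code."""
    out = {theme: [] for theme in themes}
    for text, code in segments:
        for theme, codes in themes.items():
            if code in codes:
                out[theme].append(text)
    return out

print(categorise(coded_segments))
```

Making this mapping explicit in a report is one concrete way to give the transparent account of analysis that the authors call for.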