925 results for Research Infrastructure
Abstract:
Developing research data management infrastructure and services, and making research data more discoverable and accessible to the research community, is a key priority at the national, state and individual university level. This paper will discuss and reflect upon a collaborative project between Griffith University and the Queensland University of Technology to commission a Metadata Hub, or metadata aggregation service, based upon open source software components. It will describe the role that metadata aggregation services play in modern research infrastructure and argue that this role is a critical one.
Abstract:
This paper describes the scientific aims and potential, as well as the preliminary technical design, of IRIDE, an innovative tool for multi-disciplinary investigations in a wide field of scientific, technological and industrial applications. IRIDE will be a high-intensity "particle factory", based on a combination of high-duty-cycle radio-frequency superconducting electron linacs and high-energy lasers. Conceived to provide unique research possibilities for particle physics, for condensed matter physics, chemistry and material science, for structural biology and for industrial applications, IRIDE will open completely new research possibilities and advance our knowledge in many branches of science and technology. IRIDE is expected to be realized in subsequent stages of development, depending on the assigned priorities. © 2013 Elsevier B.V.
Abstract:
In this position paper, we describe the current status of, and plans for, a Swiss National Research Infrastructure. Swiss academic and research institutions are very autonomous. Loosely coupled, they do not rely on any centralized management entities. A coordinated national research infrastructure can therefore only be established by federating the local resources of the individual institutions. We discuss current efforts and business models for a federated infrastructure.
Abstract:
This document summarizes a major part of the work performed by the FP7-JERICO consortium, comprising 27 partner institutions, over 4 years (2011-2015). Its objective is to propose a strategy for European coastal observation and monitoring. To do so, we give an overview of the main achievements of the FP7-JERICO project. From this overview, gaps are analysed and recommendations for the future are drawn. Overview, gaps and recommendations are addressed at both the hardware and software levels of the JERICO Research Infrastructure. The main part of the document builds upon this analysis, and upon discussions held in dedicated JERICO strategy workshops, to set out a general strategy for the future, identifying the priorities to be targeted and some possible funding mechanisms. This document was initiated in 2014 by the coordination team, but because an overview of the entire project and its achievements was needed to feed this strategy deliverable, it could not be completed before the end of FP7-JERICO in April 2015. The preparation of the JERICO-NEXT proposal in summer 2014, in answer to an H2020 call for proposals, pushed the consortium forward and fed deep reflection on this strategy, but the intention was not to propose a strategy bounded only by the JERICO-NEXT answer. The authors are conscious that writing JERICO-NEXT may have introduced a bias into their thinking, and they have tried to remain open. Nevertheless, comments are always welcome to take the strategy further.
Structure of the document: Chapter 3 introduces the need for sustained coastal observatories from different points of view, including a short description of the FP7-JERICO project. Chapter 4 provides a region-by-region analysis of the JERICO coastal observatory hardware (platforms and sensors) in terms of its status at the end of JERICO, identified gaps and recommendations for further development; the main challenges that remain to be overcome are also summarized. Chapter 5 is dedicated to the JERICO infrastructure software (calibration, operation, quality assessment, data management) and the progress made through JERICO on the harmonization of procedures and the definition of best practices. Chapter 6 provides the elements of a strategy towards sustainable and integrated coastal observations for Europe, drawing a roadmap for cost-effective, science-based consolidation of the present infrastructure while maximizing the potential arising from JERICO in terms of innovation, wealth creation and business development. After reading Chapter 3, a reader unfamiliar with JERICO can read any of the remaining chapters independently. More details are available in the JERICO final and intermediate reports; all of these, along with every deliverable, are available on the JERICO website (www.jerico-FP7.eu). Each chapter lists the JERICO documents it refers to, and a short bibliography is available at the end of this deliverable.
Abstract:
Queensland University of Technology (QUT) is a large multidisciplinary university located in Brisbane, Queensland, Australia. QUT is increasing its research focus and is developing its research support services. It has adopted a model of collaboration between the Library, High Performance Computing and Research Support (HPC) and, more broadly, Information Technology Services (ITS). Research support services provided by the Library include the provision of information resources and discovery services, bibliographic management software, assistance with publishing (publishing strategies, identifying high-impact journals, dealing with publishers and the peer review process), citation analysis and calculating authors' H-index. Research data management services are being developed by the Library and HPC working in collaboration. The HPC group within ITS supports research computing infrastructure, research development and engagement activities, researcher consultation, high-speed computation and data storage systems, 2D/3D (immersive) visualisation tools, parallelisation and optimisation of research codes, statistics/data modelling training and support (both qualitative and quantitative), and the university's central Access Grid collaboration facility. Development and engagement activities include participation in research grants and papers, student supervision and internships, and the sponsorship, incubation and adoption of new computing technologies for research. ITS also provides other services that support research, including ICT training, research infrastructure (networking, data storage, federated access and authorisation, virtualisation) and corporate systems for research administration. Seminars and workshops are offered to increase awareness and uptake of new and existing services. A series of online surveys on eResearch practices and skills and a number of focus groups were conducted to better inform the development of research support services.
Progress towards the provision of research support is described within the context of organisational frameworks; resourcing; infrastructure; integration; collaboration; change management; engagement; awareness and skills; new services; and leadership. Challenges to be addressed include the need to redeploy existing operational resources toward new research support services, supporting a rapidly growing research profile across the university, the growing need for the use and support of IT in research programs, finding the capacity to address diverse research support needs across the disciplines, operationalising new research support services following their implementation in project mode, embedding new specialist staff roles, cross-skilling Liaison Librarians, and ensuring continued collaboration between stakeholders.
Abstract:
Staff and students of the Surveying and Spatial Sciences discipline at QUT have worked collaboratively with the Institute of Sustainable Resources in the creation and development of spatial information layers and infrastructure to support multi-disciplinary research efforts at the Samford Ecological Research Facility (SERF). The SERF property is unique in that it provides staff and students with a semi-rural controlled research base for multiple users. This paper aims to describe the development of a number of spatial information layers and a network of survey monuments that assist and support research infrastructure at SERF. A brief historical background of the facility is presented, along with descriptions of the surveying and mapping activities undertaken. These broad-ranging activities include introducing monument infrastructure and a geodetic control network; surveying activities for aerial photography ground-control targets, including precise levelling with barcode instruments; development of an ortho-rectified image spatial information layer; Real-Time Kinematic Global Positioning System (RTK-GPS) surveying for constructing 100-metre confluence points/monuments to support science-based disciplines in undertaking environmental research transects and long-term ecological sampling; and a real-world learning initiative to assist with water engineering projects and student experiential learning. The spatial information layers and physical infrastructure have been adopted by two specific yet diverse user groups with an interest in the long-term research focus of SERF.
Abstract:
QUT’s new metadata repository (data registry), Research Data Finder, has been designed to promote the visibility and discoverability of QUT research datasets. Funded by the Australian National Data Service (ANDS), it will provide a qualitative snapshot of research data outputs created or collected by members of the QUT research community that are available via open or mediated access. As a fully integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT’s research administrative system, ResearchMaster, as well as QUT’s Academic Profiles system, to provide high-quality data descriptions that increase awareness of, and access to, shareable research data. In addition, the repository and its workflows are designed to foster smoother data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximise the reuse of existing research datasets. The metadata schema used in Research Data Finder is the Registry Interchange Format - Collections and Services (RIF-CS), developed by ANDS in 2009. This comprehensive schema is potentially complex for researchers; unlike metadata for publications, which are often made publicly available with the official publication, metadata for datasets are not typically available and need to be created. Research Data Finder uses a hybrid self-deposit and mediated deposit system. In addition to automated ingests from ResearchMaster (research project information) and the Academic Profiles system (researcher information), shareable data is identified at a number of key “trigger points” in the research cycle. These include: research grant proposals; ethics applications; Data Management Plans; Liaison Librarian data interviews; and thesis submissions. These ingested records can be supplemented with related metadata, including links to related publications, such as those in QUT ePrints.
Records deposited in Research Data Finder are harvested by ANDS and made available to a national and international audience via Research Data Australia, ANDS’ discovery service for Australian research data. Researcher and research group metadata records are also harvested by the National Library of Australia (NLA), and these records are then published in Trove (the NLA’s digital information portal). By contributing records to this national infrastructure, QUT makes its data more visible. Within Australia and internationally, many funding bodies have already mandated open access to publications produced from publicly funded research projects, such as those supported by the Australian Research Council (ARC) or the National Health and Medical Research Council (NHMRC). QUT will thus be well placed to respond to the rapidly evolving climate of research data management. This project is supported by the Australian National Data Service (ANDS). ANDS is supported by the Australian Government through the National Collaborative Research Infrastructure Strategy Program and the Education Investment Fund (EIF) Super Science Initiative.
Abstract:
This chapter analyses the copyright law framework needed to ensure open access to outputs of the Australian academic and research sector such as journal articles and theses. It overviews the new knowledge landscape, the principles of copyright law, the concept of open access to knowledge, the recently developed open content models of copyright licensing and the challenges faced in providing greater access to knowledge and research outputs.
Abstract:
Finite-state methods have been adopted widely in computational morphology and related linguistic applications. To enable efficient development of finite-state based linguistic descriptions, these methods should be a freely available resource for academic language research and the language technology industry. The following needs can be identified: (i) a registry that maps the existing approaches, implementations and descriptions, (ii) managing the incompatibilities of the existing tools, (iii) increasing synergy and complementary functionality of the tools, (iv) persistent availability of the tools used to manipulate the archived descriptions, (v) an archive for free finite-state based tools and linguistic descriptions. Addressing these challenges contributes to building a common research infrastructure for advanced language technology.
Knowledge Exchange study: How Research Tools are of Value to Research: Use Cases and Recommendations
Abstract:
Research tools that are freely available and accessible via the Internet constitute an emergent field in the worldwide research infrastructure. Clearly, research tools have increasing value for researchers in their research activities. Knowledge Exchange recently commissioned a project to explore use case studies that show the potential and relevance of research tools for the present research landscape. Makers of successful research tools were asked questions such as: How are these research tools developed? What are their possibilities? How many researchers use them? What does this new phenomenon mean for the research infrastructure? In addition to the use cases, the authors offer observations and recommendations to contribute to the effective development of a research infrastructure that can benefit optimally from research tools. The use cases are:
• Averroes Goes Digital: Transformation, Translation, Transmission and Edition
• BRIDGE: Tools for Media Studies Researchers
• Multiple Researchers, Single Platform: A Virtual Tool for the 21st Century
• The Fabric of Life
• Games with a Purpose: How Games Are Turning Image Tagging into Child’s Play
• Elmer: Modelling a Future
• Molecular Modelling with SOMA2
• An Online Renaissance for Music: Making Early Modern Music Readable
• Radio Recordings for Research: How a Million Hours of Danish Broadcasts Were Made Accessible
• Salt Rot: A Central Space for Essential Research
• Cosmos: Opening Up Social Media for Social Science
A brief analysis by the authors can be found in "Some Observations Based on the Case Studies of Research Tools".
Abstract:
A substantial amount of the 'critical mass' of digital data available to scholarship contains place-names, and it is now recognised that spatial and temporal data points, including place-names, are a vital part of the e-research infrastructure that supports the use, re-use and advanced analysis of data using ICT tools and methods. Place-names can also be linked semantically to contribute to the web of data, to enrich content by linking existing data, and to identify new collections for digitisation that strategically enhance existing digital collections. However, existing e-research projects rely on modern gazetteers, limiting them to the modern and near-contemporary periods. This workshop explored how to further integrate the wealth of historical place-name scholarship, and the resulting digital resources generated within UK academia, so enabling the integration of local knowledge over much longer periods.
Abstract:
This paper presents the TEC4SEA research infrastructure created in Portugal to support the research, development, and validation of marine technologies. It is a multidisciplinary open platform, capable of supporting the research, development, and testing of marine robotics, telecommunications, and sensing technologies for monitoring and operating in the ocean environment. Thanks to its installed research facilities and privileged geographic location, it allows fast access to the deep sea and can support multidisciplinary research, enabling full validation and evaluation of technological solutions designed for the ocean environment. It is a vertically integrated infrastructure, in the sense that it possesses a set of skills and resources ranging from pure conceptual research to field deployment missions, with strong industrial and logistic capacities in the middle tier of prototype production. TEC4SEA is open to the entire scientific and enterprise community, with a free access policy for researchers affiliated with the research units that ensure its maintenance and sustainability. The paper describes the infrastructure in detail and discusses associated research programs, providing a strategic vision for deep sea research initiatives within the context of both the Portuguese National Ocean Strategy and European strategy frameworks.
Abstract:
More data will be produced in the next five years than in the entire history of humankind, a digital deluge that marks the beginning of the Century of Information. Through a year-long consultation with UK researchers, a coherent strategy has been developed which will nurture Century-of-Information Research (CIR); it crystallises the ideas developed by the e-Science Directors' Forum Strategy Working Group. This paper is an abridged version of their latest report, which can be found at http://wikis.nesc.ac.uk/escienvoy/Century_of_Information_Research_Strategy and which also records the consultation process and the affiliations of the authors. This document is derived from a paper presented at the Oxford e-Research Conference 2008 and takes into account suggestions made in the ensuing panel discussion. The goals of the CIR Strategy are to facilitate the growth of UK research and innovation that is data and computationally intensive, and to develop a new culture of 'digital-systems judgement' that will equip research communities, businesses, government and society as a whole with the skills essential to compete and prosper in the Century of Information. The CIR Strategy identifies a national requirement for a balanced programme of coordination, research, infrastructure, translational investment and education to empower UK researchers, industry, government and society. The Strategy is designed to deliver an environment which meets the needs of UK researchers so that they can respond with agility to challenges, can create knowledge and skills, and can lead new kinds of research. It is a call to action for those engaged in research, those providing data and computational facilities, those governing research and those shaping education policies. The ultimate aim is to help researchers strengthen the international competitiveness of the UK research base and increase its contribution to the economy.
The objectives of the Strategy are to better enable UK researchers across all disciplines to contribute world-leading fundamental research; to accelerate the translation of research into practice; and to develop improved capabilities, facilities and context for research and innovation. It envisages a culture that is better able to grasp the opportunities provided by the growing wealth of digital information. Computing has, of course, already become a fundamental tool in all research disciplines. The UK e-Science programme (2001-06), since emulated internationally, pioneered the invention and use of new research methods, and a new wave of innovations in digital-information technologies which have enabled them. The Strategy argues that the UK must now harness and leverage its own, plus the now global, investment in digital-information technology in order to spread the benefits as widely as possible in research, education, industry and government. Implementing the Strategy would deliver the computational infrastructure and its benefits as envisaged in the Science & Innovation Investment Framework 2004-2014 (July 2004), and in the reports developing those proposals. To achieve this, the Strategy proposes the following actions: support the continuous innovation of digital-information research methods; provide easily used, pervasive and sustained e-Infrastructure for all research; enlarge the productive research community which exploits the new methods efficiently; generate capacity, propagate knowledge and develop skills via new curricula; and develop coordination mechanisms to improve the opportunities for interdisciplinary research and to make digital-infrastructure provision more cost effective. To gain the best value for money, strategic coordination is required across a broad spectrum of stakeholders.
A coherent strategy is essential in order to establish and sustain the UK as an international leader of well-curated national data assets and computational infrastructure, which is expertly used to shape policy, support decisions, empower researchers and roll out the results to the wider benefit of society. The value of data as a foundation for wellbeing and a sustainable society must be appreciated; national resources must be more wisely directed to the collection, curation, discovery, widening of access, analysis and exploitation of these data. Every researcher must be able to draw on skills, tools and computational resources to develop insights, test hypotheses and translate inventions into productive use, or to extract knowledge in support of governmental decision making. This foundation, together with the skills developed, will launch significant advances in research, in business, in professional practice and in government, with many consequent benefits for UK citizens. The Strategy presented here addresses these complex and interlocking requirements.
Abstract:
In the framework of the ACTRIS (Aerosols, Clouds, and Trace Gases Research Infrastructure Network) summer 2012 measurement campaign (8 June–17 July 2012), EARLINET organized and performed a controlled feasibility exercise to demonstrate its potential to perform operational, coordinated measurements and deliver products in near-real time. Eleven lidar stations participated in the exercise, which started on 9 July 2012 at 06:00 UT and ended 72 h later on 12 July at 06:00 UT. For the first time, the single calculus chain (SCC) – the common calculus chain developed within EARLINET for the automatic evaluation of lidar data from raw signals up to the final products – was used. All stations sent measurements of 1 h duration to the SCC server in real time, in a predefined netCDF file format. The pre-processing of the data was performed in real time by the SCC, while the optical processing was performed in near-real time after the exercise ended. 98 and 79 % of the files sent to the SCC were successfully pre-processed and processed, respectively. These percentages are quite high considering that no cloud screening was performed on the lidar data. The paper draws present and future SCC users' attention to the most critical parameters of the SCC product configuration and their possible optimal values, but also to the limitations inherent in the raw data. The continuous use of SCC direct and derived products in heterogeneous conditions is used to demonstrate two potential applications of the EARLINET infrastructure: the monitoring of a Saharan dust intrusion event and the evaluation of two dust transport models. The efforts made to define the measurement protocol and to configure the SCC properly pave the way for applying this protocol to specific applications such as the monitoring of special events, atmospheric modelling, climate research and calibration/validation activities of spaceborne observations.