21 results for Data access
in CentAUR: Central Archive University of Reading - UK
Abstract:
The CGIAR System conducts research to produce international public goods (IPGs) of wide applicability, creating a scientific base that speeds and broadens local adaptive development. Integrated natural resources management (INRM) research is sometimes seen as highly location-specific and consequently ill-suited to the production of IPGs. In this paper we analyse ways in which strategic approaches to INRM research can have broad international applicability and serve as useful foundations for the development of locally adapted technologies. The paper describes the evolution of the IPG concept within the CGIAR and elaborates on five major types of IPGs that have been generated from a varied set of recent INRM research efforts. CGIAR networks have both strengths and weaknesses in INRM research and application, with enormous differences among partners in relative research and development capacities, responsibilities and data access, making the evolution of programme processes critical to acceptance and participation. Many of the lessons learnt about these challenges and the corresponding IPG research approaches are relevant to designing and managing future multi-scale, multi-locational, coordinated INRM programmes that involve broad-based partnerships to address complex environmental and livelihood problems for development.
Abstract:
GODIVA2 is a dynamic website that provides visual access to several terabytes of physically distributed, four-dimensional environmental data. It allows users to explore large datasets interactively without the need to install new software or download and understand complex data. Through the use of open international standards, GODIVA2 maintains a high level of interoperability with third-party systems, allowing diverse datasets to be mutually compared. Scientists can use the system to search for features in large datasets and to diagnose the output from numerical simulations and data processing algorithms. Data providers around Europe have adopted GODIVA2 as an INSPIRE-compliant dynamic quick-view system for providing visual access to their data.
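The abstract does not name the open standards GODIVA2 relies on; the OGC Web Map Service (WMS) is the kind of standard typically exposed by such quick-view systems. As an illustration only, here is a minimal sketch of building a WMS 1.3.0 GetMap request for one time slice of a four-dimensional dataset; the endpoint and layer name below are hypothetical.

```python
from urllib.parse import urlencode

# Hypothetical WMS endpoint and layer name, for illustration only.
BASE_URL = "https://example.org/wms"

def getmap_url(layer, bbox, time, width=512, height=512):
    """Build an OGC WMS 1.3.0 GetMap request for one time slice."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": "CRS:84",                      # lon/lat axis order
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": str(width),
        "HEIGHT": str(height),
        "FORMAT": "image/png",
        "TIME": time,                         # ISO 8601 slice of the time axis
    }
    return BASE_URL + "?" + urlencode(params)

url = getmap_url("ocean/sea_water_temperature",
                 (-180, -90, 180, 90),
                 "2009-06-01T00:00:00Z")
print(url)
```

Because the request is just a URL built from standard key-value parameters, any WMS-aware client can issue it with no special software installed, which is the interoperability property the abstract describes.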
Abstract:
Routine milk recording data, often covering many years, are available for approximately half the dairy herds of England and Wales. In addition to milk yield and quality, these data include production events that can be used to derive objective Key Performance Indicators (KPIs) describing a herd's fertility and production. Recent developments in information systems give veterinarians and other technical advisers access to these KPIs on-line. In addition to reviewing individual herd performance, advisers can establish local benchmark groups to demonstrate the relative performance of similar herds in the vicinity. The use of existing milk recording data places no additional demands on farmers' time or resources. These developments could also readily be exploited by universities to introduce veterinary undergraduates to the realities of commercial dairy production.
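To illustrate the kind of KPI derivation and local benchmarking described above, here is a minimal sketch in Python; the herds, the calving-interval figures and the choice of KPI are all hypothetical, not drawn from any milk recording system.

```python
from statistics import mean

# Hypothetical herds: id -> calving intervals (days) derived from
# production events recorded over several lactations.
herds = {
    "A": [365, 390, 402, 378],
    "B": [410, 425, 398, 440],
    "C": [370, 360, 385, 375],
    "D": [400, 415, 395, 405],
}

def kpi_calving_interval(intervals):
    """KPI: mean calving interval for the herd, in days (lower is better)."""
    return mean(intervals)

def benchmark(target, group):
    """Rank one herd's KPI within a local benchmark group of similar herds."""
    scores = {h: kpi_calving_interval(iv) for h, iv in group.items()}
    ranked = sorted(scores, key=scores.get)   # best (lowest) first
    return scores[target], ranked.index(target) + 1, len(ranked)

score, rank, n = benchmark("A", herds)
print(f"Herd A: {score:.0f} days, rank {rank} of {n}")  # Herd A: 384 days, rank 2 of 4
```

An adviser reviewing herd A would see both its absolute KPI and its standing among the local group, which is the comparison the on-line systems make available without extra effort from the farmer.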
Abstract:
In this paper we propose an enhanced relay-enabled distributed coordination function (rDCF) for wireless ad hoc networks. The idea of rDCF is to use high data rate nodes as relays for low data rate nodes. The relay helps to increase throughput and to lower the overall blocking time of nodes, because dual-hop transmission via the relay is faster. rDCF achieves higher throughput than the IEEE 802.11 distributed coordination function (DCF). The protocol is further enhanced for higher throughput and reduced energy consumption. These enhancements result from the use of a dynamic preamble (i.e. a short preamble for the relay transmission) and from reducing unnecessary overhearing by nodes not involved in the transmission. We have modelled the energy consumption of rDCF, showing that it is 21.7% more energy efficient than 802.11 DCF at 50 nodes. Compared with the existing rDCF, the enhanced rDCF (ErDCF) scheme proposed in this paper yields a throughput improvement of 16.54% (at a packet length of 1000 bytes) and an energy saving of 53% at 50 nodes.
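The relay gain rests on simple arithmetic: two fast hops can take less airtime than one slow direct hop, since relaying helps whenever 1/R1 + 1/R2 < 1/Rd. Here is a minimal sketch of that check, ignoring MAC/PHY overheads (which the dynamic-preamble and overhearing enhancements address); the 802.11b rates and packet size are illustrative, and this is not an implementation of rDCF itself.

```python
def tx_time(bits, rate_mbps):
    """Airtime in microseconds for a payload, ignoring MAC/PHY overheads."""
    return bits / rate_mbps

def relay_helps(direct_rate, rate1, rate2, payload_bits=8000):
    """True if two fast hops via a relay beat one slow direct hop
    (default payload: a 1000-byte packet = 8000 bits)."""
    direct = tx_time(payload_bits, direct_rate)
    relayed = tx_time(payload_bits, rate1) + tx_time(payload_bits, rate2)
    return relayed < direct

# 802.11b rates: a 2 Mbit/s direct link vs two 11 Mbit/s hops via a relay.
print(relay_helps(direct_rate=2, rate1=11, rate2=11))   # True: dual-hop wins
print(relay_helps(direct_rate=11, rate1=11, rate2=11))  # False: no gain on a fast link
```

In the first case the direct link needs 4000 µs while the two relayed hops together need about 1455 µs, which is why a node with a poor direct channel both finishes sooner and blocks its neighbours for less time.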
Abstract:
The ability to display and inspect powder diffraction data quickly and efficiently is a central part of the data analysis process. Whilst many computer programs are capable of displaying powder data, their focus is typically on advanced operations such as structure solution or Rietveld refinement. This article describes a lightweight software package, Jpowder, whose focus is fast and convenient visualization and comparison of powder data sets in a variety of formats from computers with network access. Jpowder is written in Java and uses its associated Web Start technology to allow ‘single-click deployment’ from a web page, http://www.jpowder.org. Jpowder is open source, free and available for use by anyone.
Abstract:
In the decade since OceanObs '99, great advances have been made in the field of ocean data dissemination. The use of Internet technologies has transformed the landscape: users can now find, evaluate and access data rapidly and securely using only a web browser. This paper describes the current state of the art in dissemination methods for ocean data, focussing particularly on ocean observations from in situ and remote sensing platforms. We discuss current efforts being made to improve the consistency of delivered data and to increase the potential for automated integration of diverse datasets. An important recent development is the adoption of open standards from the Geographic Information Systems community; we discuss the current impact of these new technologies and their future potential. We conclude that new approaches will indeed be necessary to exchange data more effectively and forge links between communities, but these approaches must be evaluated critically through practical tests, and existing ocean data exchange technologies must be used to their best advantage. Investment in key technology components, cross-community pilot projects and the enhancement of end-user software tools will be required in order to assess and demonstrate the value of any new technology.
Abstract:
Researchers often experience difficulties in negotiating access into firms for the purpose of data collection. The questions we explore are: what are the main obstacles associated with negotiating access into firms, and what strategies do researchers employ to increase their chances of success? Our research on the tendering process of contractors took place between 2006 and 2008. We successfully negotiated access into four firms (two each in Ghana and the UK) to observe live examples of tender preparation. The techniques we employed in negotiating access included personal contacts and approaching firms through their online details and professional institutions. Despite all this effort, our average success rate was less than 5 per cent. The main obstacles were firms' reluctance because of commercial sensitivity and fear that the data could eventually be divulged to their competitors or end up in the public domain. However, some firms agreed, mainly because of written assurances of confidentiality and anonymity in reporting the study; the reputation of the researchers' academic institution; gatekeepers who spoke to their colleagues on our behalf; the academic purpose of the study; and a feedback report promised in return for access to the case studies. Although access through personal contacts is by far the easiest route, it is not always possible: researchers may have to approach firms as complete strangers, especially in a foreign country, and that can make the firms less likely to assist the research.
Abstract:
Climate-G is a large-scale distributed testbed devoted to climate change research. It is an unfunded effort, started in 2008, involving a wide community in both Europe and the US. The testbed is an interdisciplinary effort involving partners from several institutions and joining expertise in the fields of climate change and computational science. Its main goal is to allow scientists to carry out geographical and cross-institutional data discovery, access, analysis, visualization and sharing of climate data. It represents an attempt to address challenging data and metadata management issues in a real environment. This paper presents a complete overview of the Climate-G testbed, highlighting the most important results achieved since the beginning of the project.
Abstract:
This paper reports the findings from two large-scale national on-line surveys, carried out in 2009 and 2010, which explored the state of history teaching in English secondary schools. Large variation in provision was identified within comprehensive schools in response to national policy decisions and initiatives. Using the data from the surveys and publicly available school-level data, this study examines situated factors, particularly the nature of the school intake, the number of pupils with special educational needs and the socio-economic status of the area surrounding the school, and the impact these have on the provision of history education. The findings show a growing divide between those students who have access to the ‘powerful knowledge’ provided by subjects like history and those who do not.
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform data mining and other analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data that is used to populate the second component, and a data warehouse that contains important molecular properties. These properties may be used for data mining studies. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: firstly, how grid data management technologies can be used to access the distributed data warehouses; and secondly, how the grid can be used to transfer analysis programs to the primary repositories — this is an important and challenging aspect of P-found, due to the large data volumes involved and the desire of scientists to maintain control of their own data. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling scientific discovery.
Abstract:
This Editorial presents the focus, scope and policies of the inaugural issue of Nature Conservation, a new open-access, peer-reviewed journal bridging the natural sciences, the social sciences and hands-on applications in conservation management. The journal covers all aspects of nature conservation and aims particularly at facilitating better interaction between scientists and practitioners. The journal will impose no restrictions on manuscript size or the use of colour. We will use an XML-based editorial workflow and several cutting-edge innovations in publishing and information dissemination, including semantic mark-up of, and enhancements to, published text and data, and extensive cross-linking within the journal and to external sources. We believe the journal will make an important contribution to linking science and practice, offering rapid, peer-reviewed and flexible publication for authors and unrestricted access to content.