25 results for Metadata

in CentAUR: Central Archive University of Reading - UK


Relevance:

20.00%

Publisher:

Abstract:

Climate modelling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of ever more complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce this data. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid digital preservation of climate models as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team's development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change's (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.
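
A minimal sketch of how a controlled vocabulary can restrict the range of valid CIM instances. The element names and the vocabulary below are invented for illustration and are not the actual METAFOR CIM schema.

```python
# Illustrative only: element names and vocabulary are hypothetical, not the real CIM.
import xml.etree.ElementTree as ET

MODEL_TYPES = {"AOGCM", "ESM", "RCM"}   # a hypothetical controlled vocabulary

cim_instance = """
<modelComponent>
  <shortName>ExampleModel-1</shortName>
  <type>ESM</type>
</modelComponent>
"""

def validate(xml_text):
    """Return a list of problems found in a CIM-like instance."""
    problems = []
    root = ET.fromstring(xml_text)
    model_type = root.findtext("type")
    if model_type not in MODEL_TYPES:
        problems.append(f"type '{model_type}' not in controlled vocabulary")
    if not root.findtext("shortName"):
        problems.append("shortName is missing")
    return problems

print(validate(cim_instance))   # [] when the instance passes both checks
```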

Relevance:

20.00%

Publisher:

Abstract:

With the growing number and significance of urban meteorological networks (UMNs) across the world, it is becoming critical to establish a standard metadata protocol. Indeed, a review of existing UMNs indicates large variations in the quality, quantity, and availability of metadata containing technical information (i.e., equipment, communication methods) and network practices (i.e., quality assurance/quality control and data management procedures). Without such metadata, the utility of UMNs is greatly compromised. There is a need to bring together the currently disparate sets of guidelines to ensure informed and well-documented future deployments. This should significantly improve the quality, and therefore the applicability, of the high-resolution data available from such networks. Here, the first metadata protocol for UMNs is proposed, drawing on current recommendations for urban climate stations and identified best practice in existing networks.
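
As a sketch of the kind of record such a protocol implies, a station's technical information and network practices might be captured as below. The field names are illustrative only and are not the published protocol.

```python
# Field names are illustrative, not the published UMN metadata protocol.
from dataclasses import dataclass, field, asdict

@dataclass
class StationMetadata:
    station_id: str
    latitude: float
    longitude: float
    sensor_model: str        # technical information: equipment
    communication: str       # technical information: telemetry method
    qa_qc_procedure: str     # network practice: quality assurance/control
    data_policy: str         # network practice: data management
    notes: list = field(default_factory=list)

record = StationMetadata(
    station_id="SITE-042",
    latitude=52.45,
    longitude=-1.93,
    sensor_model="hypothetical temperature/humidity probe",
    communication="GPRS",
    qa_qc_procedure="automated range and spike checks",
    data_policy="archived daily to the network database",
)
print(asdict(record))
```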

Relevance:

20.00%

Publisher:

Abstract:

We describe the CHARMe project, which aims to link climate datasets with publications, user feedback and other items of "commentary metadata". The system will help users learn from previous community experience and select datasets that best suit their needs, as well as providing direct traceability between conclusions and the data that supported them. The project applies the principles of Linked Data and adopts the Open Annotation standard to record and publish commentary information. CHARMe contributes to the emerging landscape of "climate services", which will provide climate data and information to influence policy and decision-making. Although the project focuses on climate science, the technologies and concepts are very general and could be applied to other fields.
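
As a rough sketch of the idea, a single item of commentary metadata linking a publication to a dataset might be recorded in the general shape of the annotation model, as below. The URIs are placeholders and the record is heavily simplified relative to what CHARMe would publish.

```python
# A simplified, illustrative annotation record; URIs are placeholders.
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",   # W3C annotation context
    "type": "Annotation",
    "motivation": "linking",
    # body: the item of commentary metadata (here, a publication)
    "body": "https://doi.org/10.xxxx/example-paper",
    # target: the climate dataset the commentary is about
    "target": "https://example.org/datasets/sst-reanalysis-v1",
}

print(json.dumps(annotation, indent=2))
```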

Relevance:

10.00%

Publisher:

Abstract:

Remote sensing can potentially provide information useful in improving pollution transport modelling in agricultural catchments. Realisation of this potential will depend on the availability of the raw data, the development of information extraction techniques, and the impact of assimilating the derived information into models. High spatial resolution hyperspectral imagery of a farm near Hereford, UK is analysed. A technique is described to automatically identify the soil and vegetation endmembers within a field, enabling vegetation fractional cover estimation. Aerially acquired laser altimetry is used to produce digital elevation models of the site. At the subfield scale, the hypothesis that higher-resolution topography will make a substantial difference to contaminant transport is tested using the AGricultural Non-Point Source (AGNPS) model. Slope aspect and direction information are extracted from the topography at different resolutions to study the effects on soil erosion, deposition, runoff and nutrient losses. Field-scale models are often used to model drainage water, nitrate and runoff/sediment loss, but the demanding input data requirements make scaling up to catchment level difficult. By determining the input range of spatial variables gathered from EO data, and comparing the response of models to the range of variation measured, the critical model inputs can be identified. Response surfaces to variation in these inputs constrain uncertainty in model predictions and are presented. Although optical earth observation analysis can provide fractional vegetation cover, cloud cover and semi-random weather patterns can hinder data acquisition in Northern Europe. A Spring and Autumn cloud cover analysis is carried out over seven UK sites close to agricultural districts, using historic satellite image metadata, climate modelling and historic ground weather observations. Results are assessed in terms of probability of acquisition and implications for future earth observation missions. (C) 2003 Elsevier Ltd. All rights reserved.
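
As an illustration of the fractional-cover step, the sketch below applies generic two-endmember linear unmixing: each pixel spectrum is modelled as a mixture of a soil and a vegetation endmember and the vegetation fraction is recovered by least squares. This is not necessarily the technique used in the paper, and the spectra are invented.

```python
# Generic two-endmember unmixing sketch; example spectra are invented.
import numpy as np

def vegetation_fraction(pixels, soil, veg):
    """pixels: (n, bands); soil, veg: (bands,). Returns fractions clipped to [0, 1]."""
    d = veg - soil
    f = (pixels - soil) @ d / (d @ d)   # least-squares mixing fraction per pixel
    return np.clip(f, 0.0, 1.0)

soil = np.array([0.20, 0.25, 0.30, 0.35])      # bright, spectrally flat soil
veg  = np.array([0.05, 0.08, 0.06, 0.45])      # low visible, high NIR reflectance
pixels = np.array([[0.12, 0.16, 0.18, 0.40],
                   [0.19, 0.24, 0.28, 0.36]])
print(vegetation_fraction(pixels, soil, veg))
```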

Relevance:

10.00%

Publisher:

Abstract:

This paper argues for the importance of retaining a map library presence on UK university campuses at a time when many are under threat of closure, and access to geospatial data is increasingly moving to web-based services. It is suggested that the need for local expertise is undiminished and map curators need to redefine themselves as geoinformation specialists, preserving their paper map collections, but also meeting some of the challenges of GIS, and contributing to national developments in the construction of distributed geolibraries and the provision of metadata, especially with regard to local data sets.

Relevance:

10.00%

Publisher:

Abstract:

A self-study course for learning to program using the C programming language has been developed. A Learning Object approach was used in the design of the course. One of the benefits of the Learning Object approach is that the learning material can be reused for different purposes. The course developed is designed so that learners can choose the pedagogical approach most suited to their personal learning requirements. For all learning approaches, a set of common Assessment Learning Objects (ALOs, or tests) has been created. The design of formative assessments with ALOs can be carried out by the Instructional Designer grouping ALOs to correspond to a specific assessment intention. The course is non-credit earning, so there is no summative assessment; all assessment is formative. In this paper, examples of ALOs are presented together with their uses as decided by the Instructional Designer and learner. Personalisation of the formative assessment of skills can be decided by the Instructional Designer or the learner using a repository of pre-designed ALOs. The process of combining ALOs can be carried out manually or in a semi-automated way using metadata that describes the ALO and the skill it is designed to assess.
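
A small sketch of the semi-automated combination step: ALOs are selected from a repository by the skill each is tagged as assessing. The metadata fields and ALO identifiers are invented for illustration.

```python
# Hypothetical ALO repository; field names are illustrative only.
alo_repository = [
    {"id": "alo-01", "skill": "pointers",  "difficulty": "intro"},
    {"id": "alo-02", "skill": "pointers",  "difficulty": "advanced"},
    {"id": "alo-03", "skill": "arrays",    "difficulty": "intro"},
    {"id": "alo-04", "skill": "recursion", "difficulty": "intro"},
]

def assemble_assessment(skills, difficulty=None):
    """Group ALOs whose metadata matches a specific assessment intention."""
    return [alo for alo in alo_repository
            if alo["skill"] in skills
            and (difficulty is None or alo["difficulty"] == difficulty)]

print(assemble_assessment({"pointers", "arrays"}, difficulty="intro"))
```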

Relevance:

10.00%

Publisher:

Abstract:

This paper presents the on-going research performed in order to integrate process automation and process management support in the context of media production. This has been addressed on the basis of a holistic approach to software engineering applied to media production modelling to ensure design correctness, completeness and effectiveness. The focus of the research and development has been to enhance the metadata management throughout the process in a similar fashion to that achieved in Decision Support Systems (DSS) to facilitate well-grounded business decisions. The paper sets out the aims and objectives and the methodology deployed. The paper describes the solution in some detail and sets out some preliminary conclusions and the planned future work.

Relevance:

10.00%

Publisher:

Abstract:

The iRODS system, created by the San Diego Supercomputing Centre, is a rule-oriented data management system that allows the user to create sets of rules to define how the data is to be managed. Each rule corresponds to a particular action or operation (such as checksumming a file) and the system is flexible enough to allow the user to create new rules for new types of operations. The iRODS system can interface to any storage system (provided an iRODS driver is built for that system) and relies on its metadata catalogue to provide a virtual file-system that can handle files of any size and type. However, some storage systems (such as tape systems) do not handle small files efficiently and prefer small files to be packaged up (or "bundled") into larger units. We have developed a system that can bundle small data files of any type into larger units, known as mounted collections. The system can create collection families and contains its own extensible metadata, including metadata on which family the collection belongs to. The mounted collection system can work standalone and is being incorporated into the iRODS system to enhance the system's flexibility in handling small files. In this paper we describe the motivation for creating a mounted collection system, its architecture and how it has been incorporated into the iRODS system. We describe the different technologies used to create the mounted collection system and provide some performance numbers.
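
A hedged sketch of the bundling idea, not the actual mounted-collection implementation or iRODS driver: small files are packed into one larger unit that carries its own metadata, including the collection family it belongs to. File names and the metadata layout are invented.

```python
# Illustrative bundling sketch; not the iRODS mounted-collection code.
import json
import tarfile
from pathlib import Path

def bundle(files, collection_path, family):
    """Pack small files into a tar 'collection' plus a metadata sidecar."""
    with tarfile.open(collection_path, "w") as tar:
        for f in files:
            tar.add(f, arcname=Path(f).name)
    metadata = {
        "family": family,                          # which collection family
        "members": [Path(f).name for f in files],  # extensible member listing
        "count": len(files),
    }
    Path(str(collection_path) + ".meta.json").write_text(json.dumps(metadata))

# Example (hypothetical file names):
# bundle(["obs_001.dat", "obs_002.dat"], "family_a_0001.tar", family="family_a")
```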

Relevance:

10.00%

Publisher:

Abstract:

Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the process of storage, indexing and retrieval of large data sets of special effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test bed partners' creative processes. (C) 2009 Published by Elsevier B.V.
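
As an illustration of semantic rather than keyword retrieval, the sketch below uses a toy concept hierarchy standing in for the DREAM network of ontologies: a query concept is expanded over subclass links so that clips indexed under narrower concepts are still found. Clip identifiers and concepts are invented.

```python
# Toy stand-in for an ontology-backed index; all names are hypothetical.
SUBCLASSES = {                      # concept -> narrower concepts
    "explosion": {"fireball", "debris_cloud"},
    "fireball": set(),
    "debris_cloud": set(),
}

CLIP_INDEX = {                      # clip id -> concepts it was indexed under
    "clip_0001": {"fireball"},
    "clip_0002": {"debris_cloud", "smoke"},
    "clip_0003": {"rain"},
}

def expand(concept):
    """All concepts reachable from 'concept' via subclass links."""
    result, stack = {concept}, [concept]
    while stack:
        for child in SUBCLASSES.get(stack.pop(), set()):
            if child not in result:
                result.add(child)
                stack.append(child)
    return result

def retrieve(concept):
    wanted = expand(concept)
    return [clip for clip, tags in CLIP_INDEX.items() if tags & wanted]

print(retrieve("explosion"))   # finds clips indexed under narrower concepts
```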

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a prototype grid infrastructure, called the eMinerals minigrid, for molecular simulation scientists, which is based on an integration of shared compute and data resources. We describe the key components, namely the use of Condor pools, Linux/Unix clusters with PBS and IBM's LoadLeveller job handling tools, the use of Globus for security handling, the use of Condor-G tools for wrapping Globus job submission commands, Condor's DAGMan tool for handling workflow, the Storage Resource Broker for handling data, and the CCLRC dataportal and associated tools for both archiving data with metadata and making data available to other workers.
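
A hedged sketch of the workflow-handling step: a small helper that writes a DAGMan-style input file expressing "run two simulations, then archive the results". The job names and .sub files are hypothetical; in the minigrid each submit description would wrap a Globus job submission via Condor-G.

```python
# Illustrative only: job names and submit files are hypothetical.
from pathlib import Path

def write_dag(path, jobs, dependencies):
    """jobs: {name: submit_file}; dependencies: [(parents, children), ...]."""
    lines = [f"JOB {name} {sub}" for name, sub in jobs.items()]
    for parents, children in dependencies:
        lines.append(f"PARENT {' '.join(parents)} CHILD {' '.join(children)}")
    Path(path).write_text("\n".join(lines) + "\n")

write_dag(
    "eminerals.dag",
    jobs={"sim_a": "sim_a.sub", "sim_b": "sim_b.sub", "archive": "archive.sub"},
    dependencies=[(["sim_a", "sim_b"], ["archive"])],
)
# The resulting file would then be handed to DAGMan, e.g. via condor_submit_dag.
```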

Relevance:

10.00%

Publisher:

Abstract:

There is remarkable agreement in expectations today for vastly improved ocean data management a decade from now, with capabilities that will help to bring significant benefits to ocean research and to society. Advancing data management to such a degree, however, will require cultural and policy changes that are slow to effect. The technological foundations upon which data management systems are built are certain to continue advancing rapidly in parallel. These considerations argue for adopting attitudes of pragmatism and realism when planning data management strategies. In this paper we adopt those attitudes as we outline opportunities for progress in ocean data management. We begin with a synopsis of expectations for integrated ocean data management a decade from now. We discuss factors that should be considered by those evaluating candidate "standards". We highlight challenges and opportunities in a number of technical areas, including "Web 2.0" applications, data modeling, data discovery and metadata, real-time operational data, archival of data, biological data management and satellite data management. We discuss the importance of investments in the development of software toolkits to accelerate progress. We conclude the paper by recommending a few specific, short-term targets for implementation that we believe to be both significant and achievable, and by calling for action by community leadership to effect these advancements.

Relevance:

10.00%

Publisher:

Abstract:

Much consideration is rightly given to the design of metadata models to describe data. At the other end of the data-delivery spectrum much thought has also been given to the design of geospatial delivery interfaces such as the Open Geospatial Consortium standards, Web Coverage Service (WCS), Web Map Server and Web Feature Service (WFS). Our recent experience with the Climate Science Modelling Language shows that an implementation gap exists where many challenges remain unsolved. To bridge this gap requires transposing information and data from one world view of geospatial climate data to another. Some of the issues include: the loss of information in mapping to a common information model, the need to create ‘views’ onto file-based storage, and the need to map onto an appropriate delivery interface (as with the choice between WFS and WCS for feature types with coverage-valued properties). Here we summarise the approaches we have taken in facing up to these problems.
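
As a sketch of one "view" onto file-based storage: a NetCDF variable can be summarised as a minimal coverage description that a WCS-style delivery interface could serve. The file and variable names below are invented, and a real CSML mapping carries far richer information, which is exactly where the loss-of-information problem described above arises.

```python
# Illustrative only: file/variable names are hypothetical; assumes the
# netCDF4 package is installed.
from netCDF4 import Dataset

def coverage_view(path, var_name):
    """Build a minimal coverage-like summary of one NetCDF variable."""
    with Dataset(path) as nc:
        var = nc.variables[var_name]
        return {
            "identifier": var_name,
            "dimensions": list(var.dimensions),
            "shape": tuple(var.shape),
            "units": getattr(var, "units", None),  # detail easily lost in mapping
        }

# Example (hypothetical file):
# print(coverage_view("tas_monthly.nc", "tas"))
```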