965 results for Metadata
Abstract:
Presentation given at a seminar organized by the KDK usability working group: How do users' expectations challenge our metadata practices? 30.9.2014.
Abstract:
Presentation given at the closing seminar of the ARTIVA project (2013-2014), organized by the National Library of Finland, in Helsinki on 4.2.2015.
Abstract:
Presented at Access 2014; winner of the poster contest.
Abstract:
This article describes the work carried out to adapt metadata conforming to the official Colombian metadata standard NTC 4611 to the international standard ISO 19115. CatMDedit, an open-source metadata editor, is used for this task. CatMDedit is able to import variants of CSDGM such as NTC 4611 and to export to the stable version of ISO 19139 (the XML implementation model of ISO 19115).
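To make the target format concrete, here is a minimal sketch of reading a field out of an ISO 19139-style record with Python's standard library. The gmd/gco namespaces are the published ISO/TC 211 ones; the record content itself is invented for illustration and is not output of CatMDedit.

```python
# Hedged sketch: parse a tiny ISO 19139-style fragment and pull out the title.
import xml.etree.ElementTree as ET

NS = {
    "gmd": "http://www.isotc211.org/2005/gmd",
    "gco": "http://www.isotc211.org/2005/gco",
}

record = """
<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd"
                 xmlns:gco="http://www.isotc211.org/2005/gco">
  <gmd:identificationInfo>
    <gmd:MD_DataIdentification>
      <gmd:citation>
        <gmd:CI_Citation>
          <gmd:title><gco:CharacterString>Example dataset</gco:CharacterString></gmd:title>
        </gmd:CI_Citation>
      </gmd:citation>
    </gmd:MD_DataIdentification>
  </gmd:identificationInfo>
</gmd:MD_Metadata>
"""

root = ET.fromstring(record)
title = root.find(".//gmd:title/gco:CharacterString", NS)
print(title.text)  # -> Example dataset
```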
Abstract:
GeoNetwork opensource is a standards-based, free and open source catalog application for managing spatially referenced resources through the web. It is an OSGeo project initiated by the Food and Agriculture Organization (FAO). The purpose of this presentation is to illustrate the implementation of such a catalog in national projects in France and in Switzerland. First, we will present the Geosource project undertaken by BRGM (http://www.brgm.fr/), which brings together national and local authorities, the national geographic survey, public organisations, and associations in order to provide a metadata catalog for French users: definition of a French ISO profile and support for INSPIRE metadata requirements. Finally, we will present the SwissTopo geocat II project, whose purpose is to develop the next-generation geospatial catalog for SwissTopo on the basis of GeoNetwork opensource. Both projects underline the close collaboration between national authorities and the GeoNetwork opensource community.
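GeoNetwork catalogs are typically queried through their OGC CSW interface. The sketch below sends a plain CSW 2.0.2 GetRecords request with the standard library only; the endpoint URL is a placeholder (GeoNetwork deployments usually expose CSW under a path like /geonetwork/srv/<lang>/csw, but check the instance you target).

```python
# Hedged sketch: a minimal CSW 2.0.2 GetRecords POST against a GeoNetwork endpoint.
import urllib.request

CSW_ENDPOINT = "https://example.org/geonetwork/srv/eng/csw"  # placeholder URL

get_records = """<?xml version="1.0" encoding="UTF-8"?>
<csw:GetRecords xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
                service="CSW" version="2.0.2"
                resultType="results" startPosition="1" maxRecords="10">
  <csw:Query typeNames="csw:Record">
    <csw:ElementSetName>brief</csw:ElementSetName>
  </csw:Query>
</csw:GetRecords>"""

req = urllib.request.Request(
    CSW_ENDPOINT,
    data=get_records.encode("utf-8"),
    headers={"Content-Type": "application/xml"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read()[:500])  # first bytes of the GetRecordsResponse
```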
Abstract:
In this class, we will discuss metadata as well as current phenomena such as tagging and folksonomies. Readings: Ontologies Are Us: A Unified Model of Social Networks and Semantics, P. Mika, International Semantic Web Conference, pp. 522-536, 2005. Optional: Folksonomies: Power to the People, E. Quintarelli, ISKO Italy-UniMIB Meeting, 2005.
Abstract:
Climate modeling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of increasingly complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce this data. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid digital preservation of climate models as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team's development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change's (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.
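The following sketch only illustrates the idea of pairing a generic structure with a controlled vocabulary to restrict which records count as valid; the field names and vocabulary terms are invented, not the actual CIM ones.

```python
# Illustrative sketch: a controlled vocabulary narrows the set of valid records.
CONTROLLED_VOCAB = {
    "activity_type": {"simulation", "experiment", "ensemble"},
    "calendar": {"gregorian", "360_day", "noleap"},
}

def validate(record: dict) -> list[str]:
    """Return a list of controlled-vocabulary violations for a record."""
    errors = []
    for field, allowed in CONTROLLED_VOCAB.items():
        value = record.get(field)
        if value is not None and value not in allowed:
            errors.append(f"{field}={value!r} not in {sorted(allowed)}")
    return errors

print(validate({"activity_type": "simulation", "calendar": "julian"}))
# -> ["calendar='julian' not in ['360_day', 'gregorian', 'noleap']"]
```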
Abstract:
With the growing number and significance of urban meteorological networks (UMNs) across the world, it is becoming critical to establish a standard metadata protocol. Indeed, a review of existing UMNs indicates large variations in the quality, quantity, and availability of metadata containing technical information (e.g., equipment, communication methods) and network practices (e.g., quality assurance/quality control and data management procedures). Without such metadata, the utility of UMNs is greatly compromised. There is a need to bring together the currently disparate sets of guidelines to ensure informed and well-documented future deployments. This should significantly improve the quality, and therefore the applicability, of the high-resolution data available from such networks. Here, the first metadata protocol for UMNs is proposed, drawing on current recommendations for urban climate stations and identified best practice in existing networks.
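As an illustration of the kind of station-level record such a protocol asks for, here is a minimal sketch in Python. The field names are illustrative of the categories mentioned above (equipment, siting, QA/QC), not the published protocol's actual fields.

```python
# Hedged sketch: a station metadata record with equipment, siting and QA/QC fields.
from dataclasses import dataclass, field, asdict

@dataclass
class StationMetadata:
    station_id: str
    latitude: float
    longitude: float
    sensor_height_m: float                                  # instrument mounting height
    instrument_model: str                                    # equipment information
    qa_qc_procedure: str                                     # quality assurance / quality control
    surroundings: list[str] = field(default_factory=list)   # siting notes

record = StationMetadata(
    station_id="BHAM-042", latitude=52.45, longitude=-1.93,
    sensor_height_m=3.0, instrument_model="example-thermohygrometer",
    qa_qc_procedure="range + persistence checks",
    surroundings=["rooftop", "dense urban"],
)
print(asdict(record))
```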
Abstract:
We describe the CHARMe project, which aims to link climate datasets with publications, user feedback and other items of "commentary metadata". The system will help users learn from previous community experience and select datasets that best suit their needs, as well as providing direct traceability between conclusions and the data that supported them. The project applies the principles of Linked Data and adopts the Open Annotation standard to record and publish commentary information. CHARMe contributes to the emerging landscape of "climate services", which will provide climate data and information to influence policy and decision-making. Although the project focuses on climate science, the technologies and concepts are very general and could be applied to other fields.
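Since the project adopts the Open Annotation model, a piece of "commentary metadata" can be pictured as an annotation whose target is a dataset and whose body is a publication commenting on it. The sketch below is plain JSON-LD-shaped Python using the Open Annotation vocabulary; the dataset and paper URIs are placeholders, and this is not the CHARMe system itself.

```python
# Hedged sketch: an Open Annotation resource linking a dataset to a publication.
import json

annotation = {
    "@context": {"oa": "http://www.w3.org/ns/oa#"},
    "@id": "http://example.org/annotations/1",
    "@type": "oa:Annotation",
    "oa:hasTarget": {"@id": "http://example.org/datasets/sst-v2"},      # the dataset
    "oa:hasBody": {"@id": "https://doi.org/10.0000/example-paper"},     # the commentary
    "oa:motivatedBy": {"@id": "oa:linking"},
}

print(json.dumps(annotation, indent=2))
```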
Abstract:
Service discovery in large scale, open distributed systems is difficult because of the need to identify services suited to the task at hand within a potentially huge pool of possibilities. Semantic descriptions have been advocated as the key to expressive service discovery, but the most commonly used service descriptions and registry protocols do not support such descriptions in a general manner. In this paper, we present a protocol, its implementation and an API for registering semantic service descriptions and other task/user-specific metadata, and for discovering services according to these. Our approach is based on a mechanism for attaching structured and unstructured metadata, which we show to be applicable to multiple registry technologies. The result is an extremely flexible service registry that can be the basis of a sophisticated semantically enhanced service discovery engine, an essential component of a Semantic Grid.
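A purely illustrative sketch of the register/discover pattern described here: services are registered with arbitrary structured metadata attached, and discovery filters on that metadata. The class, method names and fields below are hypothetical, not the paper's actual API.

```python
# Hedged sketch: an in-memory registry with metadata-based discovery.
class ServiceRegistry:
    def __init__(self):
        self._services = {}  # service id -> metadata dict

    def register(self, service_id: str, metadata: dict) -> None:
        """Attach structured metadata to a service entry."""
        self._services[service_id] = metadata

    def discover(self, **criteria) -> list[str]:
        """Return ids of services whose metadata matches all criteria."""
        return [
            sid for sid, md in self._services.items()
            if all(md.get(k) == v for k, v in criteria.items())
        ]

registry = ServiceRegistry()
registry.register("http://example.org/blast",
                  {"task": "sequence-alignment", "semantic_type": "bio:Alignment"})
registry.register("http://example.org/render", {"task": "visualisation"})
print(registry.discover(task="sequence-alignment"))
```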
Abstract:
A description of a data item's provenance can be provided in different forms, and which form is best depends on the intended use of that description. Because of this, different communities have made quite distinct underlying assumptions in their models for electronically representing provenance. Approaches deriving from the library and archiving communities emphasise agreed vocabulary by which resources can be described and, in particular, assert their attribution (who created the resource, who modified it, where it was stored, etc.). The primary purpose here is to provide intuitive metadata by which users can search for and index resources. In comparison, models for representing the results of scientific workflows have been developed with the assumption that each event or piece of intermediary data in a process's execution can and should be documented, to give a full account of the experiment undertaken. These occurrences are connected together by stating that one derived from, triggered, or otherwise caused another, and so form a causal graph. Mapping between the two approaches would be beneficial in integrating systems and exploiting the strengths of each. In this paper, we specify such a mapping between Dublin Core and the Open Provenance Model. We further explain the technical issues to overcome and the rationale behind the approach, to allow the same method to apply in mapping similar schemes.
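The kind of mapping described can be pictured as turning a Dublin Core attribution record into an OPM-style causal graph of Artifacts, Processes, Agents and causal edges. The sketch below is a simplified illustration under that reading; the specific correspondences are not taken from the paper's mapping table.

```python
# Hedged sketch: re-express a Dublin Core record as OPM-style causal edges.
def dc_to_opm(resource: str, dc: dict) -> list[tuple]:
    """Return OPM-style edges (subject, relation, object) for a DC record."""
    creation = f"{resource}#creation"                 # hypothetical creation Process
    edges = [(resource, "wasGeneratedBy", creation)]  # the resource is an Artifact
    if "creator" in dc:
        edges.append((creation, "wasControlledBy", dc["creator"]))  # creator as Agent
    if "source" in dc:
        edges.append((resource, "wasDerivedFrom", dc["source"]))    # source as Artifact
    return edges

print(dc_to_opm("doc:report-1", {"creator": "agent:alice", "source": "doc:draft-0"}))
```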