879 results for "metadata standards"


Relevance:

70.00%

Publisher:

Abstract:

Metadata is data that fully describes a dataset and the domain it represents, allowing users to make the best possible decisions about its use. It makes it possible to report on the existence of data relevant to specific needs. Metadata is used to document and organise an institution's structured data, minimising the duplicated effort of locating it and simplifying maintenance. It also supports the administration of large volumes of data, along with discovery, retrieval and editing facilities. The global use of metadata standards is governed by technical groups or task forces drawn from several sectors, such as industry, universities and research firms. Agriculture in particular offers good examples of typical metadata applications: integrating systems and equipment enables the techniques used in precision agriculture, and integrating different computer systems via web services or similar solutions requires the integration of structured data. The purpose of this paper is to present an overview of consolidated metadata standards in the agricultural domain.

Relevance:

60.00%

Publisher:

Abstract:

The Web has become an indispensable tool for modern society. The ability to access enormous amounts of information from virtually anywhere in the world is a great advantage in our lives. However, the overwhelming amount of information available creates a problem: finding the information we need amid a great deal of irrelevant information. To help with this task, powerful online search engines were created that scour the Web for the best results, according to their own criteria, for the data we need. Currently, the search engines in vogue use a simple result-presentation format, consisting only of a text box where the user enters keywords on the topic to be searched; the results are laid out as a list of hyperlinks ordered by the relevance the engine assigns to each one. However, there are other ways to present results. One alternative is to present them through three-dimensional interfaces. This work focuses on such systems: search engines with 3D interfaces. The problem is that Web pages are not prepared to be consumed by this type of search engine. To solve this, a general-purpose model for Web pages was built that can satisfy the requirements of the several variants of these engines. An automatic instantiation prototype was also developed, which collects the necessary information from Web pages and populates the model.
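The thesis does not publish the model's schema here, so as a rough illustration of the automatic instantiation step, the sketch below collects a page's title, meta description and outgoing links into a simple model (the field names are made up for the example):

```python
from html.parser import HTMLParser

class PageModelBuilder(HTMLParser):
    """Collects a minimal page model: title, meta description, links."""

    def __init__(self):
        super().__init__()
        self.model = {"title": "", "description": "", "links": []}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.model["description"] = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            self.model["links"].append(attrs["href"])

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.model["title"] += data

def instantiate(html: str) -> dict:
    """Fill the page model from raw HTML."""
    builder = PageModelBuilder()
    builder.feed(html)
    return builder.model
```

A real crawler would also extract headings, images and structural metadata, but the pattern of walking the markup and filling a fixed model is the same.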

Relevance:

40.00%

Publisher:

Abstract:

Climate modeling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of ever more complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce it. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid digital preservation of climate models, as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team's development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change's (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.
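As a loose illustration of how a controlled vocabulary can restrict the range of valid CIM instances, the sketch below checks a CIM-like XML fragment against a made-up calendar vocabulary (the element name and the vocabulary terms are illustrative, not taken from the actual CIM schemas):

```python
import xml.etree.ElementTree as ET

# Hypothetical controlled vocabulary restricting one field of the record.
CALENDAR_VOCAB = {"realCalendar", "noLeap", "360-day"}

def validate_record(xml_text: str) -> list:
    """Return a list of vocabulary violations found in a CIM-like record."""
    errors = []
    root = ET.fromstring(xml_text)
    for cal in root.iter("calendar"):
        if cal.text not in CALENDAR_VOCAB:
            errors.append(f"calendar '{cal.text}' not in controlled vocabulary")
    return errors
```

The generic schema accepts any string in the element; the vocabulary check is a separate, pluggable layer, which is what lets the same structure serve many modelling groups.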

Relevance:

40.00%

Publisher:

Abstract:

Aeronautical information plays an essential role in air safety, the chief objective of the aeronautical industry. Community policies and projects are currently being developed for the adequate management of a single European airspace. To make this possible, appropriate information management and a set of tools that allow this information to be shared and exchanged, ensuring its interoperability and integrity, are necessary. This paper presents the development and implementation of a metadata profile for the description of aeronautical information, based on international regulations and recommendations applicable within the geographic scope. The elements taken into account in its development are described, as well as the implementation process and the results obtained.

Relevance:

30.00%

Publisher:

Abstract:

The content of a Learning Object is frequently characterized by metadata from several standards, such as LOM, SCORM and QTI. Specialized domains require new application profiles that further complicate the task of editing the metadata of learning objects, since their data models are not supported by existing authoring tools. To cope with this problem we designed a metadata editor supporting multiple metadata languages, each with its own data model. It is assumed that the supported languages have an XML binding, and we use RDF to create a common metadata representation, independent of the syntax of each metadata language. The combined data model supported by the editor is defined as an ontology. Thus, the process of extending the editor to support a new metadata language is twofold: firstly, the conversion from the XML binding of the metadata language to RDF and vice versa; secondly, the extension of the ontology to cover the new metadata model. In this paper we describe the general architecture of the editor, explain how a typical metadata language for learning objects is represented as an ontology, and show how this formalization captures all the data required to generate the graphical user interface of the editor.
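A minimal sketch of the first half of that extension process, converting an XML-bound metadata fragment into RDF-style triples. The namespace and element names are illustrative, not the real LOM binding, and a full converter would handle nesting, datatypes and language tags:

```python
import xml.etree.ElementTree as ET

BASE = "http://example.org/lom#"  # hypothetical namespace for the binding

def xml_to_triples(xml_text: str, subject: str) -> set:
    """Flatten a flat XML metadata fragment into (subject, predicate, object)
    triples, one per child element with text content."""
    root = ET.fromstring(xml_text)
    triples = set()
    for child in root:
        if child.text and child.text.strip():
            triples.add((subject, BASE + child.tag, child.text.strip()))
    return triples
```

The reverse direction (RDF back to the XML binding) is the other half of the mapping; keeping both directions lossless is what allows the editor to remain syntax-independent internally.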

Relevance:

30.00%

Publisher:

Abstract:

Panel at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

GeoNetwork opensource is a standards-based, free and open source catalog application for managing spatially referenced resources through the web. It is an OSGeo project initiated by the Food and Agriculture Organization (FAO). The purpose of this presentation is to illustrate the implementation of such a catalog in national projects in France and Switzerland. First, we will present the Geosource project undertaken by BRGM (http://www.brgm.fr/), gathering national and local authorities, the national geographic survey, public organisations and associations in order to provide a metadata catalog for French users: definition of a French ISO profile and support for INSPIRE metadata requirements. Second, we will present the SwissTopo geocat II project, whose purpose is to develop the next-generation geospatial catalog for SwissTopo on the basis of GeoNetwork opensource. Both projects underline the close collaboration between national authorities and the GeoNetwork opensource community.

Relevance:

30.00%

Publisher:

Abstract:

ISO19156 Observations and Measurements (O&M) provides a standardised framework for organising information about the collection of environmental observations. Here we describe the implementation of a specialisation of O&M for environmental data, the Metadata Objects for Linking Environmental Sciences (MOLES3). MOLES3 provides support for organising information about data and for user navigation around data holdings. The implementation described here, "CEDA-MOLES", also supports data management functions for the Centre for Environmental Data Archival (CEDA). The previous iteration of MOLES (MOLES2) saw active use over five years before being replaced by CEDA-MOLES in late 2014. During that period important lessons were learnt both about the information needed and about how to design and maintain the necessary information systems. In this paper we review the problems encountered in MOLES2; how and why CEDA-MOLES was developed and engineered; the migration of information holdings from MOLES2 to CEDA-MOLES; and, finally, provide an early assessment of MOLES3 (as implemented in CEDA-MOLES) and its limitations. Key drivers for the MOLES3 development included the necessity for improved data provenance, for further structured information to support ISO19115 discovery metadata export (for EU INSPIRE compliance), and for appropriate fixed landing pages for Digital Object Identifiers (DOIs) in the presence of evolving datasets. Key lessons learned included the importance of minimising information structure in free-text fields, and the necessity of supporting as much agility in the information infrastructure as possible without compromising maintainability, both for those using the systems internally and externally (e.g. citing into the information infrastructure) and for those responsible for the systems themselves. The migration itself needed to ensure continuity of service and traceability of archived assets.
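For readers unfamiliar with the O&M pattern that MOLES3 specialises, a minimal sketch of an observation record is shown below. The field names loosely follow the O&M concepts (feature of interest, observed property, procedure, result); this is not the CEDA-MOLES data model:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    """Minimal O&M-style observation (field names illustrative)."""
    feature_of_interest: str   # the real-world thing observed
    observed_property: str     # which property of it was measured
    procedure: str             # the sensor or method that produced the value
    phenomenon_time: datetime  # when the value applies
    result: float              # the observed value itself

# Example instance with made-up values.
obs = Observation("Heathrow", "air_temperature", "thermometer-42",
                  datetime(2014, 6, 1, 12, 0), 18.3)
```

O&M's contribution is exactly this separation of concerns: the same structure describes a thermometer reading or a satellite retrieval, which is what makes it a useful base for a cross-domain metadata model.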

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, the popularity of the Web encourages the development of hypermedia systems dedicated to e-learning. Nevertheless, most available Web teaching systems apply traditional paper-based learning resources presented as HTML pages, making no use of the new capabilities provided by the Web. One challenge is to develop educational systems that adapt their content to the learning style, context and background of each student. Another research issue is the capacity to interoperate on the Web by reusing learning objects. This work presents an approach that addresses these two issues using Semantic Web technologies. The approach models the knowledge of the educational content and the learner's profile with ontologies whose vocabularies refine those defined in standards situated on the Web as reference points to provide semantics. Ontologies enable the representation of metadata concerning simple learning objects and the rules that define how they can feasibly be assembled into more complex ones. These complex learning objects can be created dynamically, according to the learner's profile, by intelligent agents that use the ontologies as the source of their beliefs. Interoperability issues were addressed by using an application profile of the IEEE LOM (Learning Object Metadata) standard.

Relevance:

30.00%

Publisher:

Abstract:

Introduction: The interchange of authority records requires establishing and using metadata standards such as the MARC 21 Format for Authority Data, used by several cataloging agencies, and the Metadata Authority Description Schema (MADS), which has received little attention and is not yet widespread among agencies. Purpose: To present an introductory study of the Metadata Authority Description Schema (MADS). Methodology: Descriptive and exploratory bibliographic research. Results: The paper addresses the context of MADS's creation, its goals and structure, and key issues related to the conversion of records from MARC 21 to MADS. Conclusions: The study concludes that, despite its limitations, MADS can be used to create simple authority records in the Web environment and beyond the library context.
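A minimal sketch of the MARC 21 to MADS conversion the paper discusses, covering only a personal-name heading (MARC field 100, subfield $a); real conversions handle many more fields, subfields and cross-references:

```python
import xml.etree.ElementTree as ET

def marc100_to_mads(personal_name: str) -> str:
    """Map a MARC 21 100 $a heading to a minimal MADS authority record.
    Only the personal-name case is covered in this sketch."""
    mads = ET.Element("mads")
    authority = ET.SubElement(mads, "authority")
    name = ET.SubElement(authority, "name", type="personal")
    part = ET.SubElement(name, "namePart")
    part.text = personal_name
    return ET.tostring(mads, encoding="unicode")
```

The mads/authority/name/namePart nesting mirrors the MADS design of an authorised form wrapped in an authority element, with variants and notes added as sibling elements in full records.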

Relevance:

30.00%

Publisher:

Abstract:

Within the European Union, member states are setting up official data catalogues as entry points to access PSI (Public Sector Information). In this context, it is important to describe the metadata of these data portals, i.e., of data catalogs, and allow for interoperability among them. To tackle these issues, the Government Linked Data Working Group developed DCAT (Data Catalog Vocabulary), an RDF vocabulary for describing the metadata of data catalogs. This topic report analyzes the current use of the DCAT vocabulary in several European data catalogs and proposes some recommendations to deal with an inconsistent use of the metadata across countries. The enrichment of such metadata vocabularies with multilingual descriptions, as well as an account for cultural divergences, is seen as a necessary step to guarantee interoperability and ensure wider adoption.
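A minimal DCAT description of a catalog, rendered as Turtle, to show the kind of metadata the report analyses. Only a title and dataset links are included; real catalog records also carry publisher, themes, distributions and multilingual descriptions:

```python
def dcat_catalog_ttl(catalog_uri: str, title: str, dataset_uris: list) -> str:
    """Render a minimal dcat:Catalog description as Turtle text."""
    lines = [
        "@prefix dcat: <http://www.w3.org/ns/dcat#> .",
        "@prefix dct:  <http://purl.org/dc/terms/> .",
        "",
        f"<{catalog_uri}> a dcat:Catalog ;",
        f'    dct:title "{title}" ;',
    ]
    # Each dataset in the catalog is linked via dcat:dataset.
    refs = " ,\n        ".join(f"<{u}>" for u in dataset_uris)
    lines.append(f"    dcat:dataset {refs} .")
    return "\n".join(lines)
```

Harmonising which properties are filled in, and in which languages, is precisely the interoperability problem the report identifies across national portals.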

Relevance:

30.00%

Publisher:

Abstract:

Although the importance of dataset fitness-for-use evaluation and intercomparison is widely recognised within the GIS community, no practical tools have yet been developed to support such interrogation. GeoViQua aims to develop a GEO label which will visually summarise and allow interrogation of key informational aspects of geospatial datasets upon which users rely when selecting datasets for use. The proposed GEO label will be integrated in the Global Earth Observation System of Systems (GEOSS) and will be used as a value and trust indicator for datasets accessible through the GEO Portal. As envisioned, the GEO label will act as a decision support mechanism for dataset selection and thereby hopefully improve user recognition of the quality of datasets. To date we have conducted 3 user studies to (1) identify the informational aspects of geospatial datasets upon which users rely when assessing dataset quality and trustworthiness, (2) elicit initial user views on a GEO label and its potential role, and (3) evaluate prototype label visualisations. Our first study revealed that, when evaluating quality of data, users consider 8 facets: dataset producer information; producer comments on dataset quality; dataset compliance with international standards; community advice; dataset ratings; links to dataset citations; expert value judgements; and quantitative quality information. Our second study confirmed the relevance of these facets in terms of the community-perceived function that a GEO label should fulfil: users and producers of geospatial data supported the concept of a GEO label that provides a drill-down interrogation facility covering all 8 informational aspects. Consequently, we developed three prototype label visualisations and evaluated their comparative effectiveness and user preference via a third user study to arrive at a final graphical GEO label representation.
When integrated in the GEOSS, an individual GEO label will be provided for each dataset in the GEOSS clearinghouse (or other data portals and clearinghouses) based on its available quality information. Producer and feedback metadata documents are being used to dynamically assess information availability and generate the GEO labels. The producer metadata document can either be a standard ISO compliant metadata record supplied with the dataset, or an extended version of a GeoViQua-derived metadata record, and is used to assess the availability of a producer profile, producer comments, compliance with standards, citations and quantitative quality information. GeoViQua is also currently developing a feedback server to collect and encode (as metadata records) user and producer feedback on datasets; these metadata records will be used to assess the availability of user comments, ratings, expert reviews and user-supplied citations for a dataset. The GEO label will provide drill-down functionality which will allow a user to navigate to a GEO label page offering detailed quality information for its associated dataset. At this stage, we are developing the GEO label service that will be used to provide GEO labels on demand based on supplied metadata records. In this presentation, we will provide a comprehensive overview of the GEO label development process, with specific emphasis on the GEO label implementation and integration into the GEOSS.
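A rough sketch of the availability assessment described above: given which of the 8 facets each metadata document reports, mark each facet of the label as available or not. The dictionary keys and facet identifiers are hypothetical, not GeoViQua's actual encoding:

```python
# The 8 informational facets identified in the first user study.
FACETS = [
    "producer_information", "producer_comments", "standards_compliance",
    "community_advice", "ratings", "citations",
    "expert_judgements", "quantitative_quality",
]

def geo_label(producer_meta: dict, feedback_meta: dict) -> dict:
    """Combine producer and feedback metadata documents into a facet
    availability map for rendering a GEO label."""
    available = (set(producer_meta.get("facets", []))
                 | set(feedback_meta.get("facets", [])))
    return {facet: facet in available for facet in FACETS}
```

The label service would render one glyph per facet, greyed out when the facet is unavailable, with drill-down links to the underlying producer or feedback records.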

Relevance:

30.00%

Publisher:

Abstract:

The aim of this paper is to review some of the standards connected with multimedia and their metadata. We start with the MPEG family: MPEG-21 provides an open framework for multimedia delivery and consumption, and MPEG-7 is a multimedia content description standard. With the growth of the Internet, several formats were proposed for the description of media scenes. Some of them are open standards, such as VRML, X3D, SMIL, SVG, MPEG-4 BIFS, MPEG-4 XMT, MPEG-4 LASeR and COLLADA, published by ISO, W3C, etc. Television has become the most important mass medium, and standards such as MHEG, DAVIC, Java TV, MHP, GEM, OCAP and ACAP have been developed for it. Efficient video streaming is also presented. There exists a large number of standards for representing audiovisual metadata; we cover the Material Exchange Format (MXF), the Digital Picture Exchange (DPX), and the Digital Cinema Package (DCP).

Relevance:

30.00%

Publisher:

Abstract:

This panel presentation provided several use cases that detail the complexity of large-scale digital library system (DLS) migration from the perspective of three university libraries and a statewide academic library services consortium. Each described the methodologies developed at the beginning of their migration process, the unique challenges that arose along the way, how issues were managed, and the outcomes of their work. Florida Atlantic University, Florida International University, and the University of Central Florida are members of the state's academic library services consortium, the Florida Virtual Campus (FLVC). In 2011, the Digital Services Committee members began exploring alternatives to DigiTool, their shared FLVC hosted DLS. After completing a review of functional requirements and existing systems, the universities and FLVC began the implementation process of their chosen platforms. Migrations began in 2013 with limited sets of materials. As functionalities were enhanced to support additional categories of materials from the legacy system, migration paths were created for the remaining materials. Some of the challenges experienced with the institutional and statewide collaborative legacy collections were due to gradual changes in standards, technology, policies, and personnel. This was manifested in the quality of original digital files and metadata, as well as collection and record structures. Additionally, the complexities involved with multiple institutions collaborating and compromising throughout the migration process, as well as the move from a consortial support structure with a vendor solution to open source systems (both locally and consortially supported), presented their own sets of unique challenges. Following the presentation, the speakers discussed commonalities in their migration experience, including learning opportunities for future migrations.

Relevance:

30.00%

Publisher:

Abstract:

Metadata that is associated with either an information system or an information object for purposes of description, administration, legal requirements, technical functionality, use and usage, and preservation plays a critical role in ensuring the creation, management, preservation, use and re-use of trustworthy materials, including records. Recordkeeping metadata, of which one key type is archival description, plays a particularly important role in documenting the reliability and authenticity of records and recordkeeping systems, as well as the various contexts (legal-administrative, provenancial, procedural, documentary, and technical) within which records are created and kept as they move across space and time. In the digital environment, metadata is also the means by which it is possible to identify how record components – those constituent aspects of a digital record that may be managed, stored and used separately by the creator or the preserver – can be reassembled to generate an authentic copy of a record, or reformulated per a user's request as a customized output package. Issues relating to the creation, capture, management and preservation of adequate metadata are, therefore, integral to any research study addressing the reliability and authenticity of digital entities, regardless of the community, sector or institution within which they are being created. The InterPARES 2 Description Cross-Domain Group (DCD) examined the conceptualization, definitions, roles, and current functionality of metadata and archival description in terms of the requirements generated by InterPARES 1.
Because of the need to communicate the work of InterPARES in a meaningful way not only across other disciplines but also across different archival traditions; to interface with, evaluate and inform existing standards, practices and other research projects; and to ensure interoperability across the three focus areas of InterPARES 2, the Description Cross-Domain Group also addressed its research goals with reference to wider thinking about, and developments in, recordkeeping and metadata. InterPARES 2 addressed not only records, however, but a range of digital information objects (referred to as "entities" by InterPARES 2, but not to be confused with the term "entities" as used in metadata and database applications) that are the products and by-products of government, scientific and artistic activities carried out using dynamic, interactive or experiential digital systems. The nature of these entities was determined through a diplomatic analysis undertaken as part of extensive case studies of digital systems conducted by the InterPARES 2 Focus Groups. This diplomatic analysis established whether the entities identified during the case studies were records, non-records that nevertheless raised important concerns relating to reliability and authenticity, or "potential records." To be determined to be records, the entities had to meet the criteria outlined by archival theory – they had to have a fixed documentary format and stable content. It was not sufficient that they be considered or treated as records by the creator. "Potential records" is a new construct indicating that a digital system has the potential to create records upon demand, but does not actually fix and set aside records in the normal course of business.
The work of the Description Cross-Domain Group therefore addresses the metadata needs of all three categories of entities. Finally, since "metadata" as a term is used today so ubiquitously, and in so many different ways by different communities, that it is in peril of losing any specificity, part of the work of the DCD sought to name and type categories of metadata. It also addressed incentives for creators to generate appropriate metadata, as well as issues associated with the retention, maintenance and eventual disposition of the metadata that aggregates around digital entities over time.