924 results for Dublin Core
Abstract:
The Metadata Provenance Task Group aims to define a data model that allows assertions to be made about description sets. Creating a shared model of the data elements required to describe an aggregation of metadata statements makes it possible to collectively import, access, use and publish facts about the quality, rights, timeliness, data source type, trust situation, etc. of the described statements. In this paper we outline the preliminary model created by the task group, together with first examples that demonstrate how the model is to be used.
Abstract:
In the context of the Semantic Web, natural language descriptions associated with ontologies have proven to be of major importance not only to support ontology developers and adopters, but also to assist in tasks such as ontology mapping, information extraction, or natural language generation. In the state-of-the-art we find some attempts to provide guidelines for URI local names in English, and also some disagreement on the use of URIs for describing ontology elements. When trying to extrapolate these ideas to a multilingual scenario, some of these approaches fail to provide a valid solution. On the basis of some real experiences in the translation of ontologies from English into Spanish, we provide a preliminary set of guidelines for naming and labeling ontologies in a multilingual scenario.
Abstract:
We describe the datos.bne.es library dataset. The dataset makes available the authority and bibliographic catalogue of the Biblioteca Nacional de España (BNE, National Library of Spain) as Linked Data. The catalogue contains around 7 million authority and bibliographic records. The records, in MARC 21 format, were transformed to RDF and modelled using IFLA (International Federation of Library Associations) ontologies and other well-established vocabularies such as RDA (Resource Description and Access) or the Dublin Core Metadata Element Set. A tool named MARiMbA automated the RDF generation process and the data linkage to DBpedia and other library linked data resources such as VIAF (Virtual International Authority File) or GND (Gemeinsame Normdatei, the authority dataset of the German National Library).
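The kind of field-level transformation this abstract describes can be sketched as a simple crosswalk. The following is a minimal, hypothetical illustration of mapping MARC 21 fields to Dublin Core elements; the field choices, record layout, and mapping table are invented for illustration and are not the actual MARiMbA rules.

```python
# Illustrative sketch: mapping a handful of MARC 21 fields to Dublin Core
# elements. A simplified MARC record is modelled as {tag: {subfield: value}}.
marc_record = {
    "100": {"a": "Cervantes Saavedra, Miguel de"},  # main entry, personal name
    "245": {"a": "Don Quijote de la Mancha"},       # title statement
    "260": {"c": "1605"},                           # date of publication
}

# Hypothetical crosswalk from (tag, subfield) to a Dublin Core element name
MARC_TO_DC = {
    ("100", "a"): "creator",
    ("245", "a"): "title",
    ("260", "c"): "date",
}

def marc_to_dc(record):
    """Return a Dublin Core dict for the mappable fields of a MARC record."""
    dc = {}
    for (tag, sub), element in MARC_TO_DC.items():
        value = record.get(tag, {}).get(sub)
        if value is not None:
            dc[element] = value
    return dc

print(marc_to_dc(marc_record))
# {'creator': 'Cervantes Saavedra, Miguel de', 'title': 'Don Quijote de la Mancha', 'date': '1605'}
```

A real transformation such as MARiMbA's additionally mints URIs for entities and links them to external datasets (DBpedia, VIAF, GND); the table above only shows the crosswalk step.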
Abstract:
Cultural heritage is a complex and diverse concept, which brings together a wide domain of information. Resources linked to a cultural heritage site may consist of physical artefacts, books, works of art, pictures, historical maps, aerial photographs, archaeological surveys and 3D models. Moreover, all these resources are listed and described by a variety of metadata specifications that allow online search and consultation of their most basic characteristics. Some examples include ISO 19115, Dublin Core, AAT, CDWA, CCO, DACS, MARC, MoReq, MODS, MuseumDat, TGN, SPECTRUM, VRA Core and Z39.50. Gateways are in place to fit these metadata standards into those used in an SDI (ISO 19115 or INSPIRE), but substantial work remains to be done for the complete incorporation of cultural heritage information. The aim of this paper is therefore to demonstrate how the complexity of cultural heritage resources can be dealt with through a visual exploration of their metadata within a 3D collaborative environment. 3D collaborative environments are promising tools that represent the new frontier of our capacity for learning, understanding, communicating and transmitting culture.
Abstract:
The application of Geographic Information Systems (GIS) to Archaeology and other humanities disciplines is nothing new. Their evolution towards distributed, interoperable systems, and towards structures with policies for the shared and coordinated use of data, however, is; all of these aspects are covered by a Spatial Data Infrastructure (SDI). INSPIRE is the leading European exponent in matters of initiative and legal framework in this area. Archaeological methodology gathers and generates a great amount of data, whose intrinsic attributes include position and time, aspects traditionally exploited by GIS. Data are catalogued, organised, maintained, shared and published, and potential consumers are beginning to have them at their disposal. All this information, traditionally stored on cards and later in relational alphanumeric databases, may in many cases be considered «metadata», as it contains information useful to other users in the processes of discovery and exploitation of data. Moreover, these data are often accompanied by information about themselves, describing their specifications, quality, etc. We use metadata every day: in a book's bibliographic record, or in the specifications of a computer. Metadata may be defined as «descriptive information on the context, quality, condition and characteristics of a resource, datum or object, whose purpose is to facilitate its retrieval, identification, evaluation, preservation and/or interoperability».
There is an initiative in Spain to standardise the description of metadata for geospatial datasets: the Núcleo Español de Metadatos (NEM, Spanish Metadata Core). It contains elements for describing the particular characteristics of geographic data, and includes all the mandatory elements of ISO 19115 and of the Dublin Core metadata standard, traditionally used in library science. Aware of the need for metadata to optimise the search and retrieval of data, the aim is to formalise the documentation of archaeological data through the use of the NEM, thus achieving the interoperability of archaeological information.
Abstract:
Provenance is key to describing the evolution of a resource, the entity responsible for its changes and how these changes affect its final state. A proper description of the provenance of a resource shows to whom it is attributed and can help resolve whether it can be trusted. This tutorial will provide an overview of the W3C PROV data model and its serialization as an OWL ontology. The tutorial will incrementally explain the features of the PROV data model, from the core starting terms to the most complex concepts. Finally, the tutorial will show the relation between PROV-O and the Dublin Core metadata terms.
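The relation between PROV-O and Dublin Core that the tutorial covers amounts to a term-level correspondence. A minimal sketch, assuming the two direct mappings below (which follow the W3C Dublin Core to PROV mapping note, but are far from a complete or normative crosswalk):

```python
# Illustrative correspondence between Dublin Core terms and PROV-O
# properties; only two well-known direct mappings are shown.
DC_TO_PROV = {
    "dct:creator": "prov:wasAttributedTo",  # agent the entity is attributed to
    "dct:source":  "prov:hadPrimarySource", # primary source the entity derives from
}

def prov_equivalent(dc_term):
    """Look up the PROV-O property corresponding to a Dublin Core term, if any."""
    return DC_TO_PROV.get(dc_term)

print(prov_equivalent("dct:creator"))  # prov:wasAttributedTo
```

Many Dublin Core terms have no single PROV-O equivalent and require more complex patterns (for example, terms involving dated events), which is why a lookup like this can only cover the direct cases.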
Abstract:
Introduction – Building on a previous project of the University of Lisbon (UL) – a bibliometric benchmarking analysis of the University of Lisbon for the period 2000-2009 – a database was created to support research information (ULSR). However, this system was not integrated with other systems existing at the University, such as the UL Libraries Integrated System (SIBUL) and the Repository of the University of Lisbon (Repositório.UL). Since the libraries were called to be part of the process, the Faculty of Pharmacy Library's team felt it was very important to get all the systems connected or, at least, to use that data in the library systems. Objectives – The main goals were to centralise all the scientific research produced at the Faculty of Pharmacy, make it available to the entire Faculty, involve researchers and the library team, capitalise on and reinforce team work by integrating several distinct projects, and reduce task redundancy. Methods – Our starting point was the data collection imported from the ISI Web of Science (WoS), for the period 2000-2009, into ULSR. All the researchers and publications indexed in WoS were identified. A first validation was done to identify all the researchers and their affiliations (university, faculty, department and unit). The final validation was done by each researcher. In a second round, covering the same period, all Faculty of Pharmacy researchers identified their scientific work published in other databases/resources (not WoS). For our strategy it was important to get all the references, and essential to relate them to the corresponding digital objects. Each researcher previously identified was asked to register all the references of their 'not WoS' published works in ULSR. At the same time, they were to submit all PDF files (for both WoS and not WoS works) to a personal area of the Web server.
This effort enabled us to do a more reliable validation and to prepare the data and metadata to be imported into the Repository and the Library Catalogue. Results – 558 documents related to 122 researchers were added to ULSR. 1378 bibliographic records (WoS + not WoS) were converted into UNIMARC and Dublin Core formats. All records were integrated into the catalogue and the repository. Conclusions – Although different strategies could be adopted by each library team, we intend to share this experience and give some tips on what could be done and how the Faculty of Pharmacy created and implemented its strategy.
Abstract:
This thesis examines the effects of self-defined extensions on the mutual compatibility of SKOS thesauri. As a foundation, the workings of RDF, SKOS, SKOS-XL and Dublin Core metadata are first explained and the syntax used is clarified. A description follows of the structure of conventional thesauri, including the standards that apply to them. The process of converting a conventional thesaurus into SKOS is then presented. In order to examine the self-defined extensions and their consequences, five SKOS thesauri are described as examples, including general information, their structure, the extensions they use, and a diagram presenting the structure as an overview. On the basis of these thesauri, it is then described how mappings between the thesauri are created and which challenges arise in doing so.
Abstract:
The U.S. National Science Foundation metadata registry under development for the National Science Digital Library (NSDL) is a repository intended to manage both metadata schemes and schemas. The focus of this draft discussion paper is on the scheme side of the development work. In particular, the paper is concerned with issues around the creation of historical snapshots of concept changes and their encoding in SKOS. By framing the problem as we see it, we hope to find an optimal solution to our need for a SKOS encoding of these snapshots. Since what we seek to model is concept change, it is necessary at the outset to make clear that we are not talking about changes to a concept of such a nature that they would require the declaration of a new concept with its own URI. In the project, we avoid the terms “version” and “versioning” with regard to changes in concepts and reserve their use for significant changes to schemes as a whole. Significant changes triggering a new scheme version might include changes in scheme documentation that express a significant shift in the purpose, use or architecture of the scheme. We use the term “snapshot” to denote the state of a scheme at identifiable points in time. Thus, snapshots are identifiable views of a scheme that record the incremental changes that have occurred to concepts, relationships among concepts, and scheme documentation since the last snapshot. Aspects of concept change occur that we need to capture and make available both through the registry and, potentially, in the transmission of a scheme to other registries. We call these captured states “concept instances.”
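The snapshot model described above can be sketched in data terms: a scheme accumulates dated snapshots, each recording the concept instances changed since the last one. The class and field names below are invented for illustration and are not the NSDL registry's actual model or its SKOS encoding.

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    taken_at: str                                 # identifiable point in time (ISO date)
    changes: list = field(default_factory=list)   # concept instances recorded

@dataclass
class Scheme:
    uri: str
    snapshots: list = field(default_factory=list)

    def take_snapshot(self, taken_at, changes):
        """Record a view of the scheme: the incremental changes since the last snapshot."""
        self.snapshots.append(Snapshot(taken_at, changes))

    def history_of(self, concept_uri):
        """All recorded concept instances for one concept, oldest first."""
        return [(s.taken_at, c) for s in self.snapshots
                for c in s.changes if c["concept"] == concept_uri]

# A concept's label changes between snapshots, without a new URI being minted.
scheme = Scheme("http://example.org/scheme")
scheme.take_snapshot("2007-01-01", [{"concept": "ex:Metals", "prefLabel": "Metals"}])
scheme.take_snapshot("2008-06-01", [{"concept": "ex:Metals", "prefLabel": "Metallic materials"}])
print(scheme.history_of("ex:Metals"))
```

Note that both entries share the same concept URI: the history records concept instances of one concept, which is exactly the distinction the paper draws between a snapshot and a new scheme version.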
Abstract:
Knowledge organization in the networked environment is guided by standards. Standards in knowledge organization are built on principles. For example, NISO Z39.19-1993 Guide to the Construction of Monolingual Thesauri (now undergoing revision) and NISO Z39.85-2001 Dublin Core Metadata Element Set are two standards used in many implementations. Both of these standards were crafted with knowledge organization principles in mind. It is therefore standards work guided by knowledge organization principles that can affect the design of information services and technologies. This poster outlines five threads of thought that inform knowledge organization principles in the networked environment. An understanding of each of these five threads informs system evaluation. The evaluation of knowledge organization systems should be tightly linked to a rigorous understanding of the principles of construction. Thus some foundational evaluation questions grow from an understanding of standards and principles: on what principles is this knowledge organization system built? How well does this implementation meet the ideal conceptualization of those principles? How does this tool compare to others built on the same principles?
Abstract:
The paper suggests extensions to SKOS Core to make explicit where concepts in a knowledge organization system have changed from one version of the system to another.
Abstract:
This poster presents the authors’ work to date on developing an application profile for authenticity metadata (the IPAM, or InterPARES Authenticity Metadata), including (1) the functional requirements, (2) metadata elements derived from the Chain of Preservation model from the InterPARES research project, (3) a crosswalk of a sample of IPAM elements to Dublin Core, PREMIS, and MoReq2010, (4) those elements deemed essential to presume the authenticity of a record as it moves from creation to permanent preservation, and (5) next steps: integrating into the Archivematica preservation system the core elements of the application profile relating to maintaining the presumption of authenticity through preservation and access.
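A crosswalk like the one in point (3) is, in data terms, a table from source elements to target-standard elements. The sketch below is hedged throughout: the IPAM element names and their Dublin Core, PREMIS and MoReq2010 targets are invented placeholders, not the actual InterPARES crosswalk.

```python
# Hypothetical crosswalk: IPAM element -> (Dublin Core, PREMIS, MoReq2010).
# All element names below are illustrative placeholders.
CROSSWALK = {
    "ipam:creator":     ("dct:creator", "premis:agentName", "moreq:Agent"),
    "ipam:dateCreated": ("dct:created", "premis:dateCreatedByApplication", "moreq:CreatedTimestamp"),
}

TARGETS = {"dc": 0, "premis": 1, "moreq2010": 2}

def map_element(ipam_element, target):
    """Return the target-standard element for an IPAM element, or None if unmapped."""
    entry = CROSSWALK.get(ipam_element)
    return entry[TARGETS[target]] if entry else None

print(map_element("ipam:creator", "dc"))  # dct:creator
```

Unmapped elements returning None matters in practice: a crosswalk for authenticity metadata must make visible which elements have no equivalent in the target standard, since those are the ones whose loss could weaken the presumption of authenticity.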
Abstract:
The purpose of this case study is to identify and explain the scope of memory construction in Colombian society, not only as an element of remembrance but also as a tool of resistance and action for those who construct and reconstruct it. The research therefore seeks to determine the link between memory and political participation, analysing and explaining how women belonging to Ruta Pacífica de las Mujeres have constructed memory of the processes of violence they face, and how this has made possible the consolidation of a collective identity through which they resist violence and the cultural codes that perpetuate it. Accordingly, an analysis is carried out of the documents produced by Ruta Pacífica on memory construction, together with an examination of the participatory processes undertaken by the women belonging to Ruta Pacífica.
Abstract:
This research proposes to reflect on the creation of a model for structuring and cataloguing metadata for Open Educational Repositories. Exploratory and descriptive in character, the research uses a review of the relevant bibliography and documentation to ground and analytically treat the corpus of the work. The research was carried out in two complementary stages: a) a survey and analysis of the main metadata standards (MARC 21, Dublin Core, IEEE LOM and ISO 19788-2), with the aim of defining the main descriptor fields applicable to Open Educational Repositories; and b) identification of the descriptor fields used in the Universidade Aberta do Brasil system. Through the analysis of the metadata standards and the identification of the descriptor fields in use, two scenarios were identified: the first, macro, is characterised by the intra- and extra-institutional relationships between Thematic and Institutional Repositories; the second, micro, is based on the descriptor metadata of the didactic units. The interdependence between the macro and micro scenarios is verified, as is the need for standardisation and control mechanisms. The model resulting from the analysis discusses the uniform use of controlled vocabularies in as many fields as possible, the creation of Thematic Editorial Boards, and the establishment of dependency links between learning objects, course units, courses, authors and institutions, which makes it possible to relate the objects, identify their origin, and contextualise them.