876 results for Metadata standards
Abstract:
Climate modeling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of ever more complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce this data. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid the digital preservation of climate models by providing an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team’s development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change’s (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.
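The pairing of a generic record structure with a controlled vocabulary, as the abstract describes for the CIM, can be illustrated with a minimal sketch. This is not the actual CIM schema: the `atmos_grid` field and its vocabulary terms are invented for illustration; only the pattern (a flexible record restricted by a CV) reflects the abstract.

```python
# Illustrative sketch, not the real CIM: a generic metadata record is
# checked against a controlled vocabulary (CV) that restricts which
# instances count as valid. Field name and CV terms are hypothetical.
ATMOS_GRID_CV = {"gaussian", "cubed-sphere", "icosahedral"}

def validate(record: dict, cv: set, field: str) -> list:
    """Return a list of validation errors for one CV-restricted field."""
    errors = []
    value = record.get(field)
    if value is None:
        errors.append(f"missing required field: {field}")
    elif value not in cv:
        errors.append(f"{value!r} is not a term in the {field} vocabulary")
    return errors

record = {"model_name": "ExampleGCM", "atmos_grid": "gaussian"}
print(validate(record, ATMOS_GRID_CV, "atmos_grid"))  # []
print(validate({"atmos_grid": "cartesian"}, ATMOS_GRID_CV, "atmos_grid"))
```

The design point is that the record structure itself stays generic; validity is tightened only where a community agrees on a vocabulary, which is how the CIM accommodates variability between modelling groups.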
Abstract:
(Document pdf contains 193 pages)
Executive Summary (pdf, < 0.1 Mb)
1. Introduction (pdf, 0.2 Mb)
   1.1 Data sharing, international boundaries and large marine ecosystems
2. Objectives (pdf, 0.3 Mb)
3. Background (pdf, < 0.1 Mb)
   3.1 North Pacific Ecosystem Metadatabase
   3.2 First federation effort: NPEM and the Korea Oceanographic Data Center
   3.3 Continuing effort: Adding Japan’s Marine Information Research Center
4. Metadata Standards (pdf, < 0.1 Mb)
   4.1 Directory Interchange Format
   4.2 Ecological Metadata Language
   4.3 Dublin Core
      4.3.1 Elements of DC
   4.4 Federal Geographic Data Committee
   4.5 The ISO 19115 Metadata Standard
   4.6 Metadata stylesheets
   4.7 Crosswalks
   4.8 Tools for creating metadata
5. Communication Protocols (pdf, < 0.1 Mb)
   5.1 Z39.50
      5.1.1 What does Z39.50 do?
      5.1.2 Isite
6. Clearinghouses (pdf, < 0.1 Mb)
7. Methodology (pdf, 0.2 Mb)
   7.1 FGDC metadata
      7.1.1 Main sections
      7.1.2 Supporting sections
      7.1.3 Metadata validation
   7.2 Getting a copy of Isite
   7.3 NSDI Clearinghouse
8. Server Configuration and Technical Issues (pdf, 0.4 Mb)
   8.1 Hardware recommendations
   8.2 Operating system – Red Hat Linux Fedora
   8.3 Web services – Apache HTTP Server version 2.2.3
   8.4 Create and validate FGDC-compliant Metadata in XML format
   8.5 Obtaining, installing and configuring Isite for UNIX/Linux
      8.5.1 Download the appropriate Isite software
      8.5.2 Untar the file
      8.5.3 Name your database
      8.5.4 The zserver.ini file
      8.5.5 The sapi.ini file
      8.5.6 Indexing metadata
      8.5.7 Start the Clearinghouse Server process
      8.5.8 Testing the zserver installation
   8.6 Registering with NSDI Clearinghouse
   8.7 Security issues
9. Search Tutorial and Examples (pdf, 1 Mb)
   9.1 Legacy NSDI Clearinghouse search interface
   9.2 New GeoNetwork search interface
10. Challenges (pdf, < 0.1 Mb)
11. Emerging Standards (pdf, < 0.1 Mb)
12. Future Activity (pdf, < 0.1 Mb)
13. Acknowledgments (pdf, < 0.1 Mb)
14. References (pdf, < 0.1 Mb)
15. Acronyms (pdf, < 0.1 Mb)
16. Appendices
   16.1 KODC-NPEM meeting agendas and minutes (pdf, < 0.1 Mb)
      16.1.1 Seattle meeting agenda, August 22–23, 2005
      16.1.2 Seattle meeting minutes, August 22–23, 2005
      16.1.3 Busan meeting agenda, October 10–11, 2005
      16.1.4 Busan meeting minutes, October 10–11, 2005
   16.2 MIRC-NPEM meeting agendas and minutes (pdf, < 0.1 Mb)
      16.2.1 Seattle meeting agenda, August 14–15, 2006
      16.2.2 Seattle meeting minutes, August 14–15, 2006
      16.2.3 Tokyo meeting agenda, October 19–20, 2006
      16.2.4 Tokyo meeting minutes, October 19–20, 2006
   16.3 XML stylesheet conversion crosswalks (pdf, < 0.1 Mb)
      16.3.1 FGDCI to DIF stylesheet converter
      16.3.2 DIF to FGDCI stylesheet converter
      16.3.3 String-modified stylesheet
   16.4 FGDC Metadata Standard (pdf, 0.1 Mb)
      16.4.1 Overall structure
      16.4.2 Section 1: Identification information
      16.4.3 Section 2: Data quality information
      16.4.4 Section 3: Spatial data organization information
      16.4.5 Section 4: Spatial reference information
      16.4.6 Section 5: Entity and attribute information
      16.4.7 Section 6: Distribution information
      16.4.8 Section 7: Metadata reference information
      16.4.9 Sections 8, 9 and 10: Citation information, time period information, and contact information
   16.5 Images of the Isite server directory structure and the files contained in each subdirectory after Isite installation (pdf, 0.2 Mb)
   16.6 Listing of NPEM’s Isite configuration files (pdf, < 0.1 Mb)
      16.6.1 zserver.ini
      16.6.2 sapi.ini
   16.7 Java program to extract records from the NPEM metadatabase and write one XML file for each record (pdf, < 0.1 Mb)
   16.8 Java program to execute the metadata extraction program (pdf, < 0.1 Mb)
A1 Addendum 1: Instructions for Isite for Windows (pdf, 0.6 Mb)
A2 Addendum 2: Instructions for Isite for Windows ADHOST (pdf, 0.3 Mb)
Abstract:
Introduction: Authority record interchange requires establishing and using metadata standards such as the MARC 21 Format for Authority Data, a format used by several cataloging agencies, and the Metadata Authority Description Schema (MADS), a standard that has received little attention and is not yet widespread among agencies. Purpose: To present an introductory study of the Metadata Authority Description Schema (MADS). Methodology: Descriptive and exploratory bibliographic research. Results: The paper addresses the context of MADS’s creation, its goals and structure, and key issues related to the conversion of records from MARC 21 to MADS. Conclusions: The study concludes that, despite its limitations, MADS can be used to create simple authority records in the Web environment and beyond the library context.
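The MARC 21-to-MADS conversion discussed above can be sketched for one authority heading. This is a hedged, simplified illustration: the element nesting follows MADS’s general shape (mads/authority/name/namePart, with see-from tracings as variants), but the `to_mads()` helper, the input values, and the omission of namespaces and attributes are all simplifications, not a full crosswalk.

```python
# Hedged sketch: convert one MARC 21 authority heading into a simple
# MADS-style record. Element nesting follows MADS's general shape;
# namespaces and most attributes are omitted for brevity.
import xml.etree.ElementTree as ET

def to_mads(heading: str, variants: list) -> ET.Element:
    mads = ET.Element("mads")
    authority = ET.SubElement(mads, "authority")
    name = ET.SubElement(authority, "name", {"type": "personal"})
    ET.SubElement(name, "namePart").text = heading
    for v in variants:  # MARC 4XX see-from tracings become MADS variants
        variant = ET.SubElement(mads, "variant")
        vname = ET.SubElement(variant, "name", {"type": "personal"})
        ET.SubElement(vname, "namePart").text = v
    return mads

# e.g. a MARC field 100 $a heading plus one 400 $a see-from tracing
root = to_mads("Assis, Machado de, 1839-1908", ["Machado de Assis"])
print(ET.tostring(root, encoding="unicode"))
```

The simplicity of the output is the point the abstract makes: a flat authority record like this is easy to produce and publish on the Web, even outside the library context.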
Abstract:
Democratic governments raise taxes and charges and spend the revenue on delivering peace, order and good government. The delivery process begins with a legislature, which can provide a framework of legally enforceable rules enacted according to the government’s constitution. These rules confer rights and obligations that allow particular people to carry on particular functions at particular places and times. Metadata standards as applied to public records contain information about the functioning of government as distinct from the non-government sector of society. Metadata standards also apply to database construction: data entry, storage, maintenance, interrogation and retrieval depend on a controlled vocabulary to enable accurate retrieval of suitably catalogued records in a global information environment. Queensland’s socioeconomic progress now depends in part on technical efficiency in database construction to address queries about who does what, where and when; under what legally enforceable authority; and how the evidence of those facts is recorded. The Survey and Mapping Infrastructure Act 2003 (Qld) addresses technical aspects of the "where" questions – typically the officially recognised name of a place and a description of its boundaries. The current 10-year review of the Survey and Mapping Regulation 2004 provides a valuable opportunity to consider whether the Regulation makes sense in the context of a number of later laws concerned with the management of Public Sector Information (PSI), as well as policies for ICT hardware and software procurement. Removing ambiguities about how official place names are to be treated on a whole-of-government basis can achieve some short-term goals. Longer-term goals depend on a more holistic approach to information management – and current aspirations for more open government and community engagement are unlikely to be realised without such a longer-term vision.
Abstract:
This study identifies aspects that must be considered in defining granular subject metadata for Brazilian federal legislation. The object of study was the Sistema de Legislação Informatizada (Legin Web), available on the Portal da Câmara dos Deputados. The specific objectives were: to identify the types of subjects most widely used in indexing Brazilian federal legislation, as well as aspects of the information-seeking context that affect the identification of subject metadata; to analyze possible subject metadata for federal legislation based on metadata standards and information-organization models discussed in the literature; and, on that basis, to propose subject metadata for Brazilian federal legislation. The idea is to use this metadata to reduce the imprecision of search results for federal legislation, making the process faster and more efficient.
Abstract:
Anticipating the future growth of video information, archiving news is an important activity in the visual media industry. As the volume of archives increases, it becomes difficult for journalists to find the appropriate content using current search tools. This paper presents the details of a study we conducted of the news extraction systems used by different news channels in Kerala. Semantic Web technologies can be applied effectively here, since news archiving shares many of the characteristics and problems of the WWW. Because the visual news archives of different media organisations follow different metadata standards, interoperability between the resources is also an issue. The World Wide Web Consortium (W3C) has proposed a draft ontology framework for media resources that addresses these interoperability issues. This paper also discusses the proposed W3C framework and its drawbacks.
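The interoperability problem the abstract raises can be made concrete with a small crosswalk sketch: two archives describe the same story under different field names, and both are mapped onto shared properties in the style of the W3C Ontology for Media Resources (the `ma:title` and `ma:creator` property names come from that vocabulary; the per-channel field names and records here are invented for illustration).

```python
# Hedged sketch of metadata interoperability: per-archive field names
# (invented) are mapped onto shared W3C-style media properties so that
# records from different archives become comparable.
CROSSWALKS = {
    "channel_a": {"headline": "ma:title", "reporter": "ma:creator"},
    "channel_b": {"story_title": "ma:title", "byline": "ma:creator"},
}

def normalize(record: dict, source: str) -> dict:
    """Map one archive's native fields onto the shared vocabulary."""
    mapping = CROSSWALKS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

a = normalize({"headline": "Flood update", "reporter": "K. Nair"}, "channel_a")
b = normalize({"story_title": "Flood update", "byline": "K. Nair"}, "channel_b")
print(a == b)  # after normalization the two archives agree
```

A shared target vocabulary means each archive maintains one crosswalk instead of one per partner, which is the practical argument for a common ontology framework.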
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Information Science - FFC
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The UNESP Institutional Repository was created in 2013 and, for its implementation, was populated with automatically harvested data. Drawing on the UNESP experience, this paper presents the processes used to convert records collected from three different data sources (Web of Science, SciELO and Scopus) for inclusion in the repository. After the records were harvested, the metadata standards of Web of Science, SciELO and Scopus were mapped to the metadata application profile used in the repository. The records were collected as XML files and, for their conversion, stylesheets were written in the XSLT language. After this conversion, the XML files were converted into CSV files and then imported into the repository. We conclude that the conversion processes used achieved the repository’s initial goals and avoided the need to enter records manually.
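The pipeline described in this abstract (harvested XML → mapped application profile → CSV for bulk import) can be sketched in miniature. This is an illustration only: the source tags, the `dc.*` column names, and the in-memory XML are invented stand-ins, and the real conversion used XSLT stylesheets rather than Python.

```python
# Hedged sketch of the XML-to-CSV conversion pipeline: harvested XML
# records are mapped onto a (hypothetical) repository application
# profile and written as CSV rows ready for bulk import.
import csv, io
import xml.etree.ElementTree as ET

SOURCE_XML = """<records>
  <record><title>Paper A</title><year>2012</year></record>
  <record><title>Paper B</title><year>2013</year></record>
</records>"""

# Hypothetical crosswalk from source tags to profile fields
FIELD_MAP = {"title": "dc.title", "year": "dc.date.issued"}

rows = []
for rec in ET.fromstring(SOURCE_XML).findall("record"):
    rows.append({FIELD_MAP[child.tag]: child.text for child in rec})

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["dc.title", "dc.date.issued"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

The key step in either implementation (XSLT or otherwise) is the field mapping: once each source tag is bound to a profile field, the CSV serialization is mechanical and the batch import needs no manual record entry.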
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)