906 results for "Metadata schema"


Relevance:

20.00%

Publisher:

Abstract:

Cultural content on the Web is available in various domains (cultural objects, datasets, geospatial data, moving images, scholarly texts and visual resources), concerns various topics, is written in different languages, is targeted at both laymen and experts, and is provided by different communities (libraries, archives, museums and the information industry) and individuals (Figure 1). The integration of information technologies and cultural heritage content on the Web is expected to have an impact on everyday life from the point of view of institutions, communities and individuals. In particular, collaborative environments can recreate 3D navigable worlds that can offer new insights into our cultural heritage (Chan 2007). However, the main barrier is that end-users of cultural content, as well as the organisations and communities managing and producing it, struggle to find and relate cultural heritage information. In this paper, we explore several visualisation techniques for supporting cultural interfaces, where metadata plays an essential role in supporting search and communication among end-users (Figure 2). A conceptual framework was developed to integrate the data, purpose, technology, impact, and form components of a collaborative environment. Our preliminary results show that collaborative environments can help with cultural heritage information sharing and communication tasks because of the way in which they provide a visual context to end-users. They can be regarded as distributed virtual reality systems that offer graphically realised, potentially infinite, digital information landscapes. Moreover, collaborative environments also provide a new way of interaction between an end-user and a cultural heritage data set. Finally, the visualisation of a dataset's metadata plays an important role in helping end-users in their search for heritage content on the Web.

Relevance:

20.00%

Publisher:

Abstract:

The Metadata Provenance Task Group aims to define a data model that allows for making assertions about description sets. Creating a shared model of the data elements required to describe an aggregation of metadata statements makes it possible to collectively import, access, use and publish facts about the quality, rights, timeliness, data source type, trustworthiness, etc. of the described statements. In this paper we outline the preliminary model created by the task group, together with first examples that demonstrate how the model is to be used.
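The core idea can be sketched as a provenance record attached to a whole set of metadata statements rather than to any single one. The class and field names below are illustrative assumptions for the example, not the task group's actual model:

```python
from dataclasses import dataclass

# Hypothetical sketch: a "description set" groups metadata statements, and a
# provenance assertion states facts (source type, rights, trust, ...) about
# the set as a whole. All names here are assumptions, not the group's model.

@dataclass
class Statement:
    subject: str
    property: str
    value: str

@dataclass
class DescriptionSet:
    statements: list

@dataclass
class ProvenanceAssertion:
    target: DescriptionSet      # the description set being described
    asserted_by: str            # who makes the assertion
    facts: dict                 # e.g. {"source_type": ..., "license": ...}

catalogue = DescriptionSet(statements=[
    Statement("book:42", "dc:title", "Moby-Dick"),
    Statement("book:42", "dc:creator", "Herman Melville"),
])
assertion = ProvenanceAssertion(
    target=catalogue,
    asserted_by="org:example-library",
    facts={"source_type": "library catalogue", "license": "CC0"},
)
print(assertion.facts["license"])  # CC0
```

The point of the design is that consumers can query the assertion (one record) instead of inspecting provenance on every individual statement.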

Relevance:

20.00%

Publisher:

Abstract:

Aeronautical information plays an essential role in air safety, the chief objective of the aeronautical industry. Community policies and projects are currently being developed for the adequate management of a single European airspace. To make this possible, appropriate information management and a set of tools that allow this information to be shared and exchanged, ensuring its interoperability and integrity, are necessary. This paper presents the development and implementation of a metadata profile for the description of aeronautical information, based on international regulations and recommendations applied within the geographic scope. The elements taken into account for its development are described, as well as the implementation process and the results obtained.
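One way to picture what a metadata profile buys you: the profile fixes which elements a record must carry, so conformance can be checked mechanically. The element names below are assumptions for illustration; the paper's actual profile follows international geographic-metadata regulations:

```python
# Minimal sketch of profile validation, assuming hypothetical element names.
# A profile restricts/extends a base standard by declaring required elements.

REQUIRED_ELEMENTS = {"title", "abstract", "spatial_extent",
                     "effective_date", "data_quality", "contact"}

def validate(record: dict) -> list:
    """Return the profile elements missing from a metadata record."""
    return sorted(REQUIRED_ELEMENTS - record.keys())

record = {
    "title": "Terminal area obstacle data",
    "abstract": "Obstacles within a terminal manoeuvring area.",
    "spatial_extent": "40.2N-40.7N, 3.9W-3.3W",
    "contact": "AIS office",
}
print(validate(record))  # ['data_quality', 'effective_date']
```

In practice such a check would run against the profile's formal schema rather than a hard-coded set, but the contract is the same: a record either satisfies the profile or reports exactly which elements are missing.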

Relevance:

20.00%

Publisher:

Abstract:

The goal of the W3C's Media Annotation Working Group (MAWG) is to promote interoperability between multimedia metadata formats on the Web. Audiovisual data is omnipresent on today's Web, yet different interaction interfaces and especially diverse metadata formats prevent unified search, access, and navigation. MAWG has addressed this issue by developing an interlingua ontology and an associated API. This article discusses the rationale and core concepts of the ontology and API for media resources. The specifications developed by MAWG enable interoperable, contextualized and semantic annotation and search, independent of the source metadata format, and connect multimedia data to the Linked Data cloud. Some demonstrators of such applications are also presented in this article.
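The interlingua idea can be sketched in a few lines: properties from diverse source formats are mapped onto one pivot vocabulary, so a single query works regardless of where the metadata came from. The `ma:` property names follow W3C's Ontology for Media Resources; the source-format field names and the mapping table are simplified assumptions for the example:

```python
# Sketch of an interlingua mapping, assuming simplified source fields.
# Each source format gets a table from its own fields to pivot properties.

MAPPINGS = {
    "dublin_core": {"dc:title": "ma:title", "dc:creator": "ma:creator"},
    "exif":        {"ImageDescription": "ma:title", "Artist": "ma:creator"},
}

def to_interlingua(fmt: str, record: dict) -> dict:
    """Normalise a source-format record into the pivot (ma:) vocabulary."""
    table = MAPPINGS[fmt]
    return {table[k]: v for k, v in record.items() if k in table}

dc   = to_interlingua("dublin_core",
                      {"dc:title": "Sintel", "dc:creator": "Blender"})
exif = to_interlingua("exif",
                      {"ImageDescription": "Sintel", "Artist": "Blender"})
print(dc == exif)  # True: both formats normalise to the same ma: view
```

A search layer then only ever queries `ma:` properties, which is what makes it independent of the source metadata format.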

Relevance:

20.00%

Publisher:

Abstract:

Sensor network deployments have become a primary source of big data about the real world that surrounds us, measuring a wide range of physical properties in real time. With such large amounts of heterogeneous data, a key challenge is to describe and annotate sensor data with high-level metadata, using and extending models such as ontologies. However, to automate this task there is a need to enrich the sensor metadata using the actual observed measurements and to extract useful meta-information from them. This paper proposes a novel approach to the characterization and extraction of semantic metadata through the analysis of raw sensor observations. The approach consists of using approximations to represent the raw sensor measurements, based on distributions of the observation slopes; building a classification scheme to automatically infer sensor metadata such as the type of observed property; and integrating the semantic analysis results with existing sensor network metadata.
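A toy version of the slope-based idea makes the pipeline concrete: compute the slopes between consecutive observations, summarise their distribution, and classify from that summary. The two-class rule and the threshold below are illustrative assumptions, not the paper's classifier:

```python
# Minimal sketch, assuming a simple two-class rule on the mean absolute
# slope of the observation series (the real scheme uses richer features
# of the slope distribution).

def slopes(values, dt=1.0):
    """Slopes between consecutive observations, assuming uniform sampling."""
    return [(b - a) / dt for a, b in zip(values, values[1:])]

def classify(values):
    s = slopes(values)
    mean_abs = sum(abs(x) for x in s) / len(s)
    # Slowly varying signals (e.g. outdoor temperature) have small slopes;
    # rapidly switching ones (e.g. a door contact sensor) have large ones.
    return "slow-varying" if mean_abs < 0.5 else "fast-switching"

temperature = [20.0, 20.1, 20.3, 20.2, 20.4]  # illustrative readings
door_state  = [0, 1, 0, 0, 1]
print(classify(temperature), classify(door_state))
```

The inferred label ("slow-varying" here standing in for an observed-property type) is what would then be written back into the sensor's metadata record.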

Relevance:

20.00%

Publisher:

Abstract:

Background: This project's idea arose from the need of the professors of the department of Computer Languages and Systems and Software Engineering (DLSIIS) to develop exams with multiple-choice questions in a more productive and comfortable way than the one they currently use. The goal of this project is to develop an application that can be easily used by the professors of the DLSIIS when they need to create a new exam. The main problems of the previous creation process were the difficulty of searching for a question that meets specific conditions in the previous exam files, and the difficulty of editing exams because of the format of the text files employed. Result: The results shown in this document allow the reader to understand how the final application works and how it successfully addresses every customer need. The elements that help the reader understand the application are the structure of the application, the design of the different components, diagrams that show the workflow of the application, and selected fragments of code. Conclusions: The goals stated in the application requirements are met. In addition, some reflections are offered on the work performed during the development of the application and on how it improved the author's skills in web development.
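The search problem described above largely disappears once questions carry structured metadata instead of living in flat exam text files. A minimal sketch, with field names that are assumptions for the example rather than the application's actual schema:

```python
# Hypothetical question bank: each question is a record with metadata,
# so "find a question meeting specific conditions" becomes a filter.

questions = [
    {"topic": "compilers", "difficulty": 2, "used_in": ["2013-jan"]},
    {"topic": "compilers", "difficulty": 4, "used_in": []},
    {"topic": "databases", "difficulty": 4, "used_in": ["2014-jun"]},
]

def search(bank, **conditions):
    """Return questions whose fields equal every given condition."""
    return [q for q in bank
            if all(q.get(k) == v for k, v in conditions.items())]

# e.g. a hard compilers question that has never appeared in an exam:
unused_hard = [q for q in search(questions, topic="compilers")
               if q["difficulty"] >= 4 and not q["used_in"]]
print(len(unused_hard))  # 1
```

The same structured records also make exam editing tractable, since an exam can be assembled by reference to question identifiers rather than by cutting text files apart.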

Relevance:

20.00%

Publisher:

Abstract:

This paper examines the challenges facing the EU regarding data retention, particularly in the aftermath of the Digital Rights Ireland judgment of the Court of Justice of the European Union (CJEU) of April 2014, which found the Data Retention Directive 2006/24/EC to be invalid. It first offers a brief historical account of the Data Retention Directive and then moves to a detailed assessment of what the judgment means for determining the lawfulness of data retention from the perspective of the EU Charter of Fundamental Rights: what is wrong with the Data Retention Directive and how would it need to be changed to comply with the right to respect for privacy? The paper also looks at the responses to the judgment from the European institutions and elsewhere, and presents a set of policy suggestions to the European institutions on the way forward. It is argued here that one of the main issues underlying the Digital Rights Ireland judgment has been the role of fundamental rights in the EU legal order, and in particular the extent to which the retention of metadata for law enforcement purposes is consistent with EU citizens' right to respect for privacy and to data protection. The paper offers three main recommendations to EU policy-makers: first, to give priority to a full and independent evaluation of the value of the Data Retention Directive; second, to assess the judgment's implications for other large EU information systems and proposals that provide for the mass collection of metadata from innocent persons in the EU; and third, to adopt without delay the proposal for Directive COM(2012)10 dealing with data protection in the fields of police and judicial cooperation in criminal matters.

Relevance:

20.00%

Publisher:

Abstract:

The present data set provides an Excel file in a zip archive. The file lists 334 samples of the size-fractionated eukaryotic plankton community with a suite of associated metadata (Database W1). Note that while most samples represented the piconano- (0.8-5 µm, 73 samples), nano- (5-20 µm, 74 samples), micro- (20-180 µm, 70 samples), and meso- (180-2000 µm, 76 samples) planktonic size fractions, some represented different organismal size fractions: 0.2-3 µm (1 sample), 0.8-20 µm (6 samples), 0.8 µm - infinity (33 samples), and 3-20 µm (1 sample). The table contains the following fields: a unique sample sequence identifier; the sampling station identifier; the Tara Oceans sample identifier (TARA_xxxxxxxxxx); an INSDC accession number allowing retrieval of the raw sequence data from the major nucleotide databases (short read archives at EBI, NCBI or DDBJ); the depth of sampling (Subsurface - SUR or Deep Chlorophyll Maximum - DCM); the targeted size range; the sequence template (either DNA, or WGA/DNA if the DNA extracted from the filters was Whole Genome Amplified); the latitude of the sampling event (decimal degrees); the longitude of the sampling event (decimal degrees); the time and date of the sampling event; the device used to collect the sample; the logsheet event corresponding to the sampling event; and the volume of water sampled (liters). Then follows information on the bioinformatics cleaning pipeline shown in Figure W2 of the supplementary literature publication: the number of merged pairs present in the raw sequence file; the number of those sequences matching both primers; the number of sequences after quality-check filtering; the number of sequences after chimera removal; and finally the number of sequences after selecting only barcodes present in at least three copies in total and in at least two samples.
Finally, the following are given for each sequence sample: the number of distinct sequences (metabarcodes); the number of OTUs; the average number of barcodes per OTU; the Shannon diversity index based on barcodes for each sample (URL of W4 dataset in PANGAEA); and the Shannon diversity index based on each OTU (URL of W5 dataset in PANGAEA).
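The per-sample Shannon diversity indices listed in the table follow the standard formula H = -Σ pᵢ ln pᵢ over barcode (or OTU) relative abundances; a short sketch, shown on made-up counts rather than the Tara Oceans data:

```python
import math

# Shannon diversity index H = -sum(p_i * ln p_i), where p_i is the relative
# abundance of barcode (or OTU) i in the sample. Counts below are illustrative.

def shannon(counts):
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

barcode_counts = [50, 30, 20]
print(round(shannon(barcode_counts), 3))  # 1.03
```

H is maximal (ln of the number of barcodes) when abundances are even, and 0 when a single barcode dominates completely, which is why the dataset reports it both at the barcode and at the OTU level.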

Relevance:

20.00%

Publisher:

Abstract:

"... In its basic arrangement drafted in the year 1879 by ... Dr. O. Hartwig."