992 results for Metadata


Relevance:

20.00%

Publisher:

Abstract:

The Metadata Provenance Task Group aims to define a data model that allows for making assertions about description sets. Creating a shared model of the data elements required to describe an aggregation of metadata statements makes it possible to collectively import, access, use, and publish facts about the quality, rights, timeliness, data source type, trust situation, etc. of the described statements. In this paper we outline the preliminary model created by the task group, together with first examples that demonstrate how the model is to be used.
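A minimal sketch of the idea in Python, with hypothetical class and property names (not the task group's actual vocabulary): a description set aggregates metadata statements so that provenance facts can be attached to the aggregation as a whole.

```python
from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class Statement:
    # A single metadata statement: resource, property, value
    subject: str
    predicate: str
    value: str

@dataclass
class DescriptionSet:
    # An aggregation of metadata statements that provenance
    # assertions can refer to as a unit
    statements: List[Statement] = field(default_factory=list)
    # Provenance facts about the set as a whole (quality, rights,
    # data source type, timeliness, ...) -- keys are illustrative
    provenance: Dict[str, str] = field(default_factory=dict)

ds = DescriptionSet(
    statements=[Statement("book:1", "dc:title", "Moby-Dick")],
    provenance={"dc:creator": "Library X", "dc:rights": "CC0",
                "sourceType": "manual-cataloguing"},
)
```

Attaching provenance to the set rather than to individual statements is what lets consumers judge a whole imported batch of metadata at once.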

Relevance:

20.00%

Publisher:

Abstract:

Aeronautical information plays an essential role in air safety, the chief objective of the aeronautical industry. Community policies and projects are currently being developed for the adequate management of a single European air space. To make this possible, appropriate information management and a set of tools for sharing and exchanging this information, while ensuring its interoperability and integrity, are necessary. This paper presents the development and implementation of a metadata profile for the description of aeronautical information, based on international regulations and recommendations applied within the geographic scope. The elements taken into account for its development are described, as well as the implementation process and the results obtained.

Relevance:

20.00%

Publisher:

Abstract:

The goal of the W3C's Media Annotation Working Group (MAWG) is to promote interoperability between multimedia metadata formats on the Web. As everyone has experienced, audiovisual data is omnipresent on today's Web. However, different interaction interfaces and especially diverse metadata formats prevent unified search, access, and navigation. MAWG has addressed this issue by developing an interlingua ontology and an associated API. This article discusses the rationale and core concepts of the ontology and API for media resources. The specifications developed by MAWG enable interoperable, contextualized, and semantic annotation and search, independent of the source metadata format, and connect multimedia data to the Linked Data cloud. Some demonstrators of such applications are also presented in this article.
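A toy illustration of the interlingua idea, with made-up records and property names only loosely modelled on the MAWG "ma:" vocabulary (the actual ontology and API are far richer): per-format mappings translate heterogeneous metadata into one common property set, enabling format-independent search.

```python
# Hypothetical source records in two different metadata formats
exif = {"ImageDescription": "Sunset over Lisbon", "DateTimeOriginal": "2011:06:01"}
dc   = {"dc:title": "Sunset over Lisbon", "dc:date": "2011-06-01"}

# Per-format mappings onto a shared set of core properties
# (property names are illustrative, not the official MAWG terms)
MAPPINGS = {
    "exif": {"ImageDescription": "ma:title", "DateTimeOriginal": "ma:date"},
    "dc":   {"dc:title": "ma:title", "dc:date": "ma:date"},
}

def to_core(record, fmt):
    """Translate a source-format record into the common property set."""
    mapping = MAPPINGS[fmt]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

# The same query now works regardless of the source format
assert to_core(exif, "exif")["ma:title"] == to_core(dc, "dc")["ma:title"]
```

A real implementation would also normalize values (note the differing date formats above), which is part of what the MAWG API specifies.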

Relevance:

20.00%

Publisher:

Abstract:

Sensor network deployments have become a primary source of big data about the real world that surrounds us, measuring a wide range of physical properties in real time. With such large amounts of heterogeneous data, a key challenge is to describe and annotate sensor data with high-level metadata, using and extending models such as ontologies. However, to automate this task there is a need to enrich the sensor metadata using the actual observed measurements and to extract useful meta-information from them. This paper proposes a novel approach to the characterization and extraction of semantic metadata through the analysis of raw sensor observations. The approach consists of using approximations to represent the raw sensor measurements, based on distributions of the observation slopes; building a classification scheme to automatically infer sensor metadata such as the type of observed property; and integrating the semantic analysis results with existing sensor network metadata.
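A rough sketch of what such slope-based inference could look like in Python; the bin edges, reference profiles, and property names are invented for illustration, not taken from the paper.

```python
import math

def slope_distribution(values, timestep=1.0, bins=(-1.0, -0.1, 0.1, 1.0)):
    """Histogram of successive observation slopes (hypothetical binning)."""
    slopes = [(b - a) / timestep for a, b in zip(values, values[1:])]
    counts = [0] * (len(bins) + 1)
    for s in slopes:
        i = sum(1 for edge in bins if s > edge)  # index of the bin s falls in
        counts[i] += 1
    total = len(slopes)
    return [c / total for c in counts]

# Hypothetical reference slope profiles per observed property
REFERENCE = {
    "temperature": [0.05, 0.25, 0.40, 0.25, 0.05],  # mostly slow drifts
    "wind_speed":  [0.20, 0.15, 0.30, 0.15, 0.20],  # frequent sharp changes
}

def infer_property(values):
    """Classify by nearest reference profile (Euclidean distance)."""
    dist = slope_distribution(values)
    return min(REFERENCE, key=lambda k: math.dist(dist, REFERENCE[k]))
```

A smoothly drifting series lands almost entirely in the central slope bin and is matched to the drift-dominated profile; an oscillating series fills the extreme bins and is matched to the bursty one.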

Relevance:

20.00%

Publisher:

Abstract:

Background: This project arose from the need of the professors of the department of Computer Languages and Systems and Software Engineering (DLSIIS) to develop multiple-choice exams in a more productive and comfortable way than the one they were using. The goal of this project is to develop an application that can be easily used by the professors of the DLSIIS when they need to create a new exam. The main problems of the previous creation process were the difficulty of searching for a question that meets specific conditions in the previous exam files, and the difficulty of editing exams because of the format of the text files employed. Results: The results shown in this document allow the reader to understand how the final application works and how it successfully addresses every customer need. The elements that help the reader understand the application are the structure of the application, the design of the different components, diagrams that show the workflow of the application, and some selected fragments of code. Conclusions: The goals stated in the application requirements are met. In addition, some thoughts are offered on the work performed during the development of the application and how it improved the author’s skills in web development.

Relevance:

20.00%

Publisher:

Abstract:

This paper examines the challenges facing the EU regarding data retention, particularly in the aftermath of the Digital Rights Ireland judgment of the Court of Justice of the European Union (CJEU) of April 2014, which found the Data Retention Directive 2006/24/EC to be invalid. It first offers a brief historical account of the Data Retention Directive and then moves to a detailed assessment of what the judgment means for determining the lawfulness of data retention from the perspective of the EU Charter of Fundamental Rights: what is wrong with the Data Retention Directive, and how would it need to be changed to comply with the right to respect for privacy? The paper also looks at the responses to the judgment from the European institutions and elsewhere, and presents a set of policy suggestions to the European institutions on the way forward. It is argued here that one of the main issues underlying the Digital Rights Ireland judgment has been the role of fundamental rights in the EU legal order, and in particular the extent to which the retention of metadata for law enforcement purposes is consistent with EU citizens’ rights to respect for privacy and to data protection. The paper offers three main recommendations to EU policy-makers: first, to give priority to a full and independent evaluation of the value of the Data Retention Directive; second, to assess the judgment’s implications for other large EU information systems and proposals that provide for the mass collection of metadata from innocent persons in the EU; and third, to adopt without delay the proposal for Directive COM(2012)10 dealing with data protection in the fields of police and judicial cooperation in criminal matters.

Relevance:

20.00%

Publisher:

Abstract:

The present data set provides an Excel file in a zip archive. The file lists 334 samples of the size-fractionated eukaryotic plankton community with a suite of associated metadata (Database W1). Note that while most samples represent the piconano- (0.8-5 µm, 73 samples), nano- (5-20 µm, 74 samples), micro- (20-180 µm, 70 samples), and meso- (180-2000 µm, 76 samples) planktonic size fractions, some represent different organismal size fractions: 0.2-3 µm (1 sample), 0.8-20 µm (6 samples), 0.8 µm - infinity (33 samples), and 3-20 µm (1 sample). The table contains the following fields: a unique sample sequence identifier; the sampling station identifier; the Tara Oceans sample identifier (TARA_xxxxxxxxxx); an INSDC accession number allowing retrieval of the raw sequence data from the major nucleotide databases (short read archives at EBI, NCBI, or DDBJ); the depth of sampling (Subsurface - SUR or Deep Chlorophyll Maximum - DCM); the targeted size range; the sequence template (either DNA, or WGA/DNA if the DNA extracted from the filters was Whole Genome Amplified); the latitude of the sampling event (decimal degrees); the longitude of the sampling event (decimal degrees); the time and date of the sampling event; the device used to collect the sample; the logsheet event corresponding to the sampling event; and the volume of water sampled (liters). Then follows information on the bioinformatics cleaning pipeline, shown in Figure W2 of the supplementary material of the related publication: the number of merged pairs present in the raw sequence file; the number of those sequences matching both primers; the number of sequences after quality-check filtering; the number of sequences after chimera removal; and finally the number of sequences after selecting only barcodes present in at least three copies in total and in at least two samples.
Finally, for each sample the following are given: the number of distinct sequences (metabarcodes); the number of OTUs; the average number of barcodes per OTU; the Shannon diversity index based on the barcodes of each sample (URL of the W4 dataset in PANGAEA); and the Shannon diversity index based on the OTUs (URL of the W5 dataset in PANGAEA).
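The Shannon diversity index listed in the last fields can be computed directly from barcode (or OTU) abundances; a small Python sketch with toy counts, not the actual Tara Oceans data:

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Toy sample: abundances of three distinct metabarcodes in one sample
barcode_counts = {"barcodeA": 50, "barcodeB": 30, "barcodeC": 20}
H = shannon_index(barcode_counts.values())
```

The same function applies per sample to barcode abundances (the W4 values) or to OTU abundances (the W5 values); higher H' means a more even, diverse community.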

Relevance:

20.00%

Publisher:

Abstract:

Because poor-quality semantic metadata can destroy the effectiveness of semantic web technology by preventing applications from producing accurate results, it is important to have frameworks that support its evaluation. However, no such framework has been developed to date. In this context, we proposed i) an evaluation reference model, SemRef, which sketches some fundamental principles for evaluating semantic metadata, and ii) an evaluation framework, SemEval, which provides a set of instruments to support the detection of quality problems and the collection of quality metrics for these problems. A preliminary case study of SemEval shows encouraging results.

Relevance:

20.00%

Publisher:

Abstract:

Because metadata that underlies semantic web applications is gathered from distributed and heterogeneous data sources, it is important to ensure its quality (e.g., by reducing duplicates, spelling errors, and ambiguities). However, current infrastructures that acquire and integrate semantic data have only marginally addressed the issue of metadata quality. In this paper we present our metadata acquisition infrastructure, ASDI, which pays special attention to ensuring that high-quality metadata is derived. Central to the architecture of ASDI is a verification engine that relies on several semantic web tools to check the quality of the derived data. We tested our prototype in the context of building a semantic web portal for our lab, KMi. An experimental evaluation comparing the automatically extracted data against manual annotations indicates that the verification engine enhances the quality of the extracted semantic metadata.
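One simple ingredient of such a verification pass can be illustrated in Python: mapping noisy extracted labels onto a trusted vocabulary to flag likely misspellings and collapse duplicates. This is a toy stand-in for ASDI's actual semantic-web-based checks, which the paper does not reduce to string matching.

```python
import difflib

def verify(entries, known_labels, cutoff=0.85):
    """Toy verification: map extracted labels onto a trusted vocabulary.

    Returns the deduplicated clean set plus a list of issues, where each
    issue pairs the raw entry with its suggested correction (or None if
    the entry is unknown/ambiguous).
    """
    clean, issues = set(), []
    for entry in entries:
        norm = entry.strip().lower()
        match = difflib.get_close_matches(norm, known_labels, n=1, cutoff=cutoff)
        if match:
            if match[0] != norm:
                issues.append((norm, match[0]))  # likely misspelling
            clean.add(match[0])  # duplicates collapse into one label
        else:
            issues.append((norm, None))  # unknown: needs human review
    return clean, issues

clean, issues = verify(["Semantic Web", "semantic webb", "Ontology"],
                       ["semantic web", "ontology"])
```

Here "Semantic Web" and "semantic webb" collapse to one clean label, with the misspelling reported as an issue rather than silently entering the portal.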

Relevance:

20.00%

Publisher:

Abstract:

Although the importance of dataset fitness-for-use evaluation and intercomparison is widely recognised within the GIS community, no practical tools have yet been developed to support such interrogation. GeoViQua aims to develop a GEO label which will visually summarise, and allow interrogation of, key informational aspects of geospatial datasets upon which users rely when selecting datasets for use. The proposed GEO label will be integrated in the Global Earth Observation System of Systems (GEOSS) and will be used as a value and trust indicator for datasets accessible through the GEO Portal. As envisioned, the GEO label will act as a decision support mechanism for dataset selection and thereby, it is hoped, improve user recognition of the quality of datasets. To date we have conducted three user studies to (1) identify the informational aspects of geospatial datasets upon which users rely when assessing dataset quality and trustworthiness, (2) elicit initial user views on a GEO label and its potential role, and (3) evaluate prototype label visualisations. Our first study revealed that, when evaluating the quality of data, users consider eight facets: dataset producer information; producer comments on dataset quality; dataset compliance with international standards; community advice; dataset ratings; links to dataset citations; expert value judgements; and quantitative quality information. Our second study confirmed the relevance of these facets in terms of the community-perceived function that a GEO label should fulfil: users and producers of geospatial data supported the concept of a GEO label that provides a drill-down interrogation facility covering all eight informational aspects. Consequently, we developed three prototype label visualisations and evaluated their comparative effectiveness and user preference via a third user study to arrive at a final graphical GEO label representation.
When integrated in the GEOSS, an individual GEO label will be provided for each dataset in the GEOSS clearinghouse (or other data portals and clearinghouses) based on its available quality information. Producer and feedback metadata documents are used to dynamically assess information availability and generate the GEO labels. The producer metadata document can be either a standard ISO-compliant metadata record supplied with the dataset or an extended GeoViQua-derived metadata record, and is used to assess the availability of a producer profile, producer comments, compliance with standards, citations, and quantitative quality information. GeoViQua is also currently developing a feedback server to collect and encode (as metadata records) user and producer feedback on datasets; these metadata records will be used to assess the availability of user comments, ratings, expert reviews, and user-supplied citations for a dataset. The GEO label will provide drill-down functionality which will allow a user to navigate to a GEO label page offering detailed quality information for its associated dataset. At this stage, we are developing the GEO label service that will provide GEO labels on demand based on supplied metadata records. In this presentation, we will provide a comprehensive overview of the GEO label development process, with specific emphasis on the GEO label implementation and integration into the GEOSS.
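The availability assessment described above can be sketched as a simple check over merged producer and feedback metadata; the facet keys below are shorthand for the eight facets from the first user study, not the official GeoViQua identifiers, and real records would be ISO metadata documents rather than dictionaries.

```python
# Shorthand keys for the 8 facets identified in the first user study
FACETS = ["producer_info", "producer_comments", "standards_compliance",
          "community_advice", "ratings", "citations",
          "expert_judgements", "quantitative_quality"]

def assess_label(producer_md, feedback_md):
    """Mark each facet available/unavailable for one dataset's GEO label.

    Producer metadata covers facets like producer info and standards
    compliance; the feedback record covers ratings, advice, and reviews.
    """
    merged = {**producer_md, **feedback_md}
    # A facet counts as available only if present with a non-empty value
    return {f: f in merged and bool(merged[f]) for f in FACETS}

availability = assess_label(
    {"producer_info": "ACME Mapping", "standards_compliance": "ISO 19115"},
    {"ratings": [4, 5], "community_advice": ""},
)
```

A label renderer would then draw each facet's icon as filled or greyed out from this availability map, with drill-down links only for the available ones.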