986 results for Metadata repository


Relevance:

20.00%

Publisher:

Abstract:

Reusing Learning Objects saves time and reduces development costs. Hence, achieving their interoperability in multiple contexts is essential when creating a Learning Object Repository. Meanwhile, novel web videoconference services have become available thanks to technological advances. Several benefits can be gained by integrating Learning Objects into these services: for instance, they allow sharing, co-viewing, and synchronized co-browsing of these resources while also providing real-time communication. However, considerable effort is needed to achieve interoperability with these systems. In this paper, we propose a model for integrating the resources of Learning Object Repositories into web videoconference services. We also describe the experience of applying this model in a real e-Learning scenario, achieving interoperability with two different web videoconference services.
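The integration model itself is not detailed in the abstract; the following is a minimal, entirely hypothetical sketch of the kind of mapping it implies: a Learning Object fetched from a repository is wrapped as a shared resource that a videoconference session can co-browse in a synchronized way. All names (`LearningObject`, `SessionResource`, `to_session_resource`) are illustrative, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class LearningObject:
    identifier: str   # repository identifier (e.g. an OAI identifier)
    title: str
    content_url: str  # URL of the renderable resource

@dataclass
class SessionResource:
    label: str
    url: str
    synchronized: bool  # co-browsing keeps all participants in step

def to_session_resource(lo: LearningObject) -> SessionResource:
    """Map a repository Learning Object onto a videoconference session resource."""
    return SessionResource(label=lo.title, url=lo.content_url, synchronized=True)

lo = LearningObject("oai:repo:42", "Intro to Metadata", "https://repo.example/42")
res = to_session_resource(lo)
```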


Relevance:

20.00%

Publisher:

Abstract:

Background: This project arose from the need of the professors of the Computer Languages and Systems and Software Engineering department (DLSIIS) to develop multiple-choice exams in a more productive and comfortable way than the one they currently use. The goal of this project is to develop an application that can be easily used by the professors of the DLSIIS when they need to create a new exam. The main problems of the previous creation process were the difficulty of searching for a question that meets specific conditions in the previous exam files, and the difficulty of editing exams given the format of the text files employed. Results: The results shown in this document allow the reader to understand how the final application works and how it successfully addresses every customer need. The elements that help the reader understand the application are its structure, the design of its components, diagrams showing the application's workflow, and selected fragments of code. Conclusions: The goals stated in the application requirements were ultimately met. In addition, some thoughts are offered on the work performed during development and on how it improved the author's skills in web development.
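The core need the abstract identifies is searching for questions that meet specific conditions, which the old flat text files made hard. A minimal sketch of such a filter over a structured question bank, assuming illustrative field names (`text`, `topic`, `used_in`) that are not from the project itself:

```python
# Hypothetical question bank; each question records the topic and the
# years in which it has already appeared in an exam.
questions = [
    {"text": "What is a compiler?", "topic": "languages", "used_in": [2011]},
    {"text": "Define coupling.", "topic": "software engineering", "used_in": []},
]

def find_questions(bank, topic=None, unused_only=False):
    """Filter the question bank by topic and/or prior usage."""
    hits = bank
    if topic is not None:
        hits = [q for q in hits if q["topic"] == topic]
    if unused_only:
        hits = [q for q in hits if not q["used_in"]]
    return hits

fresh = find_questions(questions, unused_only=True)
```

Storing questions as records rather than formatted text is what makes conditions like "unused and on this topic" a one-line query instead of a manual search.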

Relevance:

20.00%

Publisher:

Abstract:

Although a vast amount of life sciences data is generated in the form of images, most scientists still store images on extremely diverse and often incompatible storage media, without any metadata structure, and thus with no standard facility with which to conduct searches or analyses. Here we present a solution to unlock the value of scientific images. The Global Image Database (GID) is a web-based (http://www.gwer.ch/qv/gid/gid.htm) structured central repository for annotated scientific images. The GID was designed to manage images from a wide spectrum of imaging domains, ranging from microscopy to automated screening. The annotations in the GID define the source experiment of the images by describing who the authors of the experiment are, when the images were created, the biological origin of the experimental sample, and how the sample was processed for visualization. A collection of experimental imaging protocols provides details of the sample preparation and of the labeling or visualization procedures. In addition, entries in the GID reference these imaging protocols together with the probe sequences or antibody names used in labeling experiments. The GID annotations are searchable by field or globally. Query results are first shown as image thumbnail previews, enabling quick browsing prior to retrieval of the original-sized annotated image. Development of the GID continues, aiming to facilitate the management and exchange of image data in the scientific community and to create new query tools for mining image data.
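The distinction between field-restricted and global search over annotation records can be sketched as follows. The field names (`author`, `created`, `sample_origin`, `processing`, `domain`) are assumptions standing in for the who/when/origin/processing annotations the abstract describes, not the actual GID schema:

```python
# Two toy annotation records of the kind the GID stores per image.
records = [
    {"author": "A. Smith", "created": "1999-11-02",
     "sample_origin": "rat hippocampus", "processing": "GFP labeling",
     "domain": "microscopy"},
    {"author": "B. Jones", "created": "2000-03-15",
     "sample_origin": "HeLa cells", "processing": "antibody staining",
     "domain": "automated screening"},
]

def search_by_field(records, field, term):
    """Field-restricted search: match the term in one annotation field only."""
    return [r for r in records if term.lower() in r.get(field, "").lower()]

def search_global(records, term):
    """Global search: match the term in any annotation field."""
    return [r for r in records
            if any(term.lower() in v.lower() for v in r.values())]

hits = search_by_field(records, "processing", "antibody")
```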

Relevance:

20.00%

Publisher:

Abstract:

Ecological models have become a key component of this science. Knowledge is generated largely through more or less complex analytical processes applied to diverse data sets. Yet much of the knowledge needed to design and implement these models is not accessible to the scientific community. We propose the creation of software tools to document, store, and execute ecological models and workflows. Such tools (model repositories) are already being developed in other disciplines, such as molecular biology and the Earth sciences. We present a model repository (ModeleR) developed in the context of the Global Change Monitoring Observatory of Sierra Nevada (Granada-Almería). We believe that model repositories will foster cooperation among scientists, improving the creation of relevant knowledge that can be transferred to decision makers.

Relevance:

20.00%

Publisher:

Abstract:

Introduction – Building on a previous project of the University of Lisbon (UL) – a bibliometric benchmarking analysis of the University of Lisbon for the period 2000-2009 – a database was created to support research information (ULSR). However, this system was not integrated with other existing systems at the University, such as the UL Libraries Integrated System (SIBUL) and the Repository of the University of Lisbon (Repositório.UL). Since libraries were called to be part of the process, the Faculty of Pharmacy Library team felt it was very important to get all systems connected or, at least, to use that data in the library systems. Objectives – The main goals were to centralize all the scientific research produced at the Faculty of Pharmacy, make it available to the entire Faculty, involve researchers and the library team, and capitalize on and reinforce teamwork by integrating several distinct projects and reducing redundant tasks. Methods – Our starting point was the data collection imported from the ISI Web of Science (WoS), for the period 2000-2009, into ULSR. All researchers and publications indexed in WoS were identified. A first validation identified all researchers and their affiliations (university, faculty, department and unit); the final validation was done by each researcher. In a second round, covering the same period, all Faculty of Pharmacy researchers identified their published scientific work in other databases/resources (not WoS). For our strategy it was important to gather all the references, and essential to relate them to the corresponding digital objects. Each previously identified researcher was asked to register all references to their 'not WoS' published works in ULSR. At the same time, they were to submit the PDF files (for both WoS and not WoS works) to a personal area of the web server.
This effort enabled a more reliable validation and allowed us to prepare the data and metadata for import into the Repository and the Library Catalogue. Results – 558 documents related to 122 researchers were added to ULSR. 1378 bibliographic records (WoS + not WoS) were converted into UNIMARC and Dublin Core formats. All records were integrated into the catalogue and the repository. Conclusions – Although different strategies could be adopted by each library team, we intend to share this experience and give some tips on what could be done, and on how the Faculty of Pharmacy created and implemented its strategy.
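The Dublin Core side of the conversion step mentioned above can be sketched minimally as below, using only a few core elements; the real ULSR mapping (and the UNIMARC side) would be richer, and the `record` field names are illustrative:

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def to_dublin_core(record: dict) -> ET.Element:
    """Wrap a flat bibliographic record in Dublin Core elements."""
    root = ET.Element("metadata")
    for dc_field in ("title", "creator", "date", "identifier"):
        if dc_field in record:
            el = ET.SubElement(root, f"{{{DC_NS}}}{dc_field}")
            el.text = record[dc_field]
    return root

record = {"title": "Example article", "creator": "Doe, J.",
          "date": "2005", "identifier": "doi:10.0000/example"}
xml_bytes = ET.tostring(to_dublin_core(record), encoding="utf-8")
```

Emitting both UNIMARC and Dublin Core from one validated record set is what removes the redundancy the abstract mentions: the catalogue and the repository consume different serializations of the same data.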

Relevance:

20.00%

Publisher:

Abstract:

This paper examines the challenges facing the EU regarding data retention, particularly in the aftermath of the Digital Rights Ireland judgment of the Court of Justice of the European Union (CJEU) of April 2014, which found the Data Retention Directive (2006/24/EC) to be invalid. It first offers a brief historical account of the Data Retention Directive and then moves to a detailed assessment of what the judgment means for determining the lawfulness of data retention from the perspective of the EU Charter of Fundamental Rights: what is wrong with the Data Retention Directive, and how would it need to be changed to comply with the right to respect for privacy? The paper also looks at the responses to the judgment from the European institutions and elsewhere, and presents a set of policy suggestions to the European institutions on the way forward. It is argued here that one of the main issues underlying the Digital Rights Ireland judgment has been the role of fundamental rights in the EU legal order, in particular the extent to which the retention of metadata for law enforcement purposes is consistent with EU citizens' right to respect for privacy and to data protection. The paper offers three main recommendations to EU policy-makers: first, to give priority to a full and independent evaluation of the value of the Data Retention Directive; second, to assess the judgment's implications for other large EU information systems and proposals that provide for the mass collection of metadata from innocent persons in the EU; and third, to adopt without delay the proposed Directive COM(2012)10 dealing with data protection in the fields of police and judicial cooperation in criminal matters.

Relevance:

20.00%

Publisher:

Abstract:

The present data set provides an Excel file in a zip archive. The file lists 334 samples of the size-fractionated eukaryotic plankton community with a suite of associated metadata (Database W1). Note that while most samples represent the piconano- (0.8-5 µm, 73 samples), nano- (5-20 µm, 74 samples), micro- (20-180 µm, 70 samples), and meso- (180-2000 µm, 76 samples) planktonic size fractions, some represent different organismal size fractions: 0.2-3 µm (1 sample), 0.8-20 µm (6 samples), 0.8 µm - infinity (33 samples), and 3-20 µm (1 sample). The table contains the following fields: a unique sample sequence identifier; the sampling station identifier; the Tara Oceans sample identifier (TARA_xxxxxxxxxx); an INSDC accession number allowing retrieval of the raw sequence data from the major nucleotide databases (short read archives at EBI, NCBI or DDBJ); the depth of sampling (Subsurface - SUR or Deep Chlorophyll Maximum - DCM); the targeted size range; the sequence template (either DNA, or WGA/DNA if the DNA extracted from the filters was Whole Genome Amplified); the latitude of the sampling event (decimal degrees); the longitude of the sampling event (decimal degrees); the time and date of the sampling event; the device used to collect the sample; the logsheet event corresponding to the sampling event; and the volume of water sampled (liters). Then follows information on the bioinformatics cleaning pipeline shown in Figure W2 of the supplementary literature publication: the number of merged pairs present in the raw sequence file; the number of those sequences matching both primers; the number of sequences after quality-check filtering; the number of sequences after chimera removal; and finally the number of sequences after selecting only barcodes present in at least three copies in total and in at least two samples.
Finally, the following are given for each sample: the number of distinct sequences (metabarcodes); the number of OTUs; the average number of barcodes per OTU; the Shannon diversity index based on barcodes for each sample (URL of the W4 dataset in PANGAEA); and the Shannon diversity index based on OTUs (URL of the W5 dataset in PANGAEA).
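The per-sample bookkeeping described above has two simple invariants worth making explicit: each cleaning step can only remove sequences, and the diversity indices are standard Shannon indices over abundances. A sketch under assumed field names (the actual Database W1 column headers may differ):

```python
import math

sample = {
    "sample_id": "TARA_X000000001",  # placeholder identifier, not a real one
    "size_fraction_um": (0.8, 5),    # piconano fraction
    # counts in pipeline order: merged pairs -> both primers matched ->
    # quality filtered -> chimeras removed -> barcodes in >=3 copies
    # and >=2 samples
    "pipeline_counts": [120000, 115000, 110000, 108000, 100000],
}

def pipeline_is_consistent(counts):
    """Each cleaning step can only remove sequences, never add them."""
    return all(a >= b for a, b in zip(counts, counts[1:]))

def shannon(abundances):
    """Shannon diversity index H' = -sum(p_i * ln p_i) over abundances."""
    total = sum(abundances)
    return -sum((n / total) * math.log(n / total) for n in abundances if n)

ok = pipeline_is_consistent(sample["pipeline_counts"])
h = shannon([50, 30, 20])  # e.g. three barcodes with these read counts
```

The same `shannon` computation applies whether the abundances are counts per barcode (the W4 index) or counts per OTU (the W5 index).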

Relevance:

20.00%

Publisher:

Abstract:

For 3-4 voices.

Relevance:

20.00%

Publisher:

Abstract:

Published: New York : Leavitt, Trow and Co., 1845- ; The Proprietors, -1850.

Relevance:

20.00%

Publisher:

Abstract:

Issued in 1889 under title "The little giant cyclopedia", and in 1893 under title "The marvel cyclopedia". Also issued under titles "The nutshell cyclopedia" and "Armstrong's treasury of ready reference".

Relevance:

20.00%

Publisher:

Abstract:

Attributed to Henry C. Blinn. Cf. MacLean, J.P. Shaker lit.