6 results for Semantic Publishing, Linked Data, Bibliometrics, Informetrics, Data Retrieval, Citations


Relevance:

100.00%

Publisher:

Abstract:

Background: Two distinct trends are emerging with respect to how data is shared, collected, and analyzed within the bioinformatics community. First, Linked Data, exposed as SPARQL endpoints, promises to make data easier to collect and integrate by moving towards the harmonization of data syntax, descriptive vocabularies, and identifiers, as well as providing a standardized mechanism for data access. Second, Web Services, often linked together into workflows, normalize data access and create transparent, reproducible scientific methodologies that can, in principle, be re-used and customized to suit new scientific questions. Constructing queries that traverse semantically rich Linked Data requires substantial expertise, yet traditional RESTful or SOAP Web Services cannot adequately describe the content of a SPARQL endpoint. We propose that content-driven Semantic Web Services can enable facile discovery of Linked Data, independent of their location. Results: We use a well-curated Linked Dataset, OpenLifeData, and utilize its descriptive metadata to automatically configure a series of more than 22,000 Semantic Web Services that expose all of its content via the SADI set of design principles. The OpenLifeData SADI services are discoverable via queries to the SHARE registry and easy to integrate into new or existing bioinformatics workflows and analytical pipelines. We demonstrate the utility of this system through a comparison of Web Service-mediated data access with traditional SPARQL, and note that this approach not only simplifies data retrieval but simultaneously provides protection against resource-intensive queries. Conclusions: We show, through a variety of different clients and examples of varying complexity, that data from the myriad OpenLifeData endpoints can be recovered without any need for prior knowledge of the content or structure of the SPARQL endpoints. We also demonstrate that, via clients such as SHARE, the complexity of federated SPARQL queries is dramatically reduced.
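
A minimal sketch of the "traditional SPARQL" side of the comparison drawn above: the client must already know the endpoint's vocabulary and graph structure, which is exactly the expertise that content-driven SADI-style services are meant to remove. The endpoint URL and the class/predicate URIs below are illustrative assumptions, not taken from the paper.

```python
# Sketch of direct SPARQL access (assumed endpoint URL and vocabulary URIs).
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical OpenLifeData-style endpoint; substitute a real SPARQL endpoint URL.
endpoint = SPARQLWrapper("http://example.org/openlifedata/sparql")

# The query hard-codes graph structure that a content-driven Semantic Web
# Service would instead advertise in its own descriptive metadata.
endpoint.setQuery("""
    SELECT ?drug ?label WHERE {
        ?drug a <http://example.org/vocabulary/Drug> ;
              <http://www.w3.org/2000/01/rdf-schema#label> ?label .
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["drug"]["value"], row["label"]["value"])
```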

Relevance:

100.00%

Publisher:

Abstract:

The tool aims to integrate publicly accessible data from the UPV/EHU and to build bridges between different information repositories in the academic domain. To do so, we take advantage of Linked Data, "the way the Semantic Web links the different data distributed across the Web", and the standards it defines. The repositories chosen for this work were ADDI, Bilatu, the pages of all the UPV/EHU centres on the Gipuzkoa Campus, and DBLP. Most of the application's functionality is generic, so it could easily be applied to repositories of other institutions. The system is a prototype that demonstrates the feasibility of the integration goal and is open to the incorporation of further datasets, following the same methodology used in the development of this project.
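
As a rough illustration of the kind of bridge the tool builds between repositories, the sketch below links a publication record in an institutional repository to an author entry in DBLP using standard vocabularies. All URIs and property choices here are hypothetical placeholders, not taken from the project.

```python
# Minimal sketch (hypothetical URIs) of bridging two academic repositories
# with Linked Data standards, in the spirit of the tool described above.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, FOAF

g = Graph()

# Placeholder identifiers: a record in an institutional repository and the
# same author's entry in DBLP.
repo_record = URIRef("https://example.org/addi/record/0001")
dblp_author = URIRef("https://dblp.org/pid/00/0000")

g.add((repo_record, DCTERMS.title, Literal("Linked Data integration prototype")))
g.add((repo_record, DCTERMS.creator, dblp_author))
g.add((dblp_author, FOAF.name, Literal("Jane Doe")))

# Serializing as Turtle shows the cross-repository link explicitly.
print(g.serialize(format="turtle"))
```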

Relevance:

100.00%

Publisher:

Abstract:

In this project, framed within the field of the Semantic Web, a system called littera has been developed that enables the federation of linked data repositories. The system allows repositories in the literature domain to be queried in a unified way through a web interface, and also supports the collaborative construction of a new linked open data repository.
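
Federation of linked data repositories of this kind is commonly done with SPARQL's SERVICE clause, which lets one query span several endpoints. The sketch below shows one such federated query issued from Python; both endpoint URLs and property URIs are placeholders, since the abstract does not name littera's actual deposits.

```python
# Minimal sketch (placeholder endpoints) of federating two linked-data deposits
# in a single query, as a system like littera does behind its web interface.
from SPARQLWrapper import SPARQLWrapper, JSON

local = SPARQLWrapper("http://example.org/littera/sparql")  # hypothetical deposit
local.setQuery("""
    SELECT ?work ?title ?author WHERE {
        ?work <http://purl.org/dc/terms/title> ?title .
        SERVICE <http://example.org/remote-literature/sparql> {
            ?work <http://purl.org/dc/terms/creator> ?author .
        }
    } LIMIT 10
""")
local.setReturnFormat(JSON)

for row in local.query().convert()["results"]["bindings"]:
    print(row["work"]["value"], row["title"]["value"], row["author"]["value"])
```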

Relevance:

50.00%

Publisher:

Abstract:

This paper deals with the convergence of a remote iterative learning control system subject to data dropouts. The system is composed of a set of discrete-time multiple-input multiple-output linear models, each with its corresponding actuator device and sensor. Each actuator applies the input signal vector to its corresponding model at the sampling instants, and the sensor measures the output signal vector. The iterative learning law is processed in a controller located far away from the models, so the control signal vector has to be transmitted from the controller to the actuators through transmission channels. The law uses the measurements of each model to generate the input vector to be applied to its subsequent model, so the measurements of the models have to be transmitted from the sensors to the controller. All transmissions are subject to failures, which are described by a binary sequence taking the value 1 or 0. A dropout compensation technique is used to replace the data lost in the transmission processes. Convergence to zero of the errors between the output signal vector and a reference one is achieved as the number of models tends to infinity.
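
A compact numerical sketch of the scheme described above, reduced to a single-input single-output discrete-time model: a P-type learning law updates the input from one trial to the next, transmitted measurements are lost with some probability, and lost samples are replaced by the last values that reached the controller (a simple hold-type dropout compensation). The plant, gains, and dropout rate are illustrative choices, not the paper's.

```python
# Minimal sketch (illustrative model and gains) of iterative learning control
# with data dropouts and hold-type compensation of the lost measurements.
import numpy as np

rng = np.random.default_rng(0)

N = 50                       # samples per trial
a, b = 0.8, 1.0              # first-order plant: y(t+1) = a*y(t) + b*u(t)
gamma = 0.5                  # P-type learning gain (|1 - gamma*b| < 1 for convergence)
p_loss = 0.2                 # probability that a transmitted sample is dropped
trials = 30

t = np.arange(N + 1)
r = np.sin(2 * np.pi * t / N)          # reference trajectory
u = np.zeros(N)                        # input applied in the current trial
y_received = np.zeros(N + 1)           # last measurements that reached the controller

for k in range(trials):
    # Simulate one trial of the plant with the current input.
    y = np.zeros(N + 1)
    for i in range(N):
        y[i + 1] = a * y[i] + b * u[i]

    # Transmit measurements; dropped samples keep the previously received value.
    delivered = rng.random(N + 1) > p_loss
    y_received = np.where(delivered, y, y_received)

    # P-type iterative learning law on the (possibly compensated) error.
    e = r - y_received
    u = u + gamma * e[1:]

    print(f"trial {k:2d}  max|error| = {np.max(np.abs(r - y)):.4f}")
```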

Relevance:

50.00%

Publisher:

Abstract:

In the problem of one-class classification (OCC), one of the classes, the target class, has to be distinguished from all other possible objects, considered as non-targets. This situation arises in many biomedical problems, for example in diagnosis, image-based tumor recognition, or the analysis of electrocardiogram data. In this paper an approach to OCC based on a typicality test is experimentally compared with reference state-of-the-art OCC techniques (Gaussian, mixture of Gaussians, naive Parzen, Parzen, and support vector data description) using biomedical data sets. We evaluate the procedures on twelve experimental data sets whose variables are not necessarily continuous. As there are few benchmark data sets for one-class classification, all data sets considered in the evaluation have multiple classes; each class in turn is considered as the target class and the units in the other classes are treated as new units to be classified. The results of the comparison show the good performance of the typicality approach, which is applicable to high-dimensional data; it is worth mentioning that it can be used for any kind of data (continuous, discrete, or nominal), whereas applying the state-of-the-art approaches is not straightforward when nominal variables are present.
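
A small sketch of the evaluation protocol described above: each class of a multi-class data set is taken in turn as the target class, a one-class model is fitted on it alone, and every unit is then scored as accepted or rejected. The one-class model used here (scikit-learn's OneClassSVM) is only a stand-in for the methods compared in the paper; it is not the typicality test itself, and the data set is an arbitrary multi-class example.

```python
# Minimal sketch of the "each class in turn is the target" OCC protocol,
# using scikit-learn's OneClassSVM as a stand-in one-class model.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import OneClassSVM

X, y = load_iris(return_X_y=True)

for target in np.unique(y):
    # Fit the one-class model on units of the target class only.
    model = OneClassSVM(gamma="scale", nu=0.1).fit(X[y == target])

    # Score every unit: +1 means accepted as target, -1 means rejected.
    pred = model.predict(X)
    tpr = np.mean(pred[y == target] == 1)   # target units correctly accepted
    fpr = np.mean(pred[y != target] == 1)   # non-target units wrongly accepted
    print(f"target class {target}: acceptance on targets {tpr:.2f}, "
          f"false acceptance on non-targets {fpr:.2f}")
```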