57 results for Linked Disaccharides
Abstract:
The present study investigates the potential use of a non-catalyzed, water-soluble blocked polyurethane prepolymer (PUP) as a bifunctional cross-linker for collagenous scaffolds. The effect of concentration (5, 10, 15 and 20%), time (4, 6, 12 and 24 h), medium volume (50, 100, 200 and 300%) and pH (7.4, 8.2, 9 and 10) on the stability, microstructure and tensile mechanical behavior of acellular pericardial matrix was studied. The cross-linking index increased up to 81%, while the denaturation temperature increased by up to 12 °C after PUP cross-linking. The PUP-treated scaffold resisted collagenase degradation (0.167 ± 0.14 mmol/g of liberated amine groups vs. 598 ± 60 mmol/g for the non-cross-linked matrix). The collagen fiber network was coated with PUP, and the viscoelastic properties were altered after cross-linking. Treating the pericardial scaffold with PUP allows (i) different cross-linking densities depending on the process parameters and (ii) tensile properties similar to those obtained with the glutaraldehyde method.
Abstract:
Purpose – Linked data is gaining great interest in the cultural heritage domain as a new way of publishing, sharing and consuming data. The paper aims to provide a detailed method, and a tool (MARiMbA), for publishing linked data out of library catalogues in the MARC 21 format, along with their application to the catalogue of the National Library of Spain in the datos.bne.es project. Design/methodology/approach – First, the background of the case study is introduced. Second, the method and the process of its application are described. Third, each of the activities and tasks is defined and a discussion of their application to the case study is provided. Findings – The paper shows that the FRBR model can be applied to MARC 21 records following linked data best practices, that librarians can successfully participate in the process of linked data generation following a systematic method, and that the quality of the data sources can be improved as a result of the process. Originality/value – The paper proposes a detailed method for publishing and linking linked data from MARC 21 records, provides practical examples, and discusses the main issues found in its application to a real case. It also proposes the integration of a data curation activity and the participation of librarians in the linked data generation process.
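As an illustration of the kind of transformation the abstract describes (not taken from the paper, and not the actual datos.bne.es ontology or URI scheme), the following Python sketch maps a couple of hard-coded MARC 21-style field values to FRBR-flavoured RDF with rdflib; the field labels, identifiers and the choice of FRBR vocabulary are assumptions made only for the example.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, FOAF, RDF

# Hypothetical field values extracted from one MARC 21 bibliographic record.
marc_fields = {"100a": "Cervantes Saavedra, Miguel de",
               "245a": "Don Quijote de la Mancha"}

FRBR = Namespace("http://purl.org/vocab/frbr/core#")  # illustrative FRBR vocabulary
BASE = Namespace("http://example.org/bne-like/")      # made-up base URIs, not datos.bne.es

g = Graph()
g.bind("frbr", FRBR)
g.bind("dcterms", DCTERMS)
g.bind("foaf", FOAF)

work, person = BASE["work0001"], BASE["person0001"]   # identifiers would come from the catalogue

g.add((work, RDF.type, FRBR.Work))
g.add((work, DCTERMS.title, Literal(marc_fields["245a"], lang="es")))
g.add((work, DCTERMS.creator, person))
g.add((person, RDF.type, FOAF.Person))
g.add((person, FOAF.name, Literal(marc_fields["100a"])))

print(g.serialize(format="turtle"))
```

Running the script prints Turtle, which is the kind of output that can then be curated, linked and published following linked data best practices.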
Abstract:
In this paper we present a revisited classification of term variation in the light of the Linked Data initiative. Linked Data refers to a set of best practices for publishing and connecting structured data on the Web with the idea of transforming it into a global graph. One of the crucial steps of this initiative is the linking step, in which datasets in one or more languages need to be linked or connected with one another. We claim that the linking process would be facilitated if datasets were enriched with lexical and terminological information. With that final aim in mind, we propose a classification of lexical, terminological and semantic variants that will become part of a model of linguistic descriptions currently being proposed within the framework of the W3C Ontology-Lexica Community Group to enrich ontologies and Linked Data vocabularies. Examples of modeling solutions for the different types of variants are also provided.
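Purely as an illustrative sketch (not taken from the paper), the snippet below shows one possible modelling solution for a simple orthographic variant using the W3C OntoLex-Lemon vocabulary that grew out of the Ontology-Lexica Community Group; the example entry and the namespace are invented.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
EX = Namespace("http://example.org/lexicon/")  # illustrative namespace

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("ex", EX)

# One modelling solution for an orthographic variant: a single lexical entry
# whose canonical form and other form carry the two spellings.
entry, canonical, variant = EX["colour"], EX["colour-form"], EX["color-form"]
g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, ONTOLEX.canonicalForm, canonical))
g.add((canonical, ONTOLEX.writtenRep, Literal("colour", lang="en-GB")))
g.add((entry, ONTOLEX.otherForm, variant))
g.add((variant, ONTOLEX.writtenRep, Literal("color", lang="en-US")))

print(g.serialize(format="turtle"))
```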
Resumo:
In this article, we argue that there is a growing number of linked datasets in different natural languages, and that there is a need for guidelines and mechanisms to ensure the quality and organic growth of this emerging multilingual data network. However, we have little knowledge regarding the actual state of this data network, its current practices, and the open challenges that it poses. Questions regarding the distribution of natural languages, the links that are established across data in different languages, or how linguistic features are represented, remain mostly unanswered. Addressing these and other language-related issues can help to identify existing problems, propose new mechanisms and guidelines or adapt the ones in use for publishing linked data including language-related features, and, ultimately, provide metrics to evaluate quality aspects. In this article we review, discuss, and extend current guidelines for publishing linked data by focusing on those methods, techniques and tools that can help RDF publishers to cope with language barriers. Whenever possible, we will illustrate and discuss each of these guidelines, methods, and tools on the basis of practical examples that we have encountered in the publication of the datos.bne.es dataset.
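A minimal, illustrative example of one of the language-related practices such guidelines cover, namely attaching language-tagged labels so that multilingual consumers can pick the label they need; the resource and namespace are invented for the sketch.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/resource/")  # illustrative namespace

g = Graph()
g.bind("ex", EX)

# Language-tagged labels (BCP 47 tags) on a single resource.
resource = EX["Madrid"]
for text, tag in [("Madrid", "es"), ("Madrid", "en"), ("Мадрид", "ru"), ("マドリード", "ja")]:
    g.add((resource, RDFS.label, Literal(text, lang=tag)))

# A consumer can then select the label in the language it needs.
spanish = [o for o in g.objects(resource, RDFS.label) if o.language == "es"]
print(spanish)
```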
Abstract:
In this paper we describe the specification of a model for the semantically interoperable representation of language resources for sentiment analysis. The model integrates "lemon", an RDF-based model for the specification of ontology-lexica (Buitelaar et al., 2009), which is increasingly used for the representation of language resources as Linked Data, with Marl, an RDF-based model for the representation of sentiment annotations (Westerski et al., 2011; Sánchez-Rada et al., 2013).
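The following sketch is not taken from the paper; it only illustrates, with rdflib, the general idea of attaching Marl-style sentiment information to the sense of a lexical entry. The OntoLex-Lemon vocabulary is used here in place of the original lemon model, and the Marl namespace and property names are assumptions that may need adjusting to the published vocabulary.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
MARL = Namespace("http://www.gsi.dit.upm.es/ontologies/marl/ns#")  # assumed Marl namespace
EX = Namespace("http://example.org/sentiment-lexicon/")            # illustrative namespace

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("marl", MARL)
g.bind("ex", EX)

entry, form, sense = EX["excellent"], EX["excellent-form"], EX["excellent-sense"]
g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, ONTOLEX.canonicalForm, form))
g.add((form, ONTOLEX.writtenRep, Literal("excellent", lang="en")))
g.add((entry, ONTOLEX.sense, sense))

# Sentiment information attached to the sense, Marl-style.
g.add((sense, MARL.hasPolarity, MARL.Positive))
g.add((sense, MARL.polarityValue, Literal(0.9)))

print(g.serialize(format="turtle"))
```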
Abstract:
Rights expression languages declare the permitted and prohibited actions that may be performed on a resource. In this work, six rights expression languages are compared, abstracting their commonalities and outlining their underlying pattern. Linked Data, which can be protected by intellectual property laws or have its access restricted by an access control system, can be the asset referred to in rights expressions. The requirements for a pattern for licensing Linked Data resources are listed.
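As a rough, hedged illustration of what a rights expression over a Linked Data asset can look like (not the pattern proposed in the work itself), the sketch below builds a small ODRL 2 offer whose target is a dataset; the dataset URI is invented and the permitted action is chosen arbitrarily.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

ODRL = Namespace("http://www.w3.org/ns/odrl/2/")
EX = Namespace("http://example.org/dataset/")  # illustrative asset namespace

g = Graph()
g.bind("odrl", ODRL)
g.bind("ex", EX)

dataset = EX["my-linked-dataset"]
policy, permission = EX["policy-1"], EX["permission-1"]

# An ODRL offer granting permission to distribute the dataset.
g.add((policy, RDF.type, ODRL.Offer))
g.add((policy, ODRL.permission, permission))
g.add((permission, ODRL.target, dataset))
g.add((permission, ODRL.action, ODRL.distribute))

print(g.serialize(format="turtle"))
```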
Abstract:
In recent years the web has undergone a drastic transformation, mainly because of its popularization and the huge amount of information it holds. These factors have driven the shift from the so-called Web of Documents to the Semantic Web, where all the information is related to other information. The main advantages of linked information lie in its ease of reuse, its accessibility and its availability to be found by users. The aim of this work is to highlight the usefulness of linked data applied to the geographic domain and to show how it can be used today. To that end, spatial linked data from several sources has been exploited through external servers, or SPARQL endpoints. In addition, a private server capable of providing linked information stored on a personal computer has been used. The exploitation of linked information has been implemented in a web application written in JavaScript, trying to completely shield the user from the internal data handling of the application. The application also includes some modules and options able to interact with the queries sent to the servers, resulting in a more intuitive and pleasant environment for the user.
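A small Python sketch (not part of the original work, which used a JavaScript web application) showing how spatial linked data can be retrieved from a public SPARQL endpoint; DBpedia is used here only as a convenient example endpoint, and the results depend on the live service.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Public endpoint used purely as an example.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX dbr:  <http://dbpedia.org/resource/>
PREFIX geo:  <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?city ?lat ?long WHERE {
  ?city a dbo:City ;
        dbo:country dbr:Spain ;
        geo:lat ?lat ;
        geo:long ?long .
} LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["city"]["value"], row["lat"]["value"], row["long"]["value"])
```

A private endpoint serving data stored on a personal computer could be queried in exactly the same way by changing the endpoint URL.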
Abstract:
This work seeks to understand the shift that has taken place, both in design processes and in construction, between contemporary architecture and modern architecture, through a series of concepts applied to the corridor street. For this complex comparative process, the corridor street of Steven Holl's Linked Hybrid (Híbrido Enlazado) complex has been selected as a new paradigm of the contemporary corridor street.
Abstract:
Sentiment analysis has recently gained popularity in the financial domain thanks to its capability to predict the stock market based on the wisdom of the crowds. Nevertheless, current sentiment indicators are still silos that cannot be combined to gain better insight into the mood of different communities. In this article we propose a Linked Data approach for modelling sentiment and emotions about financial entities. We aim at integrating sentiment information from different communities or providers, complementing existing initiatives such as FIBO. The approach has been validated in the semantic annotation of tweets about several stocks in the Spanish stock market, including their sentiment information.
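As a hedged illustration only (not the validated annotation scheme of the article), the sketch below expresses a positive opinion extracted from a tweet about a stock using Marl-style properties; the tweet and stock identifiers are invented, and the Marl namespace and property names are assumptions.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

MARL = Namespace("http://www.gsi.dit.upm.es/ontologies/marl/ns#")  # assumed Marl namespace
EX = Namespace("http://example.org/finance/")                      # illustrative namespace

g = Graph()
g.bind("marl", MARL)
g.bind("ex", EX)

tweet = EX["tweet/123456"]     # hypothetical tweet resource
stock = EX["stock/ACME"]       # hypothetical identifier for a listed company
opinion = EX["opinion/1"]

g.add((opinion, RDF.type, MARL.Opinion))
g.add((opinion, MARL.extractedFrom, tweet))
g.add((opinion, MARL.describesObject, stock))
g.add((opinion, MARL.hasPolarity, MARL.Positive))
g.add((opinion, MARL.polarityValue, Literal(0.75)))

print(g.serialize(format="turtle"))
```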
Abstract:
We present a methodology for legacy language resource adaptation that generates domain-specific sentiment lexicons organized around domain entities described with lexical information and sentiment words described in the context of these entities. We explain the steps of the methodology and give a working example of our initial results. The resulting lexicons are modelled as Linked Data resources using established formats for Linguistic Linked Data (lemon, NIF) and for linked sentiment expressions (Marl), thereby contributing and linking to existing language resources in the Linguistic Linked Open Data cloud.
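The snippet below is a made-up sketch rather than output of the methodology: it shows a sentiment word whose polarity is recorded in the context of a domain entity, combining OntoLex-Lemon with assumed Marl properties. The hotel domain, the ex:context property and all the namespaces involving example.org are hypothetical placeholders.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
MARL = Namespace("http://www.gsi.dit.upm.es/ontologies/marl/ns#")  # assumed Marl namespace
EX = Namespace("http://example.org/hotel-domain/")                 # illustrative domain namespace

g = Graph()
for prefix, ns in [("ontolex", ONTOLEX), ("marl", MARL), ("ex", EX)]:
    g.bind(prefix, ns)

# Domain entity around which the lexicon is organized.
entity = EX["Room"]

# Sentiment word described in the context of that entity.
entry, form, sense = EX["noisy"], EX["noisy-form"], EX["noisy-sense"]
g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, ONTOLEX.canonicalForm, form))
g.add((form, ONTOLEX.writtenRep, Literal("noisy", lang="en")))
g.add((entry, ONTOLEX.sense, sense))

# "noisy" is negative when said about a room; ex:context is a hypothetical
# property standing in for the context link of the methodology.
g.add((sense, EX.context, entity))
g.add((sense, MARL.hasPolarity, MARL.Negative))
g.add((sense, MARL.polarityValue, Literal(-0.8)))

print(g.serialize(format="turtle"))
```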
Abstract:
The application of Linked Data technology to the publication of linguistic data promises to facilitate the interoperability of these data and has led to the emergence of the so-called Linguistic Linked Data Cloud (LLD), in which linguistic data is published following the Linked Data principles. Three essential issues need to be addressed for such data to be easily exploitable by language technologies: i) appropriate machine-readable licensing information is needed for each dataset, ii) minimum quality standards for Linguistic Linked Data need to be defined, and iii) appropriate vocabularies for publishing Linguistic Linked Data resources are needed. We propose the notion of Licensed Linguistic Linked Data (3LD), in which different licensing models might co-exist, from totally open to more restrictive licenses, through to completely closed datasets.
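A minimal example of the first issue, machine-readable licensing information per dataset, assuming a VoID dataset description with a dcterms:license link; the dataset name and the licence chosen are purely illustrative.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

VOID = Namespace("http://rdfs.org/ns/void#")
EX = Namespace("http://example.org/llod/")  # illustrative namespace

g = Graph()
g.bind("void", VOID)
g.bind("dcterms", DCTERMS)
g.bind("ex", EX)

dataset = EX["my-lexicon-dataset"]
g.add((dataset, RDF.type, VOID.Dataset))
g.add((dataset, DCTERMS.title, Literal("Example lexicon", lang="en")))
# Machine-readable licensing information attached to the dataset description.
g.add((dataset, DCTERMS.license,
       URIRef("http://creativecommons.org/licenses/by-sa/4.0/")))

print(g.serialize(format="turtle"))
```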
Abstract:
Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Having access to the corresponding explicit or implicit translation relations between such entries may be of great interest for many NLP-based applications. By using Semantic Web techniques, translations can be made available on the Web to be consumed by other (semantically enabled) resources in a direct manner, without relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents some core information associated with term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and for software agents (via a SPARQL endpoint).
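The sketch below is not the actual Terminesp data nor the exact translation module of the paper: it reifies a translation between a Spanish and an English lexical sense using the OntoLex vartrans vocabulary (the W3C successor of the lemon-based module), with invented URIs.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
VARTRANS = Namespace("http://www.w3.org/ns/lemon/vartrans#")
EX = Namespace("http://example.org/terminology/")  # illustrative namespace, not the real dataset URIs

g = Graph()
for prefix, ns in [("ontolex", ONTOLEX), ("vartrans", VARTRANS), ("ex", EX)]:
    g.bind(prefix, ns)

def entry_with_sense(name, written, lang):
    """Create a lexical entry with one form and one sense, returning the sense."""
    e, form, sense = EX[name], EX[name + "-form"], EX[name + "-sense"]
    g.add((e, RDF.type, ONTOLEX.LexicalEntry))
    g.add((e, ONTOLEX.canonicalForm, form))
    g.add((form, ONTOLEX.writtenRep, Literal(written, lang=lang)))
    g.add((e, ONTOLEX.sense, sense))
    return sense

es_sense = entry_with_sense("caudal", "caudal", "es")
en_sense = entry_with_sense("flow_rate", "flow rate", "en")

# The translation is reified as its own resource, relating the two senses.
trans = EX["translation-1"]
g.add((trans, RDF.type, VARTRANS.Translation))
g.add((trans, VARTRANS.source, es_sense))
g.add((trans, VARTRANS.target, en_sense))

print(g.serialize(format="turtle"))
```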
Abstract:
Enterprises are increasingly using a wide range of heterogeneous information systems for executing and governing their business activities. Even though the adoption of service orientation has improved loose coupling and reusability, applications are still isolated data silos whose integration requires complex transformations and mediations. However, by leveraging Linked Data principles those data silos can now be seamlessly integrated, and this opens the door to new data-driven approaches for Enterprise Application Integration (EAI). In this paper we present LDP4j, an open-source Java-based framework for the development of interoperable read-write Linked Data applications, based on the W3C Linked Data Platform (LDP) specification.
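A small client-side sketch of the read-write interaction that the LDP specification standardizes: creating a new resource by POSTing Turtle to an LDP container. The container URL is hypothetical (any LDP server, such as one built with LDP4j, could stand in); the Slug and Link headers follow the LDP specification.

```python
import requests

# Hypothetical LDP container exposed by an LDP-based application.
container = "http://localhost:8080/ldp4j/api/resources/"

body = """@prefix dcterms: <http://purl.org/dc/terms/> .
<> dcterms:title "My first LDP resource" .
"""

headers = {
    "Content-Type": "text/turtle",
    "Slug": "my-resource",                                      # suggested name for the new resource
    "Link": '<http://www.w3.org/ns/ldp#Resource>; rel="type"',  # requested interaction model
}

resp = requests.post(container, data=body.encode("utf-8"), headers=headers)
# On success the server answers 201 Created and points at the new resource.
print(resp.status_code, resp.headers.get("Location"))
```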
Abstract:
The W3C Linked Data Platform (LDP) specification defines a standard HTTP-based protocol for read/write Linked Data and provides the basis for application integration using Linked Data. This paper presents an LDP adapter for the Bugzilla issue tracker and demonstrates how to use the LDP protocol to expose a traditional application as a read/write Linked Data application. This approach provides a flexible LDP adoption strategy with minimal changes to existing applications.
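To complement the previous example, here is a hedged sketch of the read/update side of the protocol against a hypothetical adapter URL: the client GETs a resource, keeps its ETag, and PUTs a modified representation back with If-Match, as LDP prescribes for safe updates. The bug URL and the edited value are invented and do not reflect the adapter's actual vocabulary.

```python
import requests

bug = "http://localhost:8080/ldp-bugzilla/bugs/42"  # hypothetical adapter URL

# Read the current state of the resource and remember its entity tag.
resp = requests.get(bug, headers={"Accept": "text/turtle"})
etag = resp.headers.get("ETag")

# Naive textual edit of the returned Turtle, purely for illustration.
updated = resp.text.replace('"NEW"', '"ASSIGNED"')

# Conditional update: the server rejects the PUT if the resource changed meanwhile.
headers = {"Content-Type": "text/turtle"}
if etag:
    headers["If-Match"] = etag
put = requests.put(bug, data=updated.encode("utf-8"), headers=headers)
print(put.status_code)
```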
Abstract:
Recently, experts and practitioners in language resources have started recognizing the benefits of the linked data (LD) paradigm for the representation and exploitation of linguistic data on the Web. The adoption of the LD principles is leading to an emerging ecosystem of multilingual open resources that conform to the Linguistic Linked Open Data Cloud, in which datasets of linguistic data are interconnected and represented following common vocabularies, which facilitates linguistic information discovery, integration and access. In order to contribute to this initiative, this paper summarizes several key aspects of the representation of linguistic information as linked data from a practical perspective. The main goal of this document is to provide the basic ideas and tools for migrating language resources (lexicons, corpora, etc.) to LD on the Web and to develop some useful NLP tasks with them (e.g., word sense disambiguation). This material was the basis of a tutorial given at the EKAW’14 conference, which is also reported in the paper.
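As a final illustrative sketch in the same spirit (not material from the tutorial), the snippet builds a toy lexicon in OntoLex-Lemon and runs the candidate-retrieval step that a word sense disambiguation task would start from; every URI below is invented.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
EX = Namespace("http://example.org/toy-lexicon/")  # illustrative lexicon namespace

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("ex", EX)

# A toy lexicon: "bank" with two senses pointing at two ontology concepts.
entry, form = EX["bank"], EX["bank-form"]
g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, ONTOLEX.canonicalForm, form))
g.add((form, ONTOLEX.writtenRep, Literal("bank", lang="en")))
for sense, concept in [("bank-sense-finance", "FinancialInstitution"),
                       ("bank-sense-river", "RiverBank")]:
    g.add((entry, ONTOLEX.sense, EX[sense]))
    g.add((EX[sense], ONTOLEX.reference, EX[concept]))

# Candidate retrieval step of word sense disambiguation: find every sense
# (and its reference) for a surface form, then leave the choice to a WSD model.
query = """
PREFIX ontolex: <http://www.w3.org/ns/lemon/ontolex#>
SELECT ?sense ?ref WHERE {
  ?e ontolex:canonicalForm/ontolex:writtenRep "bank"@en ;
     ontolex:sense ?sense .
  ?sense ontolex:reference ?ref .
}
"""
for row in g.query(query):
    print(row.sense, row.ref)
```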