35 results for process query language
at Universidad Politécnica de Madrid
Abstract:
This paper describes an infrastructure for the automated evaluation of semantic technologies and, in particular, semantic search technologies. For this purpose, we present an evaluation framework which follows a service-oriented approach for evaluating semantic technologies and uses the Business Process Execution Language (BPEL) to define evaluation workflows that can be executed by process engines. This framework supports a variety of evaluations from different semantic areas, including search, and is extensible to new evaluations. We show how BPEL addresses this diversity, as well as how it is used to solve specific challenges such as heterogeneity, error handling and reuse.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. In this respect, ontologies (Gruber, 1993; Borst, 1997) have so far been successfully applied to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
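As a rough illustration of what such a hybrid annotation could look like, the following minimal sketch annotates a single token with tags that are simultaneously linguistic categories and ontological terms. The URIs and attribute names are hypothetical, chosen only to illustrate the idea; they are not OntoTag's actual vocabulary.

```python
# Minimal sketch of a hybrid (linguistic + ontological) annotation.
# All URIs and keys below are hypothetical placeholders, not OntoTag's
# actual annotation schema.

token_annotation = {
    "token": "banks",
    # Morphosyntactic level: the tag is an ontological term denoting
    # a linguistic category (here, a plural common noun).
    "pos": "http://example.org/ling-ont#CommonNoun",
    "number": "http://example.org/ling-ont#Plural",
    # Semantic level: the sense tag points to a domain ontology concept.
    "sense": "http://example.org/domain-ont#FinancialInstitution",
}

# Because every tag is a URI drawn from an ontology, annotations produced
# by different tools can be compared, combined and reasoned about, instead
# of living in tool-specific, ad hoc tag sets.
print(token_annotation["pos"])
```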
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations that they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform streaming raw data to rich ontology-based information that is accessible through continuous queries for streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. We introduce novel query rewriting and data translation techniques that rely on mapping definitions relating streaming data models to ontological concepts. Specific contributions include:
• The syntax and semantics of the SPARQLStream query language for ontology-based data access, and a query rewriting approach for transforming SPARQLStream queries into streaming algebra expressions.
• The design of an ontology-based streaming data access engine that can internally reuse an existing data stream engine, complex event processor or sensor middleware, using R2RML mappings for defining relationships between streaming data models and ontology concepts.
Concerning the sensor metadata of such streaming data sources, we have investigated how we can use raw measurements to characterize streaming data, producing enriched data descriptions in terms of ontological models. Our specific contributions are:
• A representation of sensor data time series that captures gradient information that is useful to characterize types of sensor data.
• A method for classifying sensor data time series and determining the type of data, using data mining techniques, and a method for extracting semantic sensor metadata features from the time series.
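To give a flavour of the mapping-driven query rewriting described above, the following is a minimal sketch in which a conceptual (ontology-level) request is rewritten into a simple streaming-algebra-like expression via mappings. The concept, property and stream names are hypothetical; the thesis itself uses SPARQLStream queries and R2RML mappings rather than this toy structure.

```python
# Illustrative sketch of mapping-driven rewriting for ontology-based
# access to streaming data. All names (concepts, properties, stream
# columns) are made up for this example.

# A mapping relates an ontological (concept, property) pair to a column
# of an underlying data stream.
mappings = {
    ("WindObservation", "hasValue"): ("wind_sensors", "speed_mps"),
    ("WindObservation", "observedAt"): ("wind_sensors", "ts"),
}

def rewrite(concept, properties, window_seconds):
    """Rewrite a conceptual query into a streaming-algebra-like plan:
    a projection of the mapped columns over a sliding time window."""
    stream, columns = None, []
    for prop in properties:
        stream, column = mappings[(concept, prop)]
        columns.append(column)
    return {"op": "project", "columns": columns,
            "input": {"op": "window", "seconds": window_seconds,
                      "input": {"op": "stream", "name": stream}}}

# A user asks for wind observation values over the last 60 seconds,
# without knowing anything about the stream schema:
plan = rewrite("WindObservation", ["hasValue", "observedAt"], 60)
print(plan)
```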
Abstract:
Invariable Thought in the Architects of Madrid. First Decade of the 21st Century. A History of Oral Transmission is a doctoral thesis that starts with the development of a previously unpublished documentary archive, an archive which brings together the testimonies of the most important architects of Madrid active during the first ten years of this century. These testimonies are collected in an orderly manner as transcribed conversations, specific reflections and audio recordings. Incorporating the audio recordings into the documentary work allows future researchers to verify the conclusions directly, or even to establish different interpretations. The sound documents are the seed of this work. The thesis orders and interprets the testimonies in its ANALYSIS/DEVELOPMENT section, through which a common thread of thought among the architects of Madrid can be ascertained. It is meant as a living, open document which, thanks to its unprecedented nature, uncovers nuances and reflections of the architects never before collected in other studies. It has brought together and ordered, for the first time, the voice and thought of the most important architects of Madrid, many of them already deceased. It has established an orderly and complete genealogical tree of the undisputed architects of reference since 1939. It has gathered in a single document the architects and figures most cited and recurrent in the discourse of the architects of Madrid, making it possible to identify their most frequently used cultural references. A common thought has been identified and argued, divided into four concepts: Opportunity, Order, Commitment and Containment. The architect and his thought are approached in the most natural and spontaneous manner possible. The recordings allow us to examine, without pretence, not only the substance but also the form of what is communicated. After selecting the sound documents most appropriate to the purpose of this work, they were transcribed to paper. In this process the language is refined and formal defects are polished, while the conversations are summarised and only the most interesting comments are kept. The transcription process, as well as the choice of questions, involves editorial work: the application of criteria when selecting, summarising, correcting, completing, and so on. To prepare the conversations, the reference bibliography of each of the architects was consulted. Once the interviews were transcribed, a critical appraisal was made, a theoretical approach to the main topic of each conversation, which may be a reflection on the architect and his work or on some of the opinions or topics raised during the talk. In order to situate each of the cited architects, a complete genealogical section has been devised in which each architect is placed according to his appearances in the main texts of recent historiography, from Carlos Flores until 2010. This thesis bears witness to the diversity of thoughts and attitudes as well as to the coincidences. The conversations, reflections and opinions expressed have repeatedly touched on many topics, which appear ordered in the ANALYSIS/DEVELOPMENT and KEYWORDS sections.
From these and other generic topics, the thoughts coalesce around four points that summarise the most recurrent and coincident attitudes and conceptual approaches. These four points define the architect of Madrid in a concrete way: Opportunity, Order, Commitment and Containment. An identity generated through a history of oral transmission, from the architects of the first post-war generations until today.
Abstract:
This paper proposes the use of Factored Translation Models (FTMs) for improving a speech into sign language translation system. These FTMs allow syntactic-semantic information to be incorporated during the translation process, and this additional information significantly reduces the translation error rate. The paper also analyses different alternatives for dealing with non-relevant words. The speech into sign language translation system has been developed and evaluated in a specific application domain: the renewal of Identity Documents and Driver's Licenses. The translation system uses a phrase-based translation system (Moses). The evaluation results reveal that the BLEU (BiLingual Evaluation Understudy) score has improved from 69.1% to 73.9% and the mSER (multiple references Sign Error Rate) has been reduced from 30.6% to 24.8%.
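To illustrate what "factored" input means in this setting, the following minimal sketch builds the pipe-separated token representation used by factored models in Moses, where each surface word carries additional factors. Which factors the paper actually uses (lemma, POS, or other syntactic-semantic labels) is an assumption of this sketch.

```python
# Illustrative sketch of the factored input representation used by
# factored translation models in Moses: each token is annotated with
# additional factors (here lemma and POS) separated by '|'. The exact
# factor set used in the paper is an assumption of this example.

def to_factored(tokens):
    """tokens: list of (surface, lemma, pos) tuples."""
    return " ".join(f"{surface}|{lemma}|{pos}" for surface, lemma, pos in tokens)

sentence = [
    ("renueve", "renovar", "VERB"),    # "renew"
    ("su", "su", "DET"),               # "your"
    ("carnet", "carnet", "NOUN"),      # "card / licence"
    ("de", "de", "ADP"),
    ("conducir", "conducir", "VERB"),  # "to drive"
]

print(to_factored(sentence))
# renueve|renovar|VERB su|su|DET carnet|carnet|NOUN de|de|ADP conducir|conducir|VERB
```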
Abstract:
This paper describes the participation of DAEDALUS at the LogCLEF lab in CLEF 2011. This year, the objectives of our participation are twofold. The first is to analyse whether there is any measurable effect on the success of search queries when the native language and the interface language chosen by the user differ; the idea is to determine whether this difference may condition the way in which the user interacts with the search application. The second is to analyse the user context and his/her interaction with the system in the case of successful queries, to discover any relation among the user's native language, the language of the resource involved and the interaction strategy adopted by the user to find that resource. Only 6.89% of queries are successful out of the 628,607 queries in the 320,001 sessions with at least one search query in the log. The main conclusion that can be drawn is that, in general for all languages, whether the native language matches the interface language or not does not seem to affect the success rate of search queries. On the other hand, the analysis of the strategy adopted by users when looking for a particular resource shows that people tend to use the simple search tool, frequently first running short queries built up of just one specific term and then browsing through the results to locate the expected resource.
Abstract:
This paper describes a categorization module for improving the performance of a Spanish into Spanish Sign Language (LSE) translation system. This categorization module replaces Spanish words with associated tags. When implementing this module, several alternatives for dealing with non-relevant words have been studied (non-relevant words are Spanish words that are not relevant to the translation process). The categorization module has been incorporated into a phrase-based system and a Statistical Finite State Transducer (SFST). The evaluation results reveal that the BLEU score has increased from 69.11% to 78.79% for the phrase-based system and from 69.84% to 75.59% for the SFST.
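The following is a minimal sketch of such a categorization step applied before statistical translation. The tag inventory, the word lists and the choice to simply drop non-relevant words are all assumptions made for illustration; the paper evaluates several alternative strategies.

```python
# Minimal sketch of a categorization step that replaces Spanish words
# with associated tags before statistical translation into LSE. The tag
# names, word lists and the "drop non-relevant words" strategy are
# illustrative assumptions, not the paper's actual resources.

CATEGORIES = {
    "dni": "DOCUMENT",
    "pasaporte": "DOCUMENT",
    "carnet": "DOCUMENT",
    "renovar": "RENEW",
}
NON_RELEVANT = {"el", "la", "de", "por", "favor"}

def categorize(words):
    out = []
    for w in words:
        w = w.lower()
        if w in NON_RELEVANT:
            continue                      # one possible strategy: drop them
        out.append(CATEGORIES.get(w, w))  # keep the word if it has no tag
    return out

print(categorize("Quiero renovar el DNI por favor".split()))
# ['quiero', 'RENEW', 'DOCUMENT']
```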
Abstract:
We discuss a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors are first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from (global) static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be checked statically are translated into run-time tests. The framework allows the use of assertions to be optional. It also allows using very general properties in assertions, beyond the predefined set understood by the static analyzer and including properties defined by user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis.
Abstract:
We present a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors are first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be verified statically are translated into run-time tests. The framework allows the use of assertions to be optional. It also allows using very general properties in assertions, beyond the predefined set understood by the static analyzer and including properties defined by means of user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis. In practice, this modularity allows detecting bugs statically in user programs even if they do not contain any assertions.
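The compile-time/run-time split described in these two abstracts can be pictured with the following language-neutral sketch: assertions that the static analysis information can discharge are reported as verified, and the rest are compiled into run-time tests. The real framework operates on Prolog/CLP assertion languages and analysis domains; this Python analogy only illustrates the control flow of the checking process.

```python
# Language-neutral sketch of splitting assertion checking between
# compile time and run time. Property names and analysis facts are
# simplified placeholders, not the actual assertion language.

def check_assertions(assertions, analysis_facts):
    """assertions / analysis_facts: sets of property names (simplified)."""
    proved, runtime_checks = [], []
    for prop in assertions:
        if prop in analysis_facts:
            proved.append(prop)           # discharged statically
        else:
            runtime_checks.append(prop)   # compiled into a run-time test
    return proved, runtime_checks

static_info = {"arg1_is_int", "terminates"}
wanted = {"arg1_is_int", "result_is_positive"}

proved, checks = check_assertions(wanted, static_info)
print("proved statically:", proved)
print("checked at run time:", checks)
```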
Abstract:
A range of methodologies and techniques are available to guide the design and implementation of language extensions and domain-specific languages. A simple yet powerful technique is based on source-to-source transformations interleaved across the compilation passes of a base language. Despite being a successful approach, its main drawback is that the input source code is lost in the process. When considering the whole workflow of program development (warning and error reporting, debugging, or even program analysis), program translations are no more powerful than a glorified macro language. In this paper, we propose an augmented approach to language extensions for Prolog, in which symbolic annotations are included in the target program. These annotations make it possible to selectively reverse the translated code. We illustrate the approach by showing that coupling it with minimal extensions to a generic Prolog debugger allows us to provide users with a familiar, source-level view during the debugging of programs which use a variety of language extensions, such as functional notation, DCGs, or CLP(Q,R).
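The general idea of keeping annotations that let a translation be reversed can be sketched as follows. The toy expansion rule (a made-up functional-style notation) and the data structures are illustrative assumptions; the paper's actual mechanism works on Prolog terms inside the compiler and debugger.

```python
# Illustrative sketch: record annotations alongside a source-to-source
# translation so the transformation can be selectively reversed, e.g. to
# show the user the original source-level form while debugging.
# The ':=' expansion rule below is a toy example, not the real notation.

def expand(clause, annotations):
    """Expand 'Head := Body' style clauses into plain 'Head :- Body'
    clauses, remembering how each generated clause maps back."""
    if ":=" in clause:
        head, body = (part.strip() for part in clause.split(":=", 1))
        expanded = f"{head} :- {body}"
        annotations[expanded] = clause    # keep the source-level form
        return expanded
    return clause

def unexpand(clause, annotations):
    """Reverse the translation when a source-level view is wanted."""
    return annotations.get(clause, clause)

ann = {}
target = expand("double(X, Y) := Y is 2 * X", ann)
print(target)                  # what the compiler / engine sees
print(unexpand(target, ann))   # what the debugger shows the user
```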
Abstract:
The concept of independence has recently been generalized to the constraint logic programming (CLP) paradigm. Also, several abstract domains specifically designed for CLP languages, and whose information can be used to detect the generalized independence conditions, have recently been defined. As a result, we are now in a position where automatic parallelization of CLP programs is feasible. In this paper we study the task of automatically parallelizing CLP programs based on such analyses, by transforming them to explicitly concurrent programs in our parallel CC platform (CIAO) as well as to AKL. We describe the analysis and transformation process, and study its efficiency, accuracy, and effectiveness in program parallelization. The information gathered by the analyzers is evaluated not only in terms of its accuracy, i.e. its ability to determine the actual dependencies among the program variables, but also in terms of its effectiveness, measured as code reduction in the resulting parallelized programs. Given that only a few abstract domains have been defined for CLP so far, and that none of them were specifically designed for dependency detection, the aim of the evaluation is not only to assess the effectiveness of the available domains, but also to study what additional information it would be desirable to infer, and what domains would be appropriate for further improving the parallelization process.
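A simplified picture of the independence condition that this kind of parallelization relies on is sketched below: goals that share no variables can be run in parallel without affecting the result. The greedy grouping strategy and the way goals and variables are modelled are assumptions for illustration; real systems use abstract-interpretation-based sharing and groundness information rather than syntactic variable sets.

```python
# Simplified sketch of strict-independence-based parallelization of a
# clause body: two goals that share no variables may run in parallel.
# Goals are modelled as (name, variable set) pairs for illustration only.

def independent(vars_a, vars_b):
    """Strict independence (simplified): no variable occurs in both."""
    return vars_a.isdisjoint(vars_b)

def annotate(goals):
    """Greedily group consecutive goals that are pairwise independent,
    marking each group for parallel execution."""
    groups, current, seen = [], [], set()
    for name, variables in goals:
        if seen and not independent(seen, variables):
            groups.append(current)        # dependency found: close group
            current, seen = [], set()
        current.append(name)
        seen |= variables
    groups.append(current)
    return groups

body = [("p(X,Y)", {"X", "Y"}), ("q(Z)", {"Z"}), ("r(X,W)", {"X", "W"})]
print(annotate(body))   # [['p(X,Y)', 'q(Z)'], ['r(X,W)']]
```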
Abstract:
RDB2RDF systems generate RDF from relational databases, operating in two different manners: materializing the database content into RDF or acting as virtual RDF datastores that transform SPARQL queries into SQL. In the former, inferences on the RDF data (taking into account the ontologies to which they are related) are normally performed by the RDF triple store in which the RDF data is materialised, and hence the results of the query answering process depend on the store. In the latter, existing RDB2RDF systems do not normally perform such inferences at query time. This paper shows how the algorithm used in the REQUIEM system, focused on handling run-time inferences for query answering, can be adapted to handle such inferences for query answering in combination with RDB2RDF systems.
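The overall shape of such ontology-aware query answering can be sketched as follows: the query is first rewritten using the ontology, and each rewriting is then translated into SQL through the RDB2RDF mappings. The sketch below handles only subclass axioms, whereas the REQUIEM algorithm handles much richer axioms, and all table, column and class names are made up for illustration.

```python
# Highly simplified sketch of ontology-aware query answering over an
# RDB2RDF system: rewrite the query with the ontology (only subclass
# axioms here), then translate each rewriting into SQL via mappings.
# All names below are hypothetical.

subclass_of = {                      # ontology: key subClassOf value
    "Professor": "Teacher",
    "Lecturer": "Teacher",
}

rdb2rdf_mappings = {                 # class -> (table, condition)
    "Teacher": ("staff", "role = 'teacher'"),
    "Professor": ("staff", "role = 'professor'"),
    "Lecturer": ("staff", "role = 'lecturer'"),
}

def rewrite_class(cls):
    """Return cls plus every class the ontology declares as a subclass."""
    return [cls] + [sub for sub, sup in subclass_of.items() if sup == cls]

def to_sql(cls):
    """Translate a query for all instances of cls into a UNION of SELECTs."""
    selects = []
    for c in rewrite_class(cls):
        table, condition = rdb2rdf_mappings[c]
        selects.append(f"SELECT id FROM {table} WHERE {condition}")
    return "\nUNION\n".join(selects)

print(to_sql("Teacher"))
```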
Abstract:
RDB2RDF systems generate RDF from relational databases, operating in two different manners: materializing the database content into RDF or acting as virtual RDF datastores that transform SPARQL queries into SQL. In the former, inferences on the RDF data (taking into account the ontologies to which they are related) are normally performed by the RDF triple store in which the RDF data is materialised, and hence the results of the query answering process depend on the store. In the latter, existing RDB2RDF systems do not normally perform such inferences at query time. This paper shows how the algorithm used in the REQUIEM system, focused on handling run-time inferences for query answering, can be adapted to handle such inferences for query answering in combination with RDB2RDF systems.
Abstract:
Sensor networks are increasingly being deployed in the environment for many different purposes. The observations that they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. The authors propose an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. In this article, the authors describe the theoretical foundations and technologies that enable exposing semantically enriched sensor metadata and querying sensor observations through SPARQL extensions, using query rewriting and data translation techniques according to mapping languages, and managing both pull and push delivery modes.
Abstract:
Student exchange programs have been carried out at universities for over 50 years and have led to changes in the institutions, which have had to adapt to accommodate these students. Despite those changes, the integration of foreign students who do not come through such exchange programs but who come to our country to study at the university has been neglected. These students face many barriers (mainly language, culture and customs of origin), so clear and detailed information would be highly desirable in order to facilitate the necessary arrangements. This study aims to show the deficiencies in the integration process and hosting programs faced by foreign students at the university. The study is performed by means of an analysis of statistical data from the Polytechnic University of Madrid and the Civil Engineering School over the last 12 academic years (1999-2000 to 2010-2011), as well as surveys and interviews with some of these students. The study is enhanced with an analysis of the integration measures and methods for the various minorities implemented by the foremost public universities in Spain, as well as by other public and private universities abroad. It illustrates the existing backlog at Spanish universities with regard to supporting the integration of diversity among foreign students, providing data concerning the growth of this population and its impact at the university and on the institutions in particular. In an increasingly globalized world, we must understand and facilitate the integration of minorities at university, supplying them, from the first day and before the enrollment process, with the essential elements that will allow their adequate adaptation to the educational process at university. It concludes by identifying the main issues that need to be tackled to promote such integration.