64 results for "query rewriting"


Relevance: 10.00%

Abstract:

Starting from the way inter-cellular communication takes place by means of protein channels, and from the standard knowledge about neuron functioning, we propose a computing model called a tissue P system, which processes symbols in a multiset rewriting sense, in a net of cells similar to a neural net. Each cell has a finite state memory, processes multisets of symbol-impulses, and can send impulses ("excitations") to the neighboring cells. Such cell nets are shown to be rather powerful: they can simulate a Turing machine even when using a small number of cells, each of them having a small number of states. Moreover, when each cell works in the maximal manner and can excite all the cells to which it can send impulses, the Hamiltonian Path Problem can be solved in linear time. A new characterization of the Parikh images of ET0L languages is also obtained in this framework.
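
The multiset-rewriting behaviour sketched in this abstract can be illustrated with a small toy simulator. The sketch below is a minimal Python illustration assuming a two-cell net, an invented rule format of the form (state, consumed, new state, produced, impulses to send), and sequential rule application rather than the maximal mode discussed for the Hamiltonian Path result.

```python
from collections import Counter

# A cell has a finite-state control and a multiset of symbol-impulses.
# A rule (state, consumed, new_state, produced, sent) rewrites the cell's
# multiset and sends impulses to neighbouring cells (hypothetical rule format).
rules = {
    "c1": [("s0", Counter("a"), "s1", Counter("b"), {"c2": Counter("a")})],
    "c2": [("s0", Counter("a"), "s0", Counter(), {"c1": Counter("b")})],
}
cells = {"c1": ["s0", Counter("aa")], "c2": ["s0", Counter()]}

def step(cells, rules):
    """Apply at most one applicable rule per cell (sequential mode), then deliver impulses."""
    outbox = []
    for name, (state, ms) in cells.items():
        for (st, consumed, new_st, produced, sent) in rules.get(name, []):
            if st == state and not (consumed - ms):        # rule is applicable
                cells[name][0] = new_st
                cells[name][1] = ms - consumed + produced
                outbox.append(sent)
                break
    for sent in outbox:                                    # deliver impulses to neighbours
        for target, impulses in sent.items():
            cells[target][1] += impulses
    return cells

for _ in range(3):
    cells = step(cells, rules)
    print(cells)
```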

Relevance: 10.00%

Abstract:

Embedded context management in resource-constrained devices (e.g. mobile phones, autonomous sensors or smart objects) imposes special requirements in terms of lightness for data modelling and reasoning. In this paper, we explore the state of the art in data representation and reasoning tools for embedded mobile reasoning and propose a light inference system (LIS) aimed at simplifying embedded inference processes by offering a set of functionalities that avoid redundancy in context management operations. The system is part of a service-oriented mobile software framework, conceived to facilitate the creation of context-aware applications: it decouples sensor data acquisition and context processing from the application logic. LIS, composed of several modules, encapsulates existing lightweight tools for ontology data management and rule-based reasoning, and it is ready to run on Java-enabled handheld devices. Data management and reasoning processes are designed to handle a general ontology that enables communication among framework components. Both the applications running on top of the framework and the framework components themselves can configure the rule and query sets in order to retrieve the information they need from LIS. In order to test LIS features in a real application scenario, an "Activity Monitor" has been designed and implemented: a personal, health-persuasive application that provides feedback on the user's lifestyle, combining data from physical and virtual sensors. In this use case, LIS is used to evaluate the user's activity level in a timely manner, to decide whether it is appropriate to trigger notifications, and to determine the best interface or channel to deliver these context-aware alerts.

Relevance: 10.00%

Abstract:

Over the last few decades, the ever-increasing output of scientific publications has led to new challenges in keeping up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers that they need for their clinical and research work amongst a huge number of publications. Against this backdrop, novel information retrieval methods are even more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs. Results: We have developed CDAPubMed, an open-source web browser extension to integrate EHR features in biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7-Clinical Document Architecture Standard (HL7-CDA), (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation, and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. Conclusions: CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. It has been tested on a public dataset of HL7-CDA documents, returning significantly fewer citations, since queries are focused on characteristics identified within the EHR. For instance, compared with more than 200,000 citations retrieved by the query "breast neoplasm", fewer than ten citations were retrieved when ten patient features were added using CDAPubMed. This is an open-source tool that can be freely used for non-profit purposes and integrated with other existing systems.
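
The query-building step can be illustrated against the public NCBI E-utilities interface that PubMed exposes. The MeSH terms below are hypothetical examples of features that a tool such as CDAPubMed might extract from an HL7-CDA document; the sketch is not CDAPubMed's actual code.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical MeSH terms extracted from an HL7-CDA document.
mesh_terms = ["Breast Neoplasms", "Tamoxifen", "Postmenopause"]

# Combine the terms into a single PubMed query restricted to the MeSH field.
term = " AND ".join(f'"{t}"[MeSH Terms]' for t in mesh_terms)

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": term,
    "retmode": "json",
    "retmax": 20,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)

print("Hits:", result["esearchresult"]["count"])
print("PMIDs:", result["esearchresult"]["idlist"])
```

Narrowing the query with patient-specific MeSH terms is what reduces the result set from hundreds of thousands of citations to a handful, as reported in the abstract.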

Relevance: 10.00%

Abstract:

In this work, we propose a variant of P systems based on the rewriting of string-objects by means of evolutionary rules. The membrane structure of such a P system appears to be a very natural tool for simulating the filters in accepting networks of evolutionary processors with filtered connections. We discuss an informal construction supporting this simulation; a detailed proof is left for an extended version of this work.
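
As a rough illustration of what such a simulation has to capture, the sketch below applies an evolutionary substitution rule to string-objects and then a random-context filter on the connection. The alphabet, the rule and the permitting/forbidding sets are invented for illustration and do not come from the paper.

```python
def substitute(word, a, b):
    """Apply the substitution rule a -> b at every position (one result per position)."""
    return {word[:i] + b + word[i + 1:] for i, ch in enumerate(word) if ch == a}

def passes_filter(word, permitting, forbidding):
    """Random-context filter: all permitting symbols present, no forbidding ones."""
    return permitting <= set(word) and not (forbidding & set(word))

# One evolutionary step followed by communication through a filtered connection.
words = {"aab", "abb"}
evolved = set()
for w in words:
    evolved |= substitute(w, "a", "c") or {w}   # keep w if the rule is not applicable

sent = {w for w in evolved if passes_filter(w, permitting={"c"}, forbidding={"a"})}
print("evolved:", evolved)
print("communicated:", sent)
```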

Relevance: 10.00%

Abstract:

This project consisted of creating a geographic information system (GIS) for the Campus Sur of the UPM, which can serve as a reference for deployment on any other university campus. The idea arises from the need of campus users for a tool that lets them look up information about the different places and services on campus, with particular emphasis on their geographical location. To this end, the current technologies for implementing a geographic information system were studied, leading to the proposed system: a set of computing resources (hardware and software) that allow campus users to obtain information about, and the location of, campus elements from their mobile phones. After analysing the requirements and functionalities that the system should offer, the project covered the design and implementation of that system. The information to be consulted is stored and made available on a server accessible to campus users. Accordingly, during the project it was necessary to create a data model based on the campus and to load the relevant geographic data into a database; all of this was done with the Smallworld Core 4.2 software product. In addition, server software was deployed so that users can query the data from their phones via WiFi or the Internet; the product used for this purpose was Smallworld Geospatial Server 4.2. Queries rely on the WMS (Web Map Service) and WFS (Web Feature Service) services defined by the OGC (Open Geospatial Consortium), which are tailored to retrieving geographic information. The system also includes an application for mobile devices with the Android operating system that lets users query and display the geographic information of the campus; this application was designed and programmed over the course of the project. The project also required a study of the budget that a real deployment of the system would entail and of the maintenance needed to keep it up to date. Finally, the project includes a brief description of future technologies that could improve the system's functionality: augmented reality and indoor positioning.
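
The WMS/WFS access pattern mentioned above can be sketched with standard OGC key-value-pair requests. The server URL, layer name and feature-type name below are placeholders, not the actual Campus Sur deployment or the Smallworld Geospatial Server configuration.

```python
import urllib.parse

BASE = "http://example-campus-server/ows"    # placeholder endpoint

# WMS GetMap: render a campus buildings layer as a PNG image.
wms = urllib.parse.urlencode({
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "campus:buildings",            # hypothetical layer name
    "STYLES": "",
    "CRS": "EPSG:4326",
    "BBOX": "40.388,-3.628,40.392,-3.624",   # lat/lon axis order for EPSG:4326 in WMS 1.3.0
    "WIDTH": 800, "HEIGHT": 600, "FORMAT": "image/png",
})

# WFS GetFeature: fetch the building features themselves (geometry + attributes).
wfs = urllib.parse.urlencode({
    "SERVICE": "WFS", "VERSION": "2.0.0", "REQUEST": "GetFeature",
    "TYPENAMES": "campus:buildings",         # hypothetical feature type
    "COUNT": 50,
})

print(BASE + "?" + wms)
print(BASE + "?" + wfs)
```

The mobile application only needs to build URLs of this form and render the returned image (WMS) or parse the returned features (WFS).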

Relevance: 10.00%

Abstract:

The scalability of security event correlation has become a major concern for security analysts and IT administrators when considering complex IT infrastructures that need to handle gargantuan amounts of events or wide correlation window spans. The current correlation capabilities of Security Information and Event Management (SIEM) systems, based on a single node in centralized servers, have proved to be insufficient to process large event streams. This paper introduces a step forward in the current state of the art to address the aforementioned problems. The proposed model takes into account the two main aspects of this field: distributed correlation and query parallelization. We present a case study of a multiple-step attack on the Olympic Games IT infrastructure to illustrate the applicability of our approach.
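
A minimal sketch of the two ideas named above, distributed correlation and query parallelization: events are partitioned by a correlation key (here the source IP) so that workers can correlate independently, and each worker checks a simple multi-step rule inside a time window. The event format and the three-step rule are invented; real SIEM correlation rules are far richer.

```python
from collections import defaultdict

# Toy event stream: (timestamp_seconds, source_ip, event_type)
events = [
    (0, "10.0.0.5", "port_scan"),
    (30, "10.0.0.5", "brute_force"),
    (45, "10.0.0.9", "brute_force"),
    (70, "10.0.0.5", "privilege_escalation"),
]

WINDOW = 120          # correlation window span in seconds
STEPS = ["port_scan", "brute_force", "privilege_escalation"]

def partition(events, n_workers):
    """Shard the stream by correlation key (source IP) so workers are independent."""
    shards = defaultdict(list)
    for ev in events:
        shards[hash(ev[1]) % n_workers].append(ev)
    return shards

def correlate(shard):
    """Flag sources that produce all attack steps, in order, inside the window."""
    alerts = []
    by_ip = defaultdict(list)
    for ts, ip, etype in sorted(shard):
        by_ip[ip].append((ts, etype))
    for ip, seq in by_ip.items():
        types = [e for _, e in seq if e in STEPS]
        span = seq[-1][0] - seq[0][0]
        if types == STEPS and span <= WINDOW:
            alerts.append((ip, "multi-step attack"))
    return alerts

for worker, shard in partition(events, n_workers=2).items():
    print("worker", worker, "->", correlate(shard))
```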

Relevance: 10.00%

Abstract:

Moment invariants have been thoroughly studied and repeatedly proposed as one of the most powerful tools for 2D shape identification. In this paper, a set of such descriptors is proposed whose basis functions are discontinuous at a finite number of points. The goal of using discontinuous functions is to avoid the Gibbs phenomenon and therefore to yield a better approximation capability for discontinuous signals, such as images. Moreover, the proposed set of moments allows the definition of rotation invariants, which is the other main design concern. Translation and scale invariance are achieved by means of standard image normalization. Tests are conducted to evaluate the behavior of these descriptors in noisy environments, where images are corrupted with Gaussian noise at different SNR values. Results are compared to those obtained using Zernike moments, showing that the proposed descriptor has the same performance in image retrieval tasks in noisy environments, while demanding much less computational power at every stage of the query chain.
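
The noise-robustness tests mentioned above rely on corrupting images with Gaussian noise at prescribed SNR values. A small NumPy sketch of that preprocessing step is given below, assuming SNR is expressed in dB relative to the mean power of the image (the abstract does not fix the convention) and using a random array as a stand-in for a test image.

```python
import numpy as np

def add_gaussian_noise(image, snr_db):
    """Corrupt an image with zero-mean Gaussian noise at the requested SNR (in dB)."""
    signal_power = np.mean(image.astype(float) ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), image.shape)
    return image + noise

test_image = np.random.rand(64, 64)          # stand-in for a grey-level test image
for snr in (30, 20, 10):
    noisy = add_gaussian_noise(test_image, snr)
    measured = 10 * np.log10(np.mean(test_image ** 2) / np.mean((noisy - test_image) ** 2))
    print(f"target SNR {snr} dB -> measured {measured:.1f} dB")
```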

Relevance: 10.00%

Abstract:

Two complementary benchmarks have been proposed so far for the evaluation and continuous improvement of RDF stream processors: SRBench and LSBench. They put a special focus on different features of the evaluated systems, including coverage of the streaming extensions of SPARQL supported by each processor, query processing throughput, and an early analysis of query evaluation correctness, based on comparing the results obtained by different processors for a set of queries. However, none of them has analysed the operational semantics of these processors in order to assess the correctness of query evaluation results. In this paper, we propose a characterization of the operational semantics of RDF stream processors, adapting well-known models used in the stream processing engine community: CQL and SECRET. Through this formalization, we address correctness in RDF stream processor benchmarks, making it possible to determine the multiple answers that systems may legitimately provide. Finally, we present CSRBench, an extension of SRBench that verifies query result correctness using an automatic method.
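
The remark about the multiple answers that systems may legitimately provide can be illustrated with a tiny sketch of time-based sliding-window semantics in the spirit of CQL/SECRET: the same stream and the same window definition yield different, yet individually defensible, outputs depending on the report policy. The stream, the RANGE/SLIDE values and the two report policies below are invented for illustration.

```python
# Toy RDF-like stream: (timestamp_seconds, triple)
stream = [
    (1, ("s1", "hasValue", "3")),
    (4, ("s2", "hasValue", "7")),
    (6, ("s1", "hasValue", "5")),
    (9, ("s3", "hasValue", "2")),
]

RANGE, SLIDE = 5, 5          # CQL-style window parameters (seconds)

def window_content(stream, t_close, rng):
    """Content of the window that closes at t_close (scope = (t_close - rng, t_close])."""
    return [trip for ts, trip in stream if t_close - rng < ts <= t_close]

# Report policy A: report only when the window closes.
print("on window close:")
for t_close in range(SLIDE, 11, SLIDE):
    print(" ", t_close, window_content(stream, t_close, RANGE))

# Report policy B: report on every arriving element (content-change style reporting).
print("on content change:")
for ts, _ in stream:
    print(" ", ts, window_content(stream, ts, RANGE))
```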

Relevance: 10.00%

Abstract:

While designing systems and products requires a deep understanding of the influences that lead to the desired performance, the need for an efficient and systematic decision-making approach also calls for optimization strategies. This paper provides the motivation for this topic as well as a description of applications in the Computing Center of the Madrid City Council. Optimization applications can be found in almost all areas of engineering. When working with a database, typical problems arise in query design, entity model design and concurrent processing. This paper proposes a solution that optimizes a nightly batch process dealing with millions of records, achieving a speed-up of about eight times in computation time.

Relevance: 10.00%

Abstract:

Given the sustained growth that we are experiencing in the number of SPARQL endpoints available, the need to be able to send federated SPARQL queries across these has also grown. To address this use case, the W3C SPARQL working group is defining a federation extension for SPARQL 1.1 which allows for combining graph patterns that can be evaluated over several endpoints within a single query. In this paper, we describe the syntax of that extension and formalize its semantics. Additionally, we describe how a query evaluation system can be implemented for that federation extension, describing some static optimization techniques and reusing a query engine used for data-intensive science, so as to deal with large amounts of intermediate and final results. Finally we carry out a series of experiments that show that our optimizations speed up the federated query evaluation process.
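
The federation extension referred to here is the SERVICE keyword of SPARQL 1.1 Federated Query; the sketch below issues such a query from Python with the SPARQLWrapper library. The endpoints, the DBpedia-to-Wikidata join and the property P1082 are a generic public-data illustration rather than the experimental setup of the paper, and whether a given public endpoint actually executes outgoing SERVICE calls depends on its configuration.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# The outer endpoint evaluates the query; the SERVICE block is delegated to a
# second endpoint and its bindings are joined with the local graph pattern.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>

    SELECT ?city ?population WHERE {
        ?city dbo:country <http://dbpedia.org/resource/Spain> ;
              owl:sameAs ?wd .
        FILTER (STRSTARTS(STR(?wd), "http://www.wikidata.org/entity/"))
        SERVICE <https://query.wikidata.org/sparql> {
            ?wd wdt:P1082 ?population .    # P1082 = population (Wikidata property)
        }
    }
    LIMIT 5
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for b in results["results"]["bindings"]:
    print(b["city"]["value"], "-", b["population"]["value"])
```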

Relevance: 10.00%

Abstract:

This work was carried out within the framework of the EURECA (Enabling information re-Use by linking clinical REsearch and Care) and INTEGRATE (Integrative Cancer Research Through Innovative Biomedical Infrastructures) projects, in which the Biomedical Informatics Group of the UPM collaborates with other European universities and healthcare institutions. Both projects develop services and infrastructures whose main goal is to store clinical information coming from diverse sources (e.g. hospital electronic health records, clinical trials or biomedical research articles) in a common, easily accessible and queryable form, so as to support collaborative research across institutions. This is the core idea of semantic interoperability, on which both projects concentrate and which is key to the correct operation of the software they comprise: the exchange of data with a shared, common and unambiguous representation model in which each clinical concept, term or data item has a single form of representation. This enables knowledge inference and fits perfectly in the context of medical research. The tool developed in this work is also oriented towards maximizing semantic interoperability, since it loads clinical information in a standardized format into a common data model implemented on relational databases. The work was carried out between 3 February and 6 June 2014, following a waterfall life cycle in which a phase cannot start until the previous one has been completed, reviewed and accepted, except for the documentation task (the writing of this report), which ran in parallel with all the others.
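
A minimal sketch of the kind of loading step described above: clinical records, already extracted and normalised from their source (for instance an HL7-CDA document), are inserted into a relational table of a common data model. The table layout, field names and codes below are hypothetical illustrations and not the EURECA/INTEGRATE schema.

```python
import sqlite3

# Hypothetical common-data-model table; the real projects define their own schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE observation (
        patient_id   TEXT,
        concept_code TEXT,   -- code from a shared terminology (e.g. SNOMED CT)
        value        REAL,
        unit         TEXT,
        observed_at  TEXT
    )
""")

# Records as they might arrive after extraction and normalisation from an EHR.
records = [
    ("P001", "271649006", 132.0, "mm[Hg]", "2014-03-10"),   # systolic blood pressure
    ("P001", "271650006", 84.0,  "mm[Hg]", "2014-03-10"),   # diastolic blood pressure
]
conn.executemany("INSERT INTO observation VALUES (?, ?, ?, ?, ?)", records)
conn.commit()

for row in conn.execute(
    "SELECT concept_code, value, unit FROM observation WHERE patient_id = ?", ("P001",)
):
    print(row)
```

Keeping a single terminology column per concept is what gives every clinical term one form of representation, which is the semantic-interoperability point made above.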

Relevance: 10.00%

Abstract:

The work presented in this document is the result of the final-degree project (TFG) carried out by Israel Suárez Santiago, a student at the Escuela Técnica Superior de Ingenieros Informáticos (ETSIINF) of the Universidad Politécnica de Madrid (UPM). Its purpose is to provide a tool that, based on previously studied standards, allows the easy creation and management of HL7v3 message templates, to which clinical data will later be added and inserted into a database for easy access and querying. The developed tool only offers a set of options for creating the template itself, which serves as the basis for building HL7v3 messages; it does not allow specific data to be included in the generated templates, which must be done with an external tool or manually. The templates generated by the tool are mainly based on the CDA standard, which provides an extensive guide for the correct generation of HL7v3 messages. The tool guarantees that the resulting templates are well formed, conform to the aforementioned standard, and are syntactically correct, i.e., the generated .xml document contains no errors.
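
A minimal sketch of producing a well-formed, CDA-style XML skeleton with Python's standard library. The element subset, template OID and codes below are placeholders, and the output is only an illustration of generating syntactically valid XML in the urn:hl7-org:v3 namespace, not a conformant CDA document or the tool's own output.

```python
import xml.etree.ElementTree as ET

NS = "urn:hl7-org:v3"                      # the HL7 v3 / CDA XML namespace
ET.register_namespace("", NS)

def q(tag):
    """Qualify a tag name with the CDA namespace."""
    return f"{{{NS}}}{tag}"

doc = ET.Element(q("ClinicalDocument"))
ET.SubElement(doc, q("templateId"), root="2.16.840.1.113883.10.20.22.1.1")   # placeholder OID
ET.SubElement(doc, q("id"), root="1.2.3.4", extension="TEMPLATE-001")        # placeholder ids
ET.SubElement(doc, q("code"), code="34133-9", codeSystem="2.16.840.1.113883.6.1")
title = ET.SubElement(doc, q("title"))
title.text = "Summarization of episode note (template)"

ET.indent(doc)                             # pretty-print (Python 3.9+)
print(ET.tostring(doc, encoding="unicode"))
```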

Relevance: 10.00%

Abstract:

This work develops a REST service that transforms natural-language sentences into RDF graphs. The generated graphs are directed graphs in which the nodes are formed from the nouns or adjectives of the sentences and the arcs from the verbs. It is used within the p-medicine project to support the following functionality. (i) Natural-language queries: the p-medicine platform currently provides a programmatic interface for issuing SPARQL queries; the developed service makes it possible to generate those queries automatically from natural-language sentences. (ii) Database annotation using natural language: the p-medicine platform includes a tool, developed by the Biomedical Engineering Group of the Universidad Politécnica de Madrid, for annotating RDF databases. These annotations are needed for the later translation of the databases into a central schema. The annotation process requires the user to build manually the RDF views to be annotated, which means displaying the RDF schema graphically and having the user construct RDF views by selecting the necessary classes and relationships. This process is often complex and too difficult for users without a technical background. The system is incorporated so that these views can be built using natural language.
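
A minimal sketch of the sentence-to-graph mapping described above (nouns and adjectives become nodes, verbs become arcs), built with the rdflib library. The part-of-speech tags are hard-coded to stand in for an external tagger, and the namespace and mapping heuristic are invented, so this is not the p-medicine service itself.

```python
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/nl2rdf/")   # invented namespace for illustration

# Tagged sentence: "patients receive chemotherapy" (tags stand in for a real POS tagger).
tagged = [("patients", "NOUN"), ("receive", "VERB"), ("chemotherapy", "NOUN")]

g = Graph()
g.bind("ex", EX)

# Naive mapping: every VERB becomes an arc between the nearest NOUN/ADJ on each side.
for i, (word, tag) in enumerate(tagged):
    if tag != "VERB":
        continue
    left = next((w for w, t in reversed(tagged[:i]) if t in ("NOUN", "ADJ")), None)
    right = next((w for w, t in tagged[i + 1:] if t in ("NOUN", "ADJ")), None)
    if left and right:
        g.add((URIRef(EX + left), URIRef(EX + word), URIRef(EX + right)))

print(g.serialize(format="turtle"))
```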

Relevance: 10.00%

Abstract:

We live in the age of information and the Internet, and we have an ever-growing need to obtain and share the information that exists. This need appears in every domain, but probably most strongly in medicine, which is why many investigations of different kinds are carried out, generating an enormous amount of very heterogeneous information that is increasingly difficult to unify and to extract knowledge or added value from. Several lines of research address this problem; perhaps the most important and fastest-growing one is search based on ontology models, using systems that can query them. This Master's thesis focuses on generating the queries needed to access information that is distributed across different sites in heterogeneous form, using an API that generates the necessary SPARQL code. The API used was created by the Biomedical Informatics Group. An efficient way of publishing this API for its future use in the p-medicine project was also sought; to this end, a RESTful service was created that allows the desired queries to be generated from any platform, making the approach more accessible and universal. A web interface was also added so that the API can be used in a more user-friendly way.
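
A minimal Flask sketch of how a RESTful wrapper around such a query-generation API could look. The endpoint path, parameters and the trivial query builder are invented stand-ins for the group's mapping API, which the abstract does not describe in detail.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def build_sparql(klass, prop):
    """Stand-in for the real query-generation API: build a trivial SELECT query."""
    return (
        "SELECT ?s ?o WHERE {\n"
        f"  ?s a <{klass}> ;\n"
        f"     <{prop}> ?o .\n"
        "} LIMIT 100"
    )

@app.route("/sparql-query", methods=["GET"])
def sparql_query():
    klass = request.args.get("class")
    prop = request.args.get("property")
    if not klass or not prop:
        return jsonify(error="parameters 'class' and 'property' are required"), 400
    return jsonify(query=build_sparql(klass, prop))

if __name__ == "__main__":
    app.run(port=5000)
```

A client could then obtain the generated query with a plain HTTP GET, e.g. /sparql-query?class=<classURI>&property=<propertyURI>, and forward it to any SPARQL endpoint.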

Relevance: 10.00%

Abstract:

This final-degree project is framed within the activities of the GRyS (Grupo de Redes y Servicios de Próxima Generación) on Smart Grids. Current research on Smart Grids aims to achieve the following objectives: to integrate renewable energy sources effectively; to increase management efficiency by dynamically matching demand and supply; to reduce CO2 emissions by giving priority to green energy sources; to raise energy-consumption awareness by monitoring devices and services; and to stimulate the development of a leading-edge market for energy-efficient technologies with new business models. Within the context of Smart Grids, the interest of the GRyS extends mainly to the creation of semantic middleware and related technologies, such as service ontologies and semantic databases. The objective of this project has been to design and develop an application for devices running the Android operating system that implements a graphical interface and the methods needed to obtain and represent the service-registry information of a Service-Oriented Architecture (SOA) platform. The application allows users to: represent information about the services and devices registered in a Smart Grid; save, load and share by email HTML files with that information; show the location of the devices on a map; display measurements (voltage, temperature, etc.) in real time; apply filters by device identifier, model or manufacturer; run SPARQL queries against semantic databases; and save and load SPARQL queries as text files stored on the SD card. The application, developed in Java, is open source and makes use of standard, open technologies such as HTML, XML, SPARQL and RESTful services. It has been tested with the infrastructure of the European project e-Gotham (Sustainable-Smart Grid Open System for the Aggregated Control, Monitoring and Management of Energy), in which 17 partners from 5 countries participate: Spain, Italy, Estonia, Finland and Norway. This report details the study of the state of the art and the technologies used in the development of the project, the design, architecture and implementation of the application, and the tests performed and the results obtained.
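
The SPARQL feature listed above boils down to the standard SPARQL Protocol over HTTP; the sketch below shows the equivalent request in Python, with a placeholder endpoint URL and an invented property standing in for whatever ontology the registered semantic databases expose (the application itself issues the corresponding call from Java on Android).

```python
import json
import urllib.parse
import urllib.request

ENDPOINT = "http://example.org/sparql"     # placeholder semantic-database endpoint

query = """
SELECT ?device ?value WHERE {
  ?device <http://example.org/smartgrid/hasVoltage> ?value .   # invented property
} LIMIT 10
"""

url = ENDPOINT + "?" + urllib.parse.urlencode({"query": query})
req = urllib.request.Request(url, headers={"Accept": "application/sparql-results+json"})

with urllib.request.urlopen(req) as resp:
    results = json.load(resp)

for b in results["results"]["bindings"]:
    print(b["device"]["value"], b["value"]["value"])
```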