16 results for GRASP filtering
at Universidad Politécnica de Madrid
Abstract:
In this paper, we propose a particle filtering (PF) method for indoor tracking using radio frequency identification (RFID) based on aggregated binary measurements. We use an Ultra High Frequency (UHF) RFID system that is composed of a standard RFID reader, a large set of standard passive tags whose locations are known, and a newly designed, special semi-passive tag attached to the object being tracked. This semi-passive tag has the dual ability to sense the backscatter communication between the reader and other passive tags in its proximity and to communicate this sensed information to the reader using backscatter modulation. We refer to this tag as a sense-a-tag (ST). Thus, the ST can provide the reader with information that can be used to determine the kinematic parameters of the object to which the ST is attached. We demonstrate the performance of the method with data obtained in a laboratory environment.
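As a rough illustration of the kind of estimator this abstract describes, the sketch below runs one step of a bootstrap particle filter driven by binary proximity detections. The tag layout, detection model and noise levels are assumptions made for the example, not the authors' implementation.

```python
import numpy as np

# Hedged sketch: bootstrap particle filter weighted by binary detections of
# known passive tags (assumed model, not the paper's exact formulation).
rng = np.random.default_rng(0)

TAGS = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])  # known passive-tag locations (m)

def detection_prob(pos, tag, r0=1.0, k=4.0):
    """Assumed model: probability that the ST senses a tag's backscatter decays with distance."""
    d = np.linalg.norm(pos - tag, axis=-1)
    return 1.0 / (1.0 + np.exp(k * (d - r0)))

def pf_step(particles, velocities, detections, dt=0.5, q=0.05):
    # Propagate with a constant-velocity model plus process noise.
    velocities = velocities + q * rng.standard_normal(velocities.shape)
    particles = particles + dt * velocities
    # Weight each particle by the likelihood of the observed binary vector.
    w = np.ones(len(particles))
    for tag, z in zip(TAGS, detections):
        p = detection_prob(particles, tag)
        w *= p if z else (1.0 - p)
    w /= w.sum()
    # Multinomial resampling, then report the posterior-mean position estimate.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], velocities[idx], particles[idx].mean(axis=0)

# Usage: 500 particles initialised over an assumed starting area.
particles = rng.uniform(0.0, 2.0, size=(500, 2))
velocities = np.zeros((500, 2))
particles, velocities, estimate = pf_step(particles, velocities, detections=[1, 0, 0, 1])
print("estimated position:", estimate)
```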
Abstract:
This thesis proposes how to apply Semantic Web technologies to Idea Management Systems to deliver a solution to knowledge management and information overflow problems. Firstly, the aim is to present a model that introduces rich metadata annotations and their usage in the domain of Idea Management Systems. Furthermore, the thesis investigates how to link innovation data with information from other systems and use it to categorize and filter out the most valuable elements. In addition, the thesis presents a Generic Idea and Innovation Management Ontology (Gi2MO) and aims to back its creation with a set of case studies followed by evaluations that prove how the Semantic Web can work as a tool to create new opportunities and take contemporary Idea Management legacy systems to the next level.
Abstract:
Systems relying on fixed hardware components with a static level of parallelism can suffer from an underuse of logic resources, since they have to be designed for the worst-case scenario. This problem is especially important in video applications due to the emergence of new flexible standards, like Scalable Video Coding (SVC), which offer several levels of scalability. In this paper, Dynamic and Partial Reconfiguration (DPR) of modern FPGAs is used to achieve run-time variable parallelism, by using scalable architectures whose size can be adapted at run-time. Based on this proposal, a scalable Deblocking Filter (DF) core, compliant with the H.264/AVC and SVC standards, has been designed. This scalable DF allows run-time addition or removal of computational units working in parallel. Scalability is offered together with a scalable parallelization strategy at the macroblock (MB) level, such that when the size of the architecture changes, the MB filtering order is modified accordingly.
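To make the MB-level reordering idea concrete, the following sketch recomputes a macroblock processing schedule when the number of parallel filtering units changes, assuming the usual deblocking dependency that an MB waits for its left and top neighbours. It is an illustrative software model, not the paper's hardware design.

```python
# Hedged sketch: anti-diagonal wavefronts of macroblocks (MBs) are mutually
# independent under the left/top deblocking dependency, so each wavefront can
# be split over however many filtering units are currently instantiated.

def mb_schedule(mb_cols, mb_rows, units):
    """Return a list of time steps; each step lists (unit, (x, y)) assignments."""
    schedule = []
    for wave in range(mb_cols + mb_rows - 1):
        # MBs on the same anti-diagonal have all their left/top neighbours in earlier waves.
        ready = [(x, wave - x) for x in range(mb_cols) if 0 <= wave - x < mb_rows]
        # Split the wavefront into chunks of at most `units` MBs per time step.
        for i in range(0, len(ready), units):
            chunk = ready[i:i + units]
            schedule.append([(u, mb) for u, mb in enumerate(chunk)])
    return schedule

# Usage: the same 8x4-MB frame scheduled with 2 and then 4 parallel units.
print(len(mb_schedule(8, 4, 2)), "steps with 2 units")
print(len(mb_schedule(8, 4, 4)), "steps with 4 units")
```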
Abstract:
Idea Management Systems are web applications that implement the notion of open innovation through crowdsourcing. Typically, organizations use these kinds of systems to connect to large communities in order to gather ideas for improvement of products or services. Originating from simple suggestion boxes, Idea Management Systems advanced beyond collecting ideas and aspire to be a knowledge management solution capable of selecting the best ideas via collaborative as well as expert assessment methods. In practice, however, contemporary systems still face a number of problems, usually related to information overflow and to recognizing submissions of questionable quality within a reasonable allocation of time and effort. This thesis focuses on the idea assessment problem area and contributes a number of solutions that make it possible to filter, compare and evaluate ideas submitted into an Idea Management System. With respect to Idea Management System interoperability, the thesis proposes a theoretical model of the Idea Life Cycle and formalizes it as the Gi2MO ontology, which makes it possible to go beyond the boundaries of a single system and to compare and assess innovation in an organization-wide or market-wide context. Furthermore, based on the ontology, the thesis builds a number of solutions for improving idea assessment via community opinion analysis (MARL), annotation of idea characteristics (Gi2MO Types) and study of idea relationships (Gi2MO Links). The main achievements of the thesis are: the application of theoretical innovation models to the practice of Idea Management to successfully recognize the differentiation between communities; opinion metrics and their recognition as a new tool for idea assessment; and the discovery of new relationship types between ideas and their impact on idea clustering. Finally, an outcome of the thesis is the establishment of the Gi2MO Project, which serves as an incubator for Idea Management solutions and mature open-source software alternatives to the widely available commercial suites. From the academic point of view, the project delivers resources to undertake experiments in the Idea Management Systems area and has managed to become a forum that gathered a number of academic and industrial partners.
Abstract:
The design, construction and operation of the Calle 30 tunnels, the most important urban ring road in the city of Madrid, has posed a major challenge due to a multitude of characteristic factors, among which the geometric constraints, the surroundings of the road and the composition of the traffic are worth mentioning. This is reflected in the fact that, as soon as it was put into service, the infrastructure became an international reference. In the field of safety, starting from a global approach that considers the whole set of available measures, the most modern technologies available have been used, applying, in turn, appropriate working procedures and methodologies. In particular, the persistence in maintaining, as the main objective, the definition of homogeneous and coherent design criteria that would allow the complex tunnel network to be operated as a single infrastructure has been especially noteworthy. In the case of the ventilation system, meeting these objectives has required an enormous effort of coordination and harmonization of criteria which, together with the use of novel technologies, has been an exciting challenge. As a result, this article, starting from the presentation of the criteria associated with the conceptual solution, goes deeper into those aspects which, because of their novelty, are considered of interest to the reader.
Abstract:
As the use of recommender systems becomes more consolidated on the Net, an increasing need arises to develop some kind of evaluation framework for collaborative filtering measures and methods, one capable not only of testing the prediction and recommendation results, but also of covering other purposes which until now were considered secondary, such as novelty in the recommendations and the users' trust in them. This paper provides: (a) measures to evaluate the novelty of the users' recommendations and the trust in their neighborhoods, (b) equations that formalize and unify the collaborative filtering process and its evaluation, (c) a framework based on the above-mentioned elements that enables the evaluation of the quality results of any collaborative filtering applied to the desired recommender systems, using four graphs: quality of the predictions, of the recommendations, of the novelty and of the trust.
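A minimal sketch of the kind of quality measures such a framework unifies is given below; the concrete equations, rating scale and toy data are assumptions and may differ from those formalized in the paper.

```python
import numpy as np

# Hedged sketch of prediction, recommendation and novelty measures for a
# collaborative filter. `ratings` is an assumed user x item matrix (NaN = unrated).
ratings = np.array([[4.0, np.nan, 3.0, 5.0],
                    [5.0, 4.0, np.nan, 4.0],
                    [np.nan, 2.0, 4.0, np.nan]])

def mae(predictions, truth):
    """Prediction quality: mean absolute error over items present in both vectors."""
    mask = ~np.isnan(truth) & ~np.isnan(predictions)
    return np.abs(predictions[mask] - truth[mask]).mean()

def precision_at_n(recommended, relevant, n):
    """Recommendation quality: fraction of the top-n list the user actually liked."""
    return len(set(recommended[:n]) & set(relevant)) / n

def novelty(recommended, popularity):
    """Novelty proxy: recommending items that few users have rated scores higher."""
    return float(np.mean([1.0 - popularity[i] for i in recommended]))

# Usage with assumed toy values.
preds = np.array([4.2, np.nan, 2.5, np.nan])
popularity = {0: 0.9, 1: 0.5, 2: 0.7, 3: 0.2}   # fraction of users who rated each item
print(mae(preds, ratings[0]))
print(precision_at_n([3, 1], relevant=[3, 0], n=2), novelty([3, 1], popularity))
```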
Abstract:
Recommender systems play an important role in reducing the negative impact of information overload on those websites where users have the possibility of voting for their preferences on items. The most common technique for dealing with the recommendation mechanism is collaborative filtering, in which it is essential to discover the users most similar to the one to whom recommendations will be made. The hypothesis of this paper is that the results obtained by applying traditional similarity measures can be improved by taking contextual information, drawn from the entire body of users, and using it to calculate the singularity which exists, for each item, in the votes cast by each pair of users being compared. As such, the greater the singularity between the votes cast by two given users, the greater the impact this will have on the similarity. The results, tested on the MovieLens, Netflix and FilmAffinity databases, corroborate the excellent behaviour of the singularity measure proposed.
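The sketch below illustrates the general idea of weighting the agreement between two users by how singular (rare) their votes are within the whole community; the exact weighting, the like threshold and the rating scale are assumptions, not the paper's definition.

```python
import numpy as np

# Hedged sketch: similarity between two users where agreement on an item counts
# more when both users' votes are rare in the community (assumed 1..5 scale).
def singularity_similarity(u, v, ratings, like_threshold=4):
    """u, v: user row indices into `ratings` (NaN = unrated)."""
    sim, count = 0.0, 0
    for item in range(ratings.shape[1]):
        ru, rv = ratings[u, item], ratings[v, item]
        if np.isnan(ru) or np.isnan(rv):
            continue
        votes = ratings[:, item]
        votes = votes[~np.isnan(votes)]
        # Fraction of the community that liked the item.
        p_like = np.mean(votes >= like_threshold)
        # Singularity of each user's vote: rare votes are more informative.
        s_u = 1.0 - p_like if ru >= like_threshold else p_like
        s_v = 1.0 - p_like if rv >= like_threshold else p_like
        # Agreement on the item, scaled by how singular both votes are.
        agreement = 1.0 - abs(ru - rv) / 4.0
        sim += agreement * s_u * s_v
        count += 1
    return sim / count if count else 0.0

# Usage with a tiny assumed rating matrix.
R = np.array([[5, 4, np.nan, 1], [4, np.nan, 2, 1], [5, 5, 5, 5]], dtype=float)
print(singularity_similarity(0, 1, R))
```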
Abstract:
The new user cold start issue represents a serious problem in recommender systems as it can lead to the loss of new users who decide to stop using the system due to the lack of accuracy in the recommendations received in that first stage, in which they have not yet cast a significant number of votes with which to feed the recommender system's collaborative filtering core. For this reason it is particularly important to design new similarity metrics which provide greater precision in the results offered to users who have cast few votes. This paper presents a new similarity measure perfected using optimization based on neural learning, which exceeds the best results obtained with current metrics. The metric has been tested on the Netflix and MovieLens databases, obtaining important improvements in the measures of accuracy, precision and recall when applied to new user cold start situations. The paper includes the mathematical formalization describing how to obtain the main quality measures of a recommender system using leave-one-out cross validation.
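The leave-one-out procedure mentioned at the end of the abstract can be sketched as follows: each known rating is hidden in turn and re-predicted from the k most similar users. The plain correlation used here is only a stand-in for the learned metric, and the toy data are assumptions.

```python
import numpy as np

# Hedged sketch: leave-one-out evaluation of a k-NN collaborative filter.
def predict(ratings, sim, u, item, k=2):
    """Predict ratings[u, item] from the k most similar users who rated the item."""
    raters = [v for v in range(len(ratings)) if v != u and not np.isnan(ratings[v, item])]
    raters.sort(key=lambda v: sim[u, v], reverse=True)
    top = raters[:k]
    if not top:
        return np.nan
    w = np.array([max(sim[u, v], 0.0) for v in top])
    r = np.array([ratings[v, item] for v in top])
    return r.mean() if w.sum() == 0 else float(np.dot(w, r) / w.sum())

def leave_one_out_mae(ratings, sim, k=2):
    errors = []
    for u in range(ratings.shape[0]):
        for item in range(ratings.shape[1]):
            if np.isnan(ratings[u, item]):
                continue
            held_out = ratings[u, item]
            ratings[u, item] = np.nan              # hide the rating
            pred = predict(ratings, sim, u, item, k)
            ratings[u, item] = held_out            # restore it
            if not np.isnan(pred):
                errors.append(abs(pred - held_out))
    return float(np.mean(errors))

# Usage with an assumed toy matrix and a crude stand-in similarity matrix.
R = np.array([[5, 3, 4, np.nan], [4, 3, np.nan, 2], [1, 2, 2, 5]], dtype=float)
S = np.corrcoef(np.nan_to_num(R, nan=3.0))
print("LOO MAE:", leave_one_out_mae(R, S, k=2))
```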
Abstract:
Collaborative filtering recommender systems contribute to alleviating the problem of information overload that exists on the Internet as a result of the mass use of Web 2.0 applications. The use of an adequate similarity measure becomes a determining factor in the quality of the prediction and recommendation results of the recommender system, as well as in its performance. In this paper, we present a memory-based collaborative filtering similarity measure that provides extremely high-quality and balanced results; these results are complemented with a low processing time (high performance), similar to the one required to execute traditional similarity metrics. The experiments have been carried out on the MovieLens and Netflix databases, using a representative set of information retrieval quality measures.
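For illustration, the fragment below shows one common memory-based construction that keeps the per-pair computation cheap, combining the overlap of rated items with the closeness of the co-rated votes; it is not necessarily the measure proposed in the paper.

```python
import numpy as np

# Hedged sketch: a fast, memory-based similarity combining Jaccard overlap with
# a normalized mean squared difference of co-rated votes (assumed 1..5 scale).
def jaccard_msd_similarity(ru, rv, r_max=5.0):
    rated_u, rated_v = ~np.isnan(ru), ~np.isnan(rv)
    common = rated_u & rated_v
    if not common.any():
        return 0.0
    jaccard = common.sum() / (rated_u | rated_v).sum()
    msd = np.mean(((ru[common] - rv[common]) / r_max) ** 2)
    return float(jaccard * (1.0 - msd))

# Usage on two assumed rating vectors (NaN = unrated).
u = np.array([5, np.nan, 4, 1, np.nan])
v = np.array([4, 2, np.nan, 1, np.nan])
print(jaccard_msd_similarity(u, v))
```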
Abstract:
The understanding of embryogenesis in living systems requires reliable quantitative analysis of cell migration throughout all the stages of development. This is a major challenge of "in-toto" reconstruction based on different modalities of "in-vivo" imaging techniques, given their limits in spatio-temporal resolution and the presence of image artifacts and noise. Several methods for cell tracking are available, but expensive manual interaction (in time and human resources) is always required to enforce coherence. Because of this limitation it is necessary to restrict the experiments or to assume an uncontrolled error rate. Is it possible to obtain automated, reliable measurements of migration? Can we provide a seed for biologists to complete cell lineages efficiently? We propose a filtering technique that considers trajectories as spatio-temporal connected structures and, by using multi-dimensional morphological operators, prunes out those that might introduce noise and false positives.
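The sketch below captures the core idea in a simplified form: detections are stacked into a binary (t, y, x) volume, 3-D connected components are labeled, and components with too short a temporal extent are pruned. The connectivity and threshold are assumptions, not the paper's exact morphological operators.

```python
import numpy as np
from scipy import ndimage

# Hedged sketch: treat trajectories as spatio-temporal connected structures and
# prune those that do not persist long enough to be plausible migration tracks.
def prune_short_tracks(volume, min_frames=3):
    """volume: boolean array of shape (t, y, x) with detected cell positions."""
    structure = ndimage.generate_binary_structure(rank=3, connectivity=3)
    labels, n = ndimage.label(volume, structure=structure)
    keep = np.zeros_like(volume, dtype=bool)
    for lab in range(1, n + 1):
        component = labels == lab
        # Temporal extent: number of frames the connected structure spans.
        frames = np.unique(np.nonzero(component)[0])
        if len(frames) >= min_frames:
            keep |= component
    return keep

# Usage: a toy 5-frame volume with one isolated blip (likely noise) and one
# detection that persists, connected, across four frames.
vol = np.zeros((5, 8, 8), dtype=bool)
vol[0, 2, 2] = True                       # isolated blip, pruned
for t in range(1, 5):
    vol[t, 4, 3 + (t - 1) % 2] = True     # persistent connected track, kept
print(prune_short_tracks(vol).sum(), "voxels kept")
```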
Abstract:
Recommender systems are a type of solution to the information overload problem suffered by users of websites on which they can rate certain items. The Collaborative Filtering Recommender System is considered to be the most successful approach, as it makes its recommendations based on the votes of users similar to an active user. Nevertheless, the traditional collaborative filtering method selects insufficiently representative users as neighbors of each active user. This means that the recommendations made a posteriori are not precise enough. The method proposed in this thesis performs a pre-filtering of the process, by using Pareto dominance, which eliminates the less representative users from the k-neighbor selection process and keeps the most promising ones. The results from the experiments performed on MovieLens and Netflix show a significant improvement in all the quality measures studied on applying the proposed method.
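A minimal sketch of Pareto-dominance pre-filtering before k-neighbor selection follows; describing each candidate neighbor by its per-item absolute rating differences with the active user is an assumption made for the example.

```python
import numpy as np

# Hedged sketch: discard candidate neighbors that are Pareto-dominated by
# another candidate on every comparison criterion before running k-NN selection.
def criteria(active, candidate):
    """Absolute rating differences on items both users rated (lower is better)."""
    common = ~np.isnan(active) & ~np.isnan(candidate)
    return np.abs(active[common] - candidate[common]), common

def dominates(a, b):
    """True if criteria vector a is no worse than b everywhere and better somewhere."""
    return np.all(a <= b) and np.any(a < b)

def pareto_prefilter(ratings, active_idx):
    active = ratings[active_idx]
    candidates = [u for u in range(len(ratings)) if u != active_idx]
    survivors = []
    for u in candidates:
        cu, common_u = criteria(active, ratings[u])
        dominated = False
        for v in candidates:
            if v == u:
                continue
            cv, common_v = criteria(active, ratings[v])
            # Compare only candidates covering the same co-rated items.
            if np.array_equal(common_u, common_v) and dominates(cv, cu):
                dominated = True
                break
        if not dominated:
            survivors.append(u)
    return survivors

# Usage with an assumed toy matrix: user 0 is the active user.
R = np.array([[5, 3, 4], [5, 3, 5], [1, 1, 1], [5, 3, 4]], dtype=float)
print(pareto_prefilter(R, 0))   # users 1 and 2 are dominated by user 3 and dropped
```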
Abstract:
One of the main concerns of evolvable and adaptive systems is the need for a training mechanism, which is normally provided by using a training reference and a test input. The fitness function to be optimized during the evolution (training) phase is obtained by comparing the output of the candidate systems against the reference. The adaptivity that this type of system may provide by re-evolving during operation is especially important for applications with runtime variable conditions. However, fully automated self-adaptivity poses additional problems. For instance, in some cases it is not possible to have such a reference, because the changes in the environment conditions are unknown, so it becomes difficult to autonomously identify which problem needs to be solved and, hence, which conditions are representative for an adequate re-evolution. In this paper, a solution to remove this dependency is presented and analyzed. The system consists of an image filter application mapped on an evolvable hardware platform, able to evolve using two consecutive frames from a camera as test and reference images. The system is entirely mapped on an FPGA, and native dynamic and partial reconfiguration is used for evolution. It is also shown that using such images, both of them noisy, as input and reference images in the evolution phase of the system is equivalent to or even better than evolving the filter with offline images. The combination of both techniques results in the completely autonomous, noise type/level-agnostic filtering system, with no reference-image requirement, described throughout the paper.
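The self-referencing evaluation idea can be sketched as below: a candidate filter is scored by filtering one noisy frame and comparing the result against the next noisy frame of the same (assumed static) scene. The box filter and the fitness definition are illustrative stand-ins for the evolved hardware filters.

```python
import numpy as np

# Hedged sketch: fitness of a candidate filter computed without a clean reference,
# using two consecutive noisy frames of an assumed static scene.
def mean_filter(img, k=3):
    """Candidate filter stand-in: k x k box blur with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fitness(candidate_filter, frame_t, frame_t1):
    """Higher is better: the filtered frame t should resemble the next noisy frame."""
    return -np.mean(np.abs(candidate_filter(frame_t) - frame_t1))

# Usage: two consecutive noisy observations of the same assumed static scene.
rng = np.random.default_rng(1)
scene = np.tile(np.linspace(0, 255, 32), (32, 1))
frame_t = scene + rng.normal(0, 20, scene.shape)
frame_t1 = scene + rng.normal(0, 20, scene.shape)
print("fitness of candidate:", fitness(mean_filter, frame_t, frame_t1))
print("fitness of identity: ", fitness(lambda x: x.astype(float), frame_t, frame_t1))
```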
Abstract:
Traditionally, the focus in developing a software architecture has been on the components, relegating the forms of interaction between those components, the connectors, to the background. However, for a system to work correctly it is necessary to devote as much attention to the connectors as to the components. In this work we present a study of the ArchStudio 3.0 tool. The analysis has focused on the tool's capabilities for supporting communication between components through message passing. Corrections have been made to its code, some of its elements have been redesigned to improve efficiency, and the C2 filtering policy known as message filtering has been designed and implemented.
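For illustration only, the sketch below shows a message-filtering policy applied at a connector, in the general spirit of C2-style message filtering; the class and matching rule are assumptions, not ArchStudio's actual API.

```python
# Hedged sketch: a connector that forwards only messages allowed by its
# filtering policy to the attached components (illustrative, hypothetical names).
class FilteringConnector:
    def __init__(self, allowed_names):
        self.allowed_names = set(allowed_names)
        self.sinks = []

    def attach(self, component):
        """Register a component callback that should receive forwarded messages."""
        self.sinks.append(component)

    def route(self, message):
        """Forward the message only if its name passes the filtering policy."""
        if message["name"] in self.allowed_names:
            for sink in self.sinks:
                sink(message)

# Usage: only 'temperature' notifications reach the subscriber.
received = []
conn = FilteringConnector(allowed_names={"temperature"})
conn.attach(received.append)
conn.route({"name": "temperature", "value": 21.5})
conn.route({"name": "debug", "value": "ignored"})
print(received)
```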
Abstract:
In this work we address a scenario where 3D content is transmitted to a mobile terminal with 3D display capabilities. We consider the use of 2D plus depth format to represent the 3D content and focus on the generation of synthetic views in the terminal. We evaluate different types of smoothing filters that are applied to depth maps with the aim of reducing the disoccluded regions. The evaluation takes into account the reduction of holes in the synthetic view as well as the presence of geometrical distortion caused by the smoothing operation. The selected filter has been included within an implemented module for the VideoLan Client (VLC) software in order to render 3D content from the 2D plus depth data format.
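The effect being evaluated can be sketched as follows: smoothing the depth map before a simple depth-image-based row warp reduces sharp depth discontinuities and therefore the number of disoccluded (hole) pixels in the synthetic view. The warp and disparity scale are illustrative assumptions, not the paper's rendering pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged sketch: count hole pixels after a 1-D depth-proportional pixel shift,
# with and without smoothing the depth map first.
def count_holes(depth, disparity_scale=0.1):
    """Warp each row horizontally by a depth-proportional disparity; count unfilled pixels."""
    h, w = depth.shape
    filled = np.zeros((h, w), dtype=bool)
    disparity = np.round(disparity_scale * depth).astype(int)
    for y in range(h):
        for x in range(w):
            xt = x + disparity[y, x]
            if 0 <= xt < w:
                filled[y, xt] = True
    return int((~filled).sum())

# Assumed toy depth map: a near foreground object over a far background.
depth = np.full((32, 64), 20.0)
depth[8:24, 20:40] = 80.0
print("holes without smoothing:", count_holes(depth))
print("holes with smoothing:   ", count_holes(gaussian_filter(depth, sigma=3)))
```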
Abstract:
This paper presents the implementation of a robust grasp mapping between a 3-finger haptic device (master) and a robotic hand (slave). The mapping is based on a grasp equivalence defined considering the manipulation capabilities of the master and slave devices. The metrics that translate the human hand gesture to the robotic hand workspace are obtained through an analytical user study, which allows a natural control of the robotic hand. The grasp mapping is accomplished by defining 4 control modes that encapsulate all the grasp gestures considered.
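A toy sketch of mapping a master gesture to a slave hand through a small set of control modes is shown below; the mode names, the selection features and the joint gains are all hypothetical, since the abstract does not list them.

```python
from enum import Enum

# Hedged sketch: select one of four control modes from simple features of the
# master gesture, then scale the closure command per mode (all values assumed).
class ControlMode(Enum):
    PRECISION = 1     # fingertip grasp
    CYLINDRICAL = 2   # power grasp around an object
    LATERAL = 3       # thumb against the side of the index finger
    HOOK = 4          # fingers flexed, thumb passive

def select_mode(thumb_index_gap_mm, palm_contact):
    """Pick a control mode from assumed features of the master gesture."""
    if palm_contact:
        return ControlMode.CYLINDRICAL
    if thumb_index_gap_mm < 30:
        return ControlMode.PRECISION
    if thumb_index_gap_mm < 60:
        return ControlMode.LATERAL
    return ControlMode.HOOK

def map_to_slave(mode, master_closure):
    """Scale the normalized master closure (0 open .. 1 closed) into per-finger slave targets."""
    gains = {ControlMode.PRECISION: (0.6, 0.6, 0.6),
             ControlMode.CYLINDRICAL: (1.0, 1.0, 1.0),
             ControlMode.LATERAL: (1.0, 0.3, 0.0),
             ControlMode.HOOK: (0.0, 1.0, 1.0)}
    return tuple(g * master_closure for g in gains[mode])

# Usage with assumed sensor readings from the haptic master.
mode = select_mode(thumb_index_gap_mm=25, palm_contact=False)
print(mode, map_to_slave(mode, master_closure=0.8))
```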