50 results for knowledge based development


Relevance: 80.00%

Abstract:

Presenting relevant information via web-based, user-friendly interfaces makes the information more accessible to the general public. This is especially useful for sensor networks that monitor natural environments. Adequately communicating this type of information helps increase awareness about the limited availability of natural resources and promotes their better use through sustainable practices. In this paper, I suggest an approach to communicating this information to wide audiences based on simulating data journalism using artificial intelligence techniques. I analyze this approach by describing a pioneering knowledge-based system called VSAIH, which looks for news in hydrological data from a national sensor network in Spain and creates news stories that general users can understand. VSAIH integrates artificial intelligence techniques, including a model-based data analyzer and a presentation planner. In the paper, I also describe characteristics of the national hydrological sensor network and the technical solutions applied by VSAIH to simulate data journalism.

Relevance: 80.00%

Abstract:

This paper describes the multi-agent organization of a computer system designed to assist operators in decision making during emergencies. The application was developed for the case of emergencies caused by river floods. It operates in real time, receiving data recorded by sensors (rainfall, water levels, flows, etc.), and applies multi-agent techniques to interpret the data, predict future behavior and recommend control actions. The system includes an advanced knowledge-based architecture with multiple symbolic representations and uncertainty models (Bayesian networks). The system has been applied and validated at two particular sites in Spain (the Jucar basin and the South basin).
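To fix ideas about the kind of uncertainty model mentioned above, the following minimal Python sketch computes a posterior flood probability from rainfall and water-level evidence by enumeration over a tiny hypothetical Bayesian network. The variables, structure and probabilities are invented for illustration and are not taken from the system described here.

    # Hypothetical illustration of Bayesian reasoning over sensor evidence.
    # The network structure and the numbers are invented, not the system's model.
    p_rain = 0.2                                   # P(heavy_rain)
    p_level_given_rain = {True: 0.7, False: 0.1}   # P(high_level | heavy_rain)
    p_flood_given = {                              # P(flood | heavy_rain, high_level)
        (True, True): 0.6, (True, False): 0.15,
        (False, True): 0.2, (False, False): 0.01,
    }

    def posterior_flood(observed_high_level: bool) -> float:
        """P(flood | high_level = observed), enumerating over heavy_rain."""
        num = 0.0   # accumulates P(flood, high_level)
        den = 0.0   # accumulates P(high_level)
        for rain in (True, False):
            p_r = p_rain if rain else 1.0 - p_rain
            p_l = (p_level_given_rain[rain] if observed_high_level
                   else 1.0 - p_level_given_rain[rain])
            den += p_r * p_l
            num += p_r * p_l * p_flood_given[(rain, observed_high_level)]
        return num / den

    print(round(posterior_flood(True), 3))   # ~0.455 with these made-up numbers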

Relevance: 80.00%

Abstract:

Some recent proposals for web-based applications aim to provide advanced search services through virtual shops. Within this context, this paper proposes an advanced type of software application that simulates how a sales assistant dialogues with a consumer to dynamically configure a product according to particular needs. The paper presents the general knowledge model that uses artificial intelligence and knowledge-based techniques to simulate the configuration process. Finally, the paper illustrates the description with an example of an application in the field of photography equipment.

Relevance: 80.00%

Abstract:

The new-user cold start issue is a serious problem in recommender systems, as it can lead to the loss of new users who decide to stop using the system because of the poor accuracy of the recommendations received in that first stage, in which they have not yet cast a significant number of votes with which to feed the recommender system's collaborative filtering core. For this reason it is particularly important to design new similarity metrics that provide greater precision in the results offered to users who have cast few votes. This paper presents a new similarity measure, perfected using optimization based on neural learning, which exceeds the best results obtained with current metrics. The metric has been tested on the Netflix and MovieLens databases, obtaining important improvements in accuracy, precision and recall when applied to new-user cold start situations. The paper includes the mathematical formalization describing how to obtain the main quality measures of a recommender system using leave-one-out cross-validation.
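The learned similarity metric itself is not reproduced here; purely as an illustration of the surrounding machinery, the following Python sketch runs a leave-one-out evaluation of a user-based collaborative filter on a toy ratings matrix, with plain cosine similarity standing in for the paper's neural-optimized measure. All data and names are made up.

    # Toy leave-one-out evaluation of a user-based collaborative filter.
    # Cosine similarity is only a placeholder for the paper's learned metric.
    import numpy as np

    R = np.array([            # rows = users, columns = items, 0 = unrated
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
        [0, 1, 4, 5],
    ], dtype=float)

    def cosine(u, v):
        mask = (u > 0) & (v > 0)              # compare only co-rated items
        if not mask.any():
            return 0.0
        a, b = u[mask], v[mask]
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def predict(user, item):
        """Similarity-weighted prediction of one held-out rating."""
        num = den = 0.0
        for other in range(R.shape[0]):
            if other == user or R[other, item] == 0:
                continue
            s = cosine(R[user], R[other])
            num += s * R[other, item]
            den += abs(s)
        return num / den if den else 0.0

    errors = []
    for u, i in zip(*np.nonzero(R)):          # leave-one-out over known ratings
        held, R[u, i] = R[u, i], 0.0          # hide the rating, predict, restore
        errors.append(abs(predict(u, i) - held))
        R[u, i] = held
    print("toy MAE:", round(float(np.mean(errors)), 3))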

Relevance: 80.00%

Abstract:

In a previous paper, we proposed an axiomatic model for measuring self-contradiction in the framework of Atanassov fuzzy sets. In this way, contradiction measures that are semicontinuous and completely semicontinuous, from both below and above, were defined. Although some examples were given, the problem of finding families of functions satisfying the different axioms remained open. The purpose of this paper is to construct some families of contradiction measures, first using continuous t-norms and t-conorms, and second by means of strong negations. In both cases, we study the properties that they satisfy. These families are then classified according to the different kinds of measures presented in the earlier paper.
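As background, the following Python snippet lists the standard building blocks the construction above refers to (the minimum, product and Lukasiewicz t-norms, their dual t-conorms, and the standard strong negation); the contradiction measures themselves are defined in the paper and are not reproduced here.

    # Standard fuzzy connectives used as building blocks; the contradiction
    # measures built from them in the paper are not reproduced here.
    def t_min(x, y):      return min(x, y)               # minimum t-norm
    def t_prod(x, y):     return x * y                   # product t-norm
    def t_luk(x, y):      return max(0.0, x + y - 1.0)   # Lukasiewicz t-norm

    def s_max(x, y):      return max(x, y)               # dual t-conorms
    def s_probsum(x, y):  return x + y - x * y
    def s_luk(x, y):      return min(1.0, x + y)

    def neg_standard(x):  return 1.0 - x                 # strong negation: involutive,
                                                         # decreasing, N(0)=1, N(1)=0

    # Atanassov (intuitionistic) fuzzy value: membership mu and non-membership nu
    # with mu + nu <= 1.
    mu, nu = 0.3, 0.5
    print(t_min(mu, nu), s_luk(mu, nu), neg_standard(mu))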

Relevance: 80.00%

Abstract:

In this paper, we propose a solution to an NP-complete problem, namely the "3-colorability problem", based on a network of polarized processors. Our solution is uniform and time efficient.
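The polarized-processor construction is not sketched here; purely to fix the underlying decision problem, the following Python snippet is a naive exponential 3-colorability check on small example graphs.

    # Naive exponential check of 3-colorability (the decision problem itself),
    # not the polarized-processor construction proposed in the paper.
    from itertools import product

    def is_3_colorable(n_vertices, edges):
        for colouring in product(range(3), repeat=n_vertices):
            if all(colouring[u] != colouring[v] for u, v in edges):
                return True
        return False

    # A 4-cycle is 3-colorable; the complete graph K4 is not.
    print(is_3_colorable(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))                  # True
    print(is_3_colorable(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))  # False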

Relevance: 80.00%

Abstract:

Dominance measuring methods are a new approach to deal with complex decision-making problems with imprecise information. These methods are based on the computation of pairwise dominance values and exploit the information in the dominance matrix in different ways to derive measures of dominance intensity and rank the alternatives under consideration. In this paper we propose a new dominance measuring method to deal with ordinal information about decision-maker preferences in both weights and component utilities. It takes advantage of the centroid of the polytope delimited by ordinal information and builds triangular fuzzy numbers whose distances to the crisp value 0 constitute the basis for the definition of a dominance intensity measure. Monte Carlo simulation techniques have been used to compare the performance of this method with other existing approaches.
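As a rough illustration of the last step, the Python sketch below defines a triangular fuzzy number and one common crisp distance to 0 (the root mean square of its three vertices), then orders two invented alternatives by that value. The exact distance and dominance intensity measure used in the paper may differ.

    # Triangular fuzzy number (a <= b <= c) and one common distance to the crisp
    # value 0; the exact definitions used in the paper may differ.
    from dataclasses import dataclass
    from math import sqrt

    @dataclass
    class TFN:
        a: float  # lower vertex
        b: float  # modal value
        c: float  # upper vertex

        def distance_to_zero(self) -> float:
            # distance to 0, with 0 seen as the degenerate TFN (0, 0, 0)
            return sqrt((self.a ** 2 + self.b ** 2 + self.c ** 2) / 3.0)

    # Order invented alternatives by this score (illustrative only).
    alternatives = {"A1": TFN(-0.1, 0.2, 0.5), "A2": TFN(0.0, 0.1, 0.2)}
    for name, tfn in sorted(alternatives.items(),
                            key=lambda kv: kv[1].distance_to_zero(), reverse=True):
        print(name, round(tfn.distance_to_zero(), 3))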

Relevance: 80.00%

Abstract:

This article describes a knowledge-based application in the domain of road traffic management that we have developed following a knowledge modeling approach and the notion of problem-solving method. The article first presents a domain-independent model for real-time decision support as a structured collection of problem-solving methods. It then describes how this general model is used to develop an operational version for the domain of traffic management. For this purpose, a particular knowledge modeling tool, called KSM (Knowledge Structure Manager), was applied. Finally, the article shows an application developed for a traffic network of the city of Madrid and compares it with a second application developed for a different traffic area of the city of Barcelona.

Relevance: 80.00%

Abstract:

The aim of this paper is to describe an intelligent system for real-time road traffic control. The purpose of the system is to help traffic engineers select the state of traffic control devices in real time, using data recorded by traffic detectors on motorways. The system follows an advanced knowledge-based approach that implements an abstract generic problem-solving method, called propose-and-revise, which was proposed in Artificial Intelligence, within the knowledge engineering field, as a standard cognitive structure for solving configuration design problems. The paper presents the knowledge model of the system together with its inference strategy and describes how it was applied to the M-40 urban ring of the city of Madrid.
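The following Python sketch shows the generic shape of a propose-and-revise loop (propose default values, check constraints, apply a repair rule when a constraint is violated) on a made-up toy problem; the parameters, constraints and fixes of the actual traffic-control knowledge base are not shown here.

    # Generic propose-and-revise skeleton on a made-up toy problem; the real
    # system's parameters, constraints and fixes are domain knowledge.
    proposals = {"speed_limit": 100, "lane_closures": 0}   # default proposals

    def constraints(cfg):
        violations = []
        if cfg["speed_limit"] > 80 and cfg["lane_closures"] > 0:
            violations.append("speed too high for closed lanes")
        return violations

    fixes = {  # repair actions tried when a constraint is violated
        "speed too high for closed lanes":
            lambda cfg: cfg.update(speed_limit=cfg["speed_limit"] - 20),
    }

    def propose_and_revise(cfg, max_revisions=10):
        for _ in range(max_revisions):
            violated = constraints(cfg)
            if not violated:
                return cfg                  # consistent configuration found
            fixes[violated[0]](cfg)         # revise: repair the first violation
        raise RuntimeError("no consistent configuration within the revision budget")

    cfg = dict(proposals, lane_closures=1)  # scenario: one lane closed
    print(propose_and_revise(cfg))          # {'speed_limit': 80, 'lane_closures': 1}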

Relevance: 80.00%

Abstract:

This paper describes a knowledge-based approach for summarizing and presenting the behavior of hydrologic networks. This approach has been designed for visualizing data from sensors and simulations in the context of emergencies caused by floods. It follows a solution for event summarization that exploits physical properties of the dynamic system to automatically generate summaries of relevant data. The summarized information is presented using different modes such as text, 2D graphics and 3D animations on virtual terrains. The presentation is automatically generated using a hierarchical planner with abstract presentation fragments corresponding to discourse patterns, taking into account the characteristics of the user who receives the information and constraints imposed by the communication devices (mobile phone, computer, fax, etc.). An application following this approach has been developed for a national hydrologic information infrastructure of Spain.
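As a toy illustration of this kind of planning, the Python sketch below expands an abstract discourse pattern into presentation fragments and keeps only those the target device can render; the pattern names and device capabilities are invented.

    # Toy hierarchical planner: expand an abstract discourse pattern into
    # fragments and keep those the target device can render. All names invented.
    patterns = {
        "flood_report": ["headline", "evolution", "forecast"],
        "evolution":    ["text_summary", "level_chart", "terrain_animation"],
        "forecast":     ["text_summary"],
    }
    renderable = {                      # media each device can handle
        "mobile_phone": {"headline", "text_summary"},
        "computer":     {"headline", "text_summary", "level_chart", "terrain_animation"},
    }

    def plan(pattern, device):
        if pattern not in patterns:     # leaf fragment: keep it if renderable
            return [pattern] if pattern in renderable[device] else []
        steps = []
        for sub in patterns[pattern]:   # recursively expand sub-patterns in order
            steps.extend(plan(sub, device))
        return steps

    print(plan("flood_report", "mobile_phone"))  # ['headline', 'text_summary', 'text_summary']
    print(plan("flood_report", "computer"))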

Relevance: 80.00%

Abstract:

Effective data summarization methods that use AI techniques can help humans understand large sets of data. In this paper, we describe a knowledge-based method for automatically generating summaries of geospatial and temporal data, i.e., data with geographical and temporal references. The method is useful for summarizing data streams, such as GPS traces and traffic information, that are becoming more prevalent with the increasing use of sensors in computing devices. The method presented here is an initial architecture for our ongoing research in this domain. In this paper we describe the data representations we have designed for our method and our implementations of the components that perform data abstraction and natural language generation. We also discuss evaluation results that show the ability of our method to generate certain types of geospatial and temporal descriptions.
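A minimal Python sketch of the two steps named above, on invented data: a data abstraction step that detects stops in a GPS trace, and a template-based sentence generator for the abstracted events. The paper's actual representations and generator are richer than this.

    # Minimal data abstraction over a made-up GPS trace plus a template sentence;
    # the paper's data representations and generator are richer than this.
    trace = [  # (minute, latitude, longitude) -- invented sample points
        (0, 40.4168, -3.7038), (5, 40.4168, -3.7038),
        (10, 40.4169, -3.7038), (15, 40.4300, -3.6900),
    ]

    def detect_stops(points, eps=0.0005, min_minutes=5):
        """Abstraction: runs of contiguous points within eps degrees of the first."""
        stops, start = [], 0
        for i in range(1, len(points) + 1):
            moved = i == len(points) or (
                abs(points[i][1] - points[start][1]) > eps
                or abs(points[i][2] - points[start][2]) > eps)
            if moved:
                duration = points[i - 1][0] - points[start][0]
                if duration >= min_minutes:
                    stops.append((points[start], duration))
                start = i
        return stops

    def describe(stops):
        """Generation: fill a fixed sentence template per abstracted event."""
        return [f"The vehicle remained near ({p[1]:.4f}, {p[2]:.4f}) "
                f"for about {d} minutes." for p, d in stops]

    print("\n".join(describe(detect_stops(trace))))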

Relevance: 80.00%

Abstract:

Modern sensor technologies and simulators applied to large and complex dynamic systems (such as road traffic networks, sets of river channels, etc.) produce large amounts of behavior data that are difficult for users to interpret and analyze. Software tools that generate presentations combining text and graphics can help users understand this data. In this paper we describe the results of our research on automatic multimedia presentation generation (including text, graphics, maps, images, etc.) for interactive exploration of behavior datasets. We designed a novel user interface that combines automatically generated text and graphical resources. We describe the general knowledge-based design of our presentation generation tool. We also present applications that we developed to validate the method, and a comparison with related work.

Relevance: 80.00%

Abstract:

In this paper, we analyze the performance of several well-known pattern recognition and dimensionality reduction techniques when applied to mass-spectrometry data for odor biometric identification. Motivated by the successful results of previous works capturing the odor from other parts of the body, this work evaluates the feasibility of identifying people by the odor emanating from their hands. Formulated as a machine learning task, this is a small-sample-size supervised classification problem in which the input data consist of mass spectrograms of the hand odor of 13 subjects captured in different sessions. The high dimensionality of the data makes it necessary to apply feature selection and extraction techniques together with a simple classifier in order to improve the generalization capabilities of the model. Our experimental results achieve recognition rates above 85%, which reveals that there is discriminatory information in hand odor and points to body odor as a promising biometric identifier.
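The general shape of such an experiment can be reproduced with a standard scikit-learn pipeline; the sketch below combines scaling, PCA and a nearest-neighbour classifier with cross-validation on synthetic stand-in data (the real mass spectrograms are not available here, and the paper evaluates several other techniques as well).

    # Generic shape of the experiment: dimensionality reduction + simple classifier
    # with cross-validation, on synthetic stand-in data (not the real spectrograms).
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score, StratifiedKFold

    rng = np.random.default_rng(0)
    n_subjects, samples_per_subject, n_features = 13, 6, 500
    X = np.vstack([rng.normal(loc=i, scale=1.0, size=(samples_per_subject, n_features))
                   for i in range(n_subjects)])          # high-dimensional, few samples
    y = np.repeat(np.arange(n_subjects), samples_per_subject)

    model = make_pipeline(StandardScaler(), PCA(n_components=10),
                          KNeighborsClassifier(n_neighbors=1))
    scores = cross_val_score(model, X, y,
                             cv=StratifiedKFold(n_splits=3, shuffle=True, random_state=0))
    print("mean recognition rate:", round(scores.mean(), 3))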

Relevance: 80.00%

Abstract:

Background: This research addresses primarily the replication, and secondarily the synthesis, of software engineering (SE) experiments. Replication is impossible without access to all the details of the original experiment, but the description of experiments is usually incomplete because knowledge is tacit, there is no standard reporting format, and there are hardly any tools to support the generation of experimental reports. This means that the original experiment cannot be reproduced exactly. These issues place considerable constraints on experimenters' options for carrying out replications and, ultimately, for synthesizing experiments.

Aim: The aim of the research is to formalize the SE experimental process in order to facilitate the communication of information among experimenters.

Context: This PhD research was developed within the empirical software engineering research group (GrISE) at the Universidad Politécnica de Madrid (UPM)'s School of Computer Engineering (ETSIINF), as part of project TIN2011-23216, "Technologies for Software Engineering Experiment Replication and Synthesis", funded by the Spanish Government. The GrISE research group fulfils all the necessary requirements (an established family of experiments with at least three experimental lines, and lengthy replication experience, with 16 replications prior to 2011 in the software testing techniques line) and provides favourable conditions for the research to be conducted in the best possible way, such as full access to its information.

Research Method: We opted for action research (AR) as the research method best suited to the characteristics of the investigation. Results were generated through successive rounds of AR addressing specific communication problems among experimenters.

Results: The conceptual model of the experimental cycle was formalized from the viewpoint of the three key roles that experimenters play in the experimental process: research manager, experiment manager and senior experimenter. The model of the experimental cycle was formalized by means of a workflow and a process diagram. In tandem with the formalization of the SE experimental process, ISRE (Infrastructure for Sharing and Replicating Experiments), a proof of concept of an SE experimentation support environment, was developed. Finally, guidelines for developing SE experimentation support environments were designed, based on the study of the key features that the models of experimentation support tools for different experimental disciplines have in common.

Conclusions: The key contribution of this research is the formalization of the SE experimental process. GrISE experimenters were satisfied with both the models representing the formalization of the experimental cycle and the ISRE tool built to evaluate the models. To further validate the formalization, this study should be replicated at other research groups representative of the experimental SE community.

Future Research Lines: The achievement of the aims and the resulting findings have led to new research lines: (1) assess the feasibility of building a mechanism to help experimenters collaboratively make their tacit knowledge explicit, based on debate and consensus, (2) continue empirical research at the same research group in order to cover the remainder of the experimental cycle (for example, new experiments, synthesis of results, etc.), (3) replicate the research process at other empirical SE research groups, and (4) update the technology of the proof of concept so that it meets the constraints and needs of a real research environment.

Relevance: 80.00%

Abstract:

In this paper, we axiomatically introduce fuzzy multi-measures on bounded lattices. In particular, we distinguish four different types of fuzzy set multi-measures on a universe X, considering both the usual and the inverse real-number ordering of this lattice, and increasing or decreasing monotonicity with respect to the number of arguments. We provide results from which we can derive families of measures that satisfy the applicable conditions in each case.
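One plausible reading of the two monotonicity conditions, assuming a fuzzy multi-measure M maps finite tuples of fuzzy subsets of X to [0,1], is sketched in LaTeX below; the paper's exact axioms may differ.

    % Illustrative only: one possible form of monotonicity with respect to
    % the number of arguments, for a multi-measure M on tuples of fuzzy sets.
    \[
      M(A_1,\dots,A_n) \le M(A_1,\dots,A_n,A_{n+1})
      \quad \text{(increasing in the number of arguments)},
    \]
    \[
      M(A_1,\dots,A_n) \ge M(A_1,\dots,A_n,A_{n+1})
      \quad \text{(decreasing in the number of arguments)}.
    \]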