890 results for knowledge-based systems


Relevance:

100.00%

Publisher:

Abstract:

Monotonicity with respect to all arguments is fundamental to the definition of aggregation functions, which are one of the basic tools in knowledge-based systems. The functions known as means (or averages) are idempotent and typically monotone; however, there are many important classes of means that are non-monotone. Weak monotonicity was recently proposed as a relaxation of the monotonicity condition for averaging functions. In this paper we discuss the concepts of directional and cone monotonicity, as well as monotonicity with respect to a majority of inputs and coalitions of inputs. We establish the relations between the various kinds of monotonicity and illustrate them with several examples. We also provide a construction method for cone monotone functions.
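
As a rough numerical illustration of these notions (not from the paper): the sketch below probes directional monotonicity, using the contraharmonic (Lehmer, p = 2) mean as a standard example of an average that is not monotone in each argument yet is weakly monotone, i.e. increasing along the direction (1, 1). The helper names, sampling scheme and tolerances are ad hoc assumptions.

```python
import numpy as np

def lehmer_mean(x, p=2.0):
    """Lehmer mean (contraharmonic for p = 2): sum(x_i^p) / sum(x_i^(p-1))."""
    x = np.asarray(x, dtype=float)
    return np.sum(x**p) / np.sum(x**(p - 1))

def is_directionally_increasing(f, r, dim=2, n_points=2000, step=1e-3, seed=0):
    """Crude numerical test of whether f(x + t*r) >= f(x) for small t > 0
    at random points of the open unit square."""
    rng = np.random.default_rng(seed)
    r = np.asarray(r, dtype=float) / np.linalg.norm(r)
    for _ in range(n_points):
        x = rng.uniform(0.05, 0.95, size=dim)
        y = x + step * r
        if np.any(y <= 0.0) or np.any(y >= 1.0):
            continue
        if f(y) < f(x) - 1e-12:
            return False
    return True

print(is_directionally_increasing(lehmer_mean, r=[1, 0]))  # False: not monotone per argument
print(is_directionally_increasing(lehmer_mean, r=[1, 1]))  # True: weakly monotone
```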

Relevance:

100.00%

Publisher:

Abstract:

Current Knowledge Engineering (KE) regards the development of Knowledge-Based Systems (KBSs) as a modelling process based on reusable knowledge models. The notion of Problem-Solving Methods (PSMs) plays an important role in this research scenario, since it represents the inferential knowledge of KBSs in an explicit formalism. No less importantly, PSMs also facilitate the understanding of the reasoning process carried out by humans. PSMs are described in an abstract, implementation-independent formalism, which facilitates the analysis of inferential knowledge that is often obscured in large knowledge bases. This work therefore discusses the notion of PSMs, evaluating the research problems involved in the process of developing and specifying a method, as well as analysing the possibilities for applying PSMs. The work presents the description and analysis of a case study on the development, specification and application of a Rock Interpretation PSM. Rock interpretation tasks are performed by expert petrographers and constitute an important step in the characterisation of petroleum reservoir rocks and the definition of exploration techniques, allowing oil companies to reduce exploration costs that are usually very high. To support the development of KBSs in this application domain, two new PSMs were developed: the Rock Interpretation PSM and the Diagenetic Environment Interpretation PSM. These methods were specified from an analysis of expertise in Sedimentary Petrography, as well as from the knowledge and data models developed during the PetroGrapher project. The Rock Interpretation PSM and the Diagenetic Environment Interpretation PSM are conceptually specified in terms of competence, operational specification and requirements/assumptions. These definitions detail the central components of a reasoning scheme for rock interpretation. This scheme is used as a model for understanding and analysing the reasoning process required, guiding the development of a reasoning architecture for rock interpretation. This architecture is described in terms of data and knowledge storage and manipulation requirements, allowing the design and construction of a symbolic inference algorithm for an intelligent database application called PetroGrapher.
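
As a purely illustrative aside (not from the thesis), the sketch below shows one way the conceptual specification of a PSM could be encoded as data. The field names mirror the competence / operational specification / assumptions split mentioned above; the listed inference steps and the example instance are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ProblemSolvingMethod:
    """Implementation-independent description of a PSM."""
    name: str
    competence: str                        # what the method is able to achieve
    operational_specification: list[str]   # inference steps and their control
    assumptions: list[str] = field(default_factory=list)  # requirements on domain knowledge

# Hypothetical instance, loosely inspired by the Rock Interpretation PSM described above.
rock_interpretation = ProblemSolvingMethod(
    name="Rock Interpretation",
    competence="map a petrographic description onto a diagenetic interpretation",
    operational_specification=[
        "abstract the raw petrographic observations into features",
        "match the features against interpretation knowledge",
        "rank the candidate interpretations",
    ],
    assumptions=["a petrographic description is available in the application database"],
)
print(rock_interpretation.name, "->", rock_interpretation.competence)
```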

Relevance:

100.00%

Publisher:

Abstract:

The increase in the computing power of microcomputers has stimulated the building of direct manipulation interfaces that allow graphical representation of Linear Programming (LP) models. This work discusses the components of such a graphical interface as the basis for a system to assist users in the process of formulating LP problems. In essence, this work proposes a methodology that divides the modelling task into three stages: specification of the Data Model, the Conceptual Model and the LP Model. The need for Artificial Intelligence techniques to support problem conceptualisation and the model formulation task is illustrated.
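
The three-stage split can be pictured with a small, self-contained sketch (not taken from the paper; the production-planning data below are invented):

```python
# Stage 1 - Data Model: the raw problem data, independent of any LP notation.
data = {
    "products": ["A", "B"],
    "profit": {"A": 3.0, "B": 2.0},
    "machine_hours": {"A": 1.0, "B": 2.0},
    "hours_available": 40.0,
}

# Stage 2 - Conceptual Model: the modelling decisions, still stated in problem terms.
conceptual = {
    "decision_variables": "units produced of each product",
    "objective": "maximise total profit",
    "constraints": ["total machine hours <= hours available", "non-negativity"],
}

# Stage 3 - LP Model: the algebraic form  max c'x  s.t.  Ax <= b, x >= 0.
c = [data["profit"][p] for p in data["products"]]
A = [[data["machine_hours"][p] for p in data["products"]]]
b = [data["hours_available"]]
print("max", c, "s.t.", A, "x <=", b)
```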

Relevance:

100.00%

Publisher:

Abstract:

An overview is given of the possibility of controlling the status of circuit breakers (CBs) in a substation with the use of a knowledge base that relates some of the operating magnitudes, mixing status variables with time variables and fuzzy sets. It is shown that even when not all the magnitudes to be controlled can be included in the analysis, it is possible to control the desired status while supervising some important magnitudes, such as the voltage, power factor and harmonic distortion, as well as the present status.
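
A minimal sketch of the kind of fuzzy rule such a knowledge base might contain (the membership functions, thresholds and rule below are assumptions for illustration, not the paper's):

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rule_open_breaker(voltage_pu, thd_percent):
    """IF voltage is LOW AND harmonic distortion is HIGH THEN open the CB.
    The fuzzy AND is taken as the minimum, a common choice."""
    voltage_low = tri(voltage_pu, 0.80, 0.90, 0.95)
    thd_high = tri(thd_percent, 5.0, 10.0, 15.0)
    return min(voltage_low, thd_high)

print(rule_open_breaker(voltage_pu=0.88, thd_percent=9.0))  # degree to which the rule fires
```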

Relevance:

100.00%

Publisher:

Abstract:

The present work begins with a review of the literature on bit selection methods for oil well drilling. A proposal for the structure and organization of a drilling database and a knowledge base is described. Previous studies provided the principal elements of the process of selecting bits for a proposed well. The procedure was implemented as a computer system for the selection of tricone bits. A drilling bit database covering several wells drilled in three different Brazilian sedimentary basins was assembled, and knowledge was collected from drilling engineers in different fields, both electronically and through interviews. The tests carried out indicate that the selection process produced good results.
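
A toy example of how a single rule from such a knowledge base might filter the bit database (the bits, ratings and rule below are hypothetical, not the system's actual knowledge):

```python
# Hypothetical candidate tricone bits with an assumed formation-strength rating (psi).
bits = [
    {"code": "bit-1", "max_formation_strength_psi": 8000},
    {"code": "bit-2", "max_formation_strength_psi": 18000},
    {"code": "bit-3", "max_formation_strength_psi": 30000},
]

def select_bits(formation_strength_psi):
    """Keep the bits whose rating covers the formation, then prefer the least aggressive."""
    feasible = [b for b in bits if b["max_formation_strength_psi"] >= formation_strength_psi]
    return sorted(feasible, key=lambda b: b["max_formation_strength_psi"])

print(select_bits(formation_strength_psi=15000)[0]["code"])  # -> bit-2
```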

Relevance:

100.00%

Publisher:

Abstract:

In this paper, an expert and interactive system for developing protection systems for overhead and radial distribution feeders is proposed. In this system, the protective devices can be placed either heuristically or in an optimized way. In the latter case, the placement problem is modeled as a mixed-integer non-linear program, which is solved by a genetic algorithm (GA). Using information stored in a database as well as a knowledge base, the computational system is able to obtain excellent conditions of selectivity and coordination for improving the feeder reliability indices. Tests to assess the efficiency of the algorithm were carried out using a real-life 660-node feeder. © 2006 IEEE.
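
For illustration only, a bare-bones genetic algorithm for a binary placement problem; the fitness function below is an invented stand-in, whereas the paper optimizes selectivity, coordination and reliability indices on a real 660-node feeder:

```python
import random

random.seed(1)
N_NODES = 20  # candidate locations (the real feeder has 660 nodes)
FAULT_RATE = [random.uniform(0.0, 1.0) for _ in range(N_NODES)]  # synthetic per-node fault rates

def fitness(chrom):
    """Reward covering high-fault-rate nodes, penalise the number of devices installed."""
    coverage = sum(r for r, g in zip(FAULT_RATE, chrom) if g)
    return coverage - 0.3 * sum(chrom)

def ga(pop_size=30, generations=100, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_NODES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_NODES)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]  # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(ga())  # 1 = install a protective device at that candidate node
```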

Relevance:

100.00%

Publisher:

Abstract:

The need to represent both semantics and common sense, and to organize them in a lexical database or knowledge base, has motivated the development of large projects such as Wordnets, CYC and Mikrokosmos. Besides these generic bases, another approach is the construction of ontologies for specific domains. Among the advantages of such an approach is the possibility of greater and more detailed coverage of a specific domain and its terminology. Domain ontologies are important resources in several tasks related to language processing, especially those concerned with information retrieval and extraction from textual bases. Information retrieval and even question-answering systems can benefit from the domain knowledge represented in an ontology. Besides embracing the terminology of the field, the ontology makes the relationships among the terms explicit. Copyright 2007 ACM.
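
As a toy illustration of relationships made explicit by an ontology (the terms below are hypothetical and not drawn from any of the cited projects):

```python
# Tiny, hypothetical domain ontology: explicit "is-a" links plus a transitive query.
IS_A = {
    "thrombosis": "vascular disease",
    "vascular disease": "disease",
    "aspirin": "drug",
}

def ancestors(term):
    """Walk the is-a links upward, exposing the hierarchy to retrieval components."""
    chain = []
    while term in IS_A:
        term = IS_A[term]
        chain.append(term)
    return chain

# A retrieval system could expand a query on "disease" with its known specialisations.
print(ancestors("thrombosis"))  # ['vascular disease', 'disease']
```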

Relevance:

100.00%

Publisher:

Abstract:

Graduate Program in Computer Science - IBILCE

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

A software prototype for dynamic route planning in the travel industry for cognitive cities is presented in this paper. In contrast to existing tools, the prototype enhances the travel experience (i.e., sightseeing) by allowing additional flexibility to the user. The theoretical background of the paper strengthens the understanding of the introduced concepts (e.g., cognitive cities, fuzzy logic, graph databases) needed to comprehend the presented prototype. The prototype applies an instantiation and enhancement of the graph database Neo4j. For didactical reasons, and to strengthen the understanding of the prototype, a scenario applied to route planning in the city of Bern (Switzerland) is presented in the paper.
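
The prototype itself builds on Neo4j; the sketch below is only a plain-Python stand-in showing how a planner might blend walking time with a fuzzy "attractiveness" degree. The place names are real Bern sights, but the graph, weights and cost function are invented for illustration.

```python
import heapq

# Toy sight-seeing graph of Bern: (destination, walking minutes, attractiveness in [0, 1]).
EDGES = {
    "Station": [("Old Town", 10, 0.9), ("Rose Garden", 25, 0.7)],
    "Old Town": [("Bear Park", 12, 0.8), ("Rose Garden", 15, 0.7)],
    "Bear Park": [("Rose Garden", 8, 0.7)],
    "Rose Garden": [],
}

def cost(minutes, attractiveness, alpha=0.5):
    """Blend travel time with (1 - attractiveness); alpha trades speed against sights."""
    return alpha * minutes + (1 - alpha) * 10.0 * (1.0 - attractiveness)

def best_route(start, goal):
    """Plain Dijkstra over the blended cost."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        c, node, path = heapq.heappop(queue)
        if node == goal:
            return c, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes, attr in EDGES[node]:
            heapq.heappush(queue, (c + cost(minutes, attr), nxt, path + [nxt]))
    return None

print(best_route("Station", "Rose Garden"))
```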

Relevance:

100.00%

Publisher:

Abstract:

The new user cold-start issue represents a serious problem in recommender systems, as it can lead to the loss of new users who decide to stop using the system due to the lack of accuracy in the recommendations received in that first stage, in which they have not yet cast a significant number of votes with which to feed the recommender system's collaborative filtering core. For this reason it is particularly important to design new similarity metrics which provide greater precision in the results offered to users who have cast few votes. This paper presents a new similarity measure, perfected using optimization based on neural learning, which exceeds the best results obtained with current metrics. The metric has been tested on the Netflix and MovieLens databases, obtaining important improvements in the measures of accuracy, precision and recall when applied to new user cold-start situations. The paper includes the mathematical formalization describing how to obtain the main quality measures of a recommender system using leave-one-out cross-validation.
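
For concreteness, a minimal leave-one-out evaluation of a neighbourhood-based prediction; plain cosine similarity stands in for the paper's neurally optimized metric, and the rating matrix is a toy example:

```python
import numpy as np

# Toy user-item rating matrix (0 = not rated).
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(u, v):
    """Cosine similarity over co-rated items only."""
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    return float(np.dot(u[mask], v[mask]) /
                 (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask]) + 1e-12))

def predict(user, item):
    """Similarity-weighted average of the other users' ratings for the item."""
    num = den = 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        s = cosine(R[user], R[other])
        num += s * R[other, item]
        den += abs(s)
    return num / den if den else 0.0

# Leave-one-out: hide each known rating, predict it, and accumulate the absolute error.
errors = []
for u, i in zip(*np.nonzero(R)):
    true, R[u, i] = R[u, i], 0
    errors.append(abs(predict(u, i) - true))
    R[u, i] = true
print("MAE:", np.mean(errors))
```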

Relevance:

100.00%

Publisher:

Abstract:

In a previous paper, we proposed an axiomatic model for measuring self-contradiction in the framework of Atanassov fuzzy sets. In this way, contradiction measures that are semicontinuous and completely semicontinuous, from both below and above, were defined. Although some examples were given, the problem of finding families of functions satisfying the different axioms remained open. The purpose of this paper is to construct some families of contradiction measures, first using continuous t-norms and t-conorms, and second by means of strong negations. In both cases, we study the properties that they satisfy. These families are then classified according to the different kinds of measures presented in the aforementioned paper.
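
For reference (and not the paper's specific measures), the classical building blocks named in the abstract can be written as:

```latex
% Two continuous t-norms with their dual t-conorms, and the standard strong negation:
\begin{align*}
  T_{\min}(x,y) &= \min(x,y), & S_{\max}(x,y) &= \max(x,y),\\
  T_{P}(x,y)    &= xy,        & S_{P}(x,y)    &= x + y - xy,\\
  N(x)          &= 1 - x,     & N(N(x))       &= x.
\end{align*}
% In the ordinary fuzzy setting, a set $A$ is called self-contradictory w.r.t.\ a strong
% negation $N$ when $\mu_A(x) \le N(\mu_A(x))$ for all $x$; with $N(x) = 1 - x$ this
% reduces to $\mu_A(x) \le 1/2$.
```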

Relevance:

100.00%

Publisher:

Abstract:

Dominance measuring methods are a new approach to deal with complex decision-making problems with imprecise information. These methods are based on the computation of pairwise dominance values and exploit the information in the dominance matrix in different ways to derive measures of dominance intensity and rank the alternatives under consideration. In this paper we propose a new dominance measuring method to deal with ordinal information about decision-maker preferences in both weights and component utilities. It takes advantage of the centroid of the polytope delimited by ordinal information and builds triangular fuzzy numbers whose distances to the crisp value 0 constitute the basis for the definition of a dominance intensity measure. Monte Carlo simulation techniques have been used to compare the performance of this method with other existing approaches.
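
A generic sketch of the dominance-matrix idea under ordinal weight information, using Monte Carlo sampling of weights and a simple net-dominance intensity; this is not the centroid/triangular-fuzzy-number measure proposed in the paper, and the value matrix below is invented:

```python
import numpy as np

# Toy additive value matrix: rows = alternatives, columns = attributes.
V = np.array([
    [0.7, 0.4, 0.9],
    [0.6, 0.8, 0.5],
    [0.3, 0.6, 0.7],
])

def dominance_matrix(values, weights):
    """Pairwise dominance D[i, j] = score(i) - score(j) for a given weight vector."""
    scores = values @ weights
    return scores[:, None] - scores[None, :]

# Sample weights respecting the ordinal constraint w1 >= w2 >= w3, sum = 1.
rng = np.random.default_rng(0)
D = np.zeros((3, 3))
for _ in range(5000):
    w = np.sort(rng.dirichlet(np.ones(3)))[::-1]
    D += dominance_matrix(V, w)
D /= 5000

# One simple dominance-intensity measure: net dominance (row sum minus column sum).
intensity = D.sum(axis=1) - D.sum(axis=0)
print("ranking (best first):", np.argsort(-intensity))
```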

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we analyze the performance of several well-known pattern recognition and dimensionality reduction techniques when applied to mass-spectrometry data for odor biometric identification. Motivated by the successful results of previous works capturing the odor from other parts of the body, this work attempts to evaluate the feasibility of identifying people by the odor emanating from their hands. By formulating this task according to a machine learning scheme, the problem is identified as a small-sample-size supervised classification problem in which the input data are mass spectrograms of the hand odor of 13 subjects captured in different sessions. The high dimensionality of the data makes it necessary to apply feature selection and extraction techniques together with a simple classifier in order to improve the generalization capabilities of the model. Our experimental results achieve recognition rates of over 85%, which reveals that there is discriminatory information in hand odor and points to body odor as a promising biometric identifier.
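
A hedged sketch of the kind of pipeline described, with synthetic data standing in for the mass spectrograms; PCA followed by LDA is just one plausible reduction/classifier pair among the techniques such a study might evaluate:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 13 subjects, a few samples each, very high-dimensional spectra
# (the small-sample-size setting described in the abstract).
rng = np.random.default_rng(0)
n_subjects, samples_per_subject, n_features = 13, 6, 2000
X = rng.normal(size=(n_subjects * samples_per_subject, n_features))
y = np.repeat(np.arange(n_subjects), samples_per_subject)
X += y[:, None] * 0.05  # inject a weak subject-dependent signal

# Dimensionality reduction before a simple classifier, to curb overfitting.
clf = make_pipeline(StandardScaler(), PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=3, shuffle=True, random_state=0))
print("recognition rate: %.2f" % scores.mean())
```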