98 results for Artificial intelligence -- Computer programs
Abstract:
Recently, several belief negotiation models have been introduced to deal with the problem of belief merging. A negotiation model usually consists of two functions: a negotiation function and a weakening function. The negotiation function chooses the weakest sources, and these sources then weaken their point of view using the weakening function. However, the currently available belief negotiation models are based on classical logic, which makes it difficult to define weakening functions. In this paper, we define a prioritized belief negotiation model in the framework of possibilistic logic. The priority between formulae provides us with important information for deciding which beliefs should be discarded. The problem of merging uncertain information from different sources is then solved in two steps. First, beliefs in the original knowledge bases are weakened to resolve inconsistencies among them. This step is based on a prioritized belief negotiation model. Second, the knowledge bases obtained in the first step are combined using a conjunctive operator, which may have a reinforcement effect in possibilistic logic.
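A minimal sketch of the second (combination) step described above, assuming each possibilistic base is represented as a mapping from formula strings to necessity degrees and that the negotiation step has already restored consistency; the product-based operator and the example bases are illustrative, not the paper's exact construction.

```python
# Hedged sketch: syntactic conjunctive combination of two possibilistic bases
# using the product t-norm, which exhibits the reinforcement effect mentioned
# in the abstract (a formula supported by both bases ends up more certain than
# in either base alone). Representation and data are illustrative only.

def combine_product(base1: dict, base2: dict) -> dict:
    """Merge two bases mapping formula -> necessity degree in (0, 1]."""
    merged = {}
    for formula in set(base1) | set(base2):
        a = base1.get(formula, 0.0)
        b = base2.get(formula, 0.0)
        # Product-based conjunctive operator: 1 - (1-a)(1-b) = a + b - a*b,
        # so corroborated formulas are reinforced.
        merged[formula] = a + b - a * b
    return merged

if __name__ == "__main__":
    k1 = {"p": 0.6, "p -> q": 0.8}
    k2 = {"p": 0.5, "r": 0.7}
    print(combine_product(k1, k2))   # 'p' rises to 0.8, above both inputs
```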
Abstract:
This paper investigates the learning of a wide class of single-hidden-layer feedforward neural networks (SLFNs) with two sets of adjustable parameters, i.e., the nonlinear parameters in the hidden nodes and the linear output weights. The main objective is both to speed up the convergence of second-order learning algorithms such as Levenberg-Marquardt (LM) and to improve network performance. This is achieved here by reducing the dimension of the solution space and by introducing a new Jacobian matrix. Unlike conventional supervised learning methods, which optimize these two sets of parameters simultaneously, the linear output weights are first converted into dependent parameters, thereby removing the need for their explicit computation. Consequently, the neural network (NN) learning is performed over a solution space of reduced dimension. A new Jacobian matrix is then proposed for use with the popular second-order learning methods in order to achieve a more accurate approximation of the cost function. The efficacy of the proposed method is shown through an analysis of the computational complexity and by presenting simulation results from four different examples.
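A hedged sketch of the "dependent output weights" idea: for any fixed set of hidden-node parameters the linear output weights have a closed-form least-squares solution, so a second-order optimiser only needs to search over the nonlinear parameters. The network size, sigmoid activation, and synthetic data below are assumptions for illustration; the paper's new Jacobian is not reproduced.

```python
# Residuals of an SLFN as a function of the nonlinear parameters only:
# the output weights are eliminated by a least-squares solve, giving a
# reduced-dimension problem that an LM routine can minimise.
import numpy as np

def hidden_output(X, W, b):
    """Hidden-layer response for input X with weights W and biases b."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))     # sigmoid hidden nodes

def residuals(theta, X, y, n_in, n_hid):
    """Residual vector over the reduced (nonlinear-parameter) space."""
    W = theta[: n_in * n_hid].reshape(n_in, n_hid)
    b = theta[n_in * n_hid:]
    H = hidden_output(X, W, b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # dependent output weights
    return H @ beta - y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (100, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
    theta0 = rng.normal(size=2 * 5 + 5)           # 5 hidden nodes
    # residuals(theta0, X, y, 2, 5) could be fed to an LM solver such as
    # scipy.optimize.least_squares(..., method="lm").
    print(np.linalg.norm(residuals(theta0, X, y, 2, 5)))
```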
Abstract:
This paper introduces a recursive rule base adjustment to enhance the performance of fuzzy logic controllers. Here the fuzzy controller is constructed on the basis of a decision table (DT), relying on membership functions and fuzzy rules that incorporate heuristic knowledge and operator experience. If the controller performance is not satisfactory, it has previously been suggested that the rule base be altered by combined tuning of membership functions and controller scaling factors. The alternative approach proposed here entails alteration of the fuzzy rule base itself. The recursive rule base adjustment algorithm proposed in this paper has the benefit of being computationally more efficient in generating a DT, an advantage for online realization. Simulation results are presented to support this claim. (c) 2005 Elsevier B.V. All rights reserved.
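A minimal sketch of the kind of decision-table fuzzy controller the abstract builds on: error and change-of-error are mapped to linguistic labels and the DT entry gives the control increment, which a rule-base adjustment can modify in place. The labels, table values, and the simplified (crisp nearest-label) fuzzification are assumptions; the paper's recursive adjustment algorithm itself is not reproduced.

```python
# Decision-table (DT) fuzzy controller skeleton with a one-entry adjustment.
import numpy as np

LABELS = ["NB", "NS", "ZE", "PS", "PB"]          # negative big ... positive big
CENTRES = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # label centres on [-1, 1]

# DT rows: error label; columns: change-of-error label -> control increment.
DT = np.array([[-1.0, -1.0, -0.5, -0.5, 0.0],
               [-1.0, -0.5, -0.5, 0.0, 0.5],
               [-0.5, -0.5, 0.0, 0.5, 0.5],
               [-0.5, 0.0, 0.5, 0.5, 1.0],
               [0.0, 0.5, 0.5, 1.0, 1.0]])

def nearest_label(x):
    """Crisp label index for a normalised input (simplified fuzzification)."""
    return int(np.argmin(np.abs(CENTRES - np.clip(x, -1, 1))))

def control(error, d_error):
    return DT[nearest_label(error), nearest_label(d_error)]

if __name__ == "__main__":
    print(control(0.3, -0.1))        # consult the table
    DT[3, 2] += 0.1                  # a rule-base adjustment changes one entry
    print(control(0.3, -0.1))        # the same operating point now responds differently
```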
Abstract:
A novel methodology is proposed for the development of neural network models for complex engineering systems exhibiting nonlinearity. This method performs neural network modeling by first establishing some fundamental nonlinear functions from a priori engineering knowledge, which are then constructed and coded into appropriate chromosome representations. Given a suitable fitness function, a population of chromosomes is evolved using evolutionary approaches such as genetic algorithms for a certain number of generations, finally producing the neural network model that best fits the system data. The objective is to improve the transparency of the neural networks, i.e. to produce physically meaningful models.
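A hedged sketch of the chromosome idea described above: each gene selects one fundamental nonlinear function from an a-priori library, and a chromosome's fitness is how well the resulting model fits the data. The function library, the linear least-squares fitness, and the absence of crossover/mutation machinery are simplifying assumptions, not the paper's method.

```python
# Encode candidate models as lists of basis-function names and score them.
import random
import numpy as np

LIBRARY = {"exp": np.exp, "tanh": np.tanh, "square": np.square, "sin": np.sin}

def random_chromosome(n_genes=3):
    return [random.choice(list(LIBRARY)) for _ in range(n_genes)]

def fitness(chromosome, x, y):
    """Negative residual norm of a linear combination of the chosen functions."""
    Phi = np.column_stack([LIBRARY[g](x) for g in chromosome] + [np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return -np.linalg.norm(Phi @ coeffs - y)

if __name__ == "__main__":
    x = np.linspace(0.0, 2.0, 50)
    y = np.sin(x) + 0.3 * x ** 2
    population = [random_chromosome() for _ in range(20)]
    best = max(population, key=lambda c: fitness(c, x, y))
    print(best, fitness(best, x, y))   # a GA would evolve this population further
```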
Abstract:
Latent semantic indexing (LSI) is a popular technique used in information retrieval (IR) applications. This paper presents a novel evaluation strategy based on the use of image processing tools. The authors evaluate the use of the discrete cosine transform (DCT) and the Cohen-Daubechies-Feauveau 9/7 (CDF 9/7) wavelet transform as a pre-processing step for the singular value decomposition (SVD) step of the LSI system. In addition, the effect of different threshold types on the search results is examined. The results show that accuracy can be increased by applying both transforms as a pre-processing step, with better performance for the hard-threshold function. The choice of the best threshold value is a key factor in the transform process. This paper also describes the most effective structure for the database to facilitate efficient searching in the LSI system.
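A minimal sketch of the kind of pipeline evaluated above: a DCT is applied to the term-document matrix, small coefficients are removed with a hard threshold, and LSI's usual truncated SVD follows. The tiny matrix, threshold value, and rank are assumptions for illustration; the CDF 9/7 wavelet variant would replace the transform step.

```python
# DCT pre-processing + hard thresholding ahead of the SVD step of LSI.
import numpy as np
from scipy.fft import dct

def lsi_with_dct(term_doc, threshold=0.1, rank=2):
    coeffs = dct(term_doc, axis=0, norm="ortho")            # transform each column
    coeffs[np.abs(coeffs) < threshold] = 0.0                 # hard-threshold function
    U, s, Vt = np.linalg.svd(coeffs, full_matrices=False)    # SVD step of LSI
    return U[:, :rank], s[:rank], Vt[:rank, :]               # rank-k latent space

if __name__ == "__main__":
    A = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])                           # terms x documents
    U, s, Vt = lsi_with_dct(A)
    print(np.round(Vt, 3))   # document coordinates in the latent space
```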
Abstract:
Query processing over the Internet involving autonomous data sources is a major task in data integration. It requires the estimated costs of possible queries in order to select the one with the minimum cost. In this context, the cost of a query is affected by three factors: network congestion, server contention state, and the complexity of the query. In this paper, we study the effects of both network congestion and server contention state on the cost of a query; we refer to these two factors together as the system contention state. We present a new approach to determining system contention states by clustering the costs of a sample query. For each system contention state, we construct two cost formulas, for unary and join queries respectively, using multiple regression. When a new query is submitted, its system contention state is first estimated using either the time slides method or the statistical method. The cost of the query is then calculated using the corresponding cost formulas, and the estimate is further adjusted to improve its accuracy. Our experiments show that these methods produce quite accurate cost estimates for queries submitted to remote data sources over the Internet.
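A hedged sketch of the two ideas in the abstract: observed costs of a sample query are clustered into contention states, and a multiple-regression cost formula is fitted per state. The synthetic data, the two regressors (result size and number of predicates), and the use of scikit-learn are illustrative choices, not the paper's exact procedure.

```python
# Cluster contention states, then fit a per-state regression cost formula.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# 1) Cluster repeated costs of a sample query into contention states.
sample_costs = np.concatenate([rng.normal(1.0, 0.1, 50),    # low contention
                               rng.normal(3.0, 0.3, 50)])   # high contention
states = KMeans(n_clusters=2, n_init=10).fit(sample_costs.reshape(-1, 1))

# 2) Fit a multiple-regression cost formula for one contention state.
X = rng.uniform(1, 100, (80, 2))                 # e.g. result size, #predicates
cost = 0.5 + 0.02 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.1, 80)
formula = LinearRegression().fit(X, cost)

# 3) Estimate the cost of a new query once its state has been identified.
print(states.predict([[1.1]]), formula.predict([[40.0, 3.0]]))
```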
Abstract:
The eng-genes concept involves the use of fundamental known system functions as activation functions in a neural model to create a 'grey-box' neural network. One of the main issues in eng-genes modelling is to produce a parsimonious model given a model construction criterion. The challenges are that (1) the eng-genes model is, in most cases, a heterogeneous network consisting of more than one type of nonlinear basis function, and each basis function may have a different set of parameters to be optimised; (2) the number of hidden nodes has to be chosen based on a model selection criterion. This is a hard mixed-integer problem, and this paper investigates the use of a forward selection algorithm to optimise both the network structure and the parameters of the system-derived activation functions. Results are included from case studies performed on a simulated continuously stirred tank reactor process and on actual data from a pH neutralisation plant. The resulting eng-genes networks demonstrate superior simulation performance and transparency over a range of network sizes when compared to conventional neural models. (c) 2007 Elsevier B.V. All rights reserved.
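A minimal sketch of forward selection over a heterogeneous pool of candidate hidden nodes, in the spirit of the structure search described above: at each step the candidate basis function that most reduces the residual error is added. The candidate pool, fixed randomly drawn node parameters, and the simple sum-of-squares criterion are assumptions; the paper's joint structure/parameter optimisation is not reproduced.

```python
# Greedy forward selection of mixed exp/sin hidden nodes.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.1, 3.0, 120)
y = np.exp(-x) + 0.4 * np.sin(2 * x)

# Candidate pool mixing two node types, each with its own parameter draw.
candidates = [lambda x, a=a: np.exp(-a * x) for a in rng.uniform(0.2, 2.0, 10)]
candidates += [lambda x, w=w: np.sin(w * x) for w in rng.uniform(0.5, 4.0, 10)]

selected = []
for _ in range(3):                                   # grow the network node by node
    def sse_with(f):
        Phi = np.column_stack([g(x) for g in selected + [f]])
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return np.sum((Phi @ coef - y) ** 2)
    selected.append(min(candidates, key=sse_with))   # best candidate this step
print("nodes selected:", len(selected))
```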
Abstract:
Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB), and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued, and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier using FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78-0.91) and a normalised Brier score of 0.89. When samples taken both on admission and at a further time, 1-6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI. © Springer-Verlag London Limited 2008.
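A hedged sketch of the classifier/pre-processing pairing described above: Fisher discriminant analysis (available as LinearDiscriminantAnalysis in scikit-learn) reduces the five marker concentrations to a single discriminant score, logistic regression then outputs a probability, and the Brier score measures reliability. The synthetic data stands in for the patient marker panel, which is not available here.

```python
# FDA preprocessing + logistic regression, evaluated with accuracy and Brier score.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=400, n_features=5, n_informative=4,
                           n_redundant=1, random_state=0)   # 5 "markers"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                      LogisticRegression())
model.fit(X_tr, y_tr)

p_ami = model.predict_proba(X_te)[:, 1]          # estimated probability of AMI
print("accuracy:", model.score(X_te, y_te))
print("Brier score:", brier_score_loss(y_te, p_ami))
```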
Abstract:
The United States Supreme Court case of 1991, Feist Publications, Inc. v. Rural Tel. Service Co., continues to be highly significant for property in data and databases, but remains poorly understood. The approach taken in this article contrasts with previous studies. It focuses upon the “not original” rather than the original. The delineation of the absence of a modicum of creativity in selection, coordination, and arrangement of data as a component of the not original forms a pivotal point in the Supreme Court decision. The author also aims at elucidation rather than critique, using close textual exegesis of the Supreme Court decision. The results of the exegesis are translated into a more formal logical form to enhance clarity and rigor.
The insufficiently creative is initially characterized as “so mechanical or routine.” Mechanical and routine are understood in their ordinary discourse senses, as a conjunction or as connected by AND, and as the central clause. Subsequent clauses amplify the senses of mechanical and routine without disturbing their conjunction.
The delineation of the absence of a modicum of creativity can be correlated with classic conceptions of computability. The insufficiently creative can then be understood as a routine selection, coordination, or arrangement produced by an automatic mechanical procedure or algorithm. An understanding of a modicum of creativity and of copyright law is also indicated.
The value of the exegesis and interpretation is identified as its final simplicity, clarity, comprehensiveness, and potential practical utility.
Abstract:
The decision of the U.S. Supreme Court in 1991 in Feist Publications, Inc. v. Rural Tel. Service Co. affirmed originality as a constitutional requirement for copyright. Originality has a specific sense and is constituted by a minimal degree of creativity and independent creation. The not original is the more developed concept within the decision. It includes the absence of a minimal degree of creativity as a major constituent. Different levels of absence of creativity are also distinguished, from the extreme absence of creativity to insufficient creativity. There is a gestalt effect of analogy between the delineation of the not original and the concept of computability. More specific correlations can be found within the extreme absence of creativity. "[S]o mechanical" in the decision can be correlated with an automatic mechanical procedure and clauses with a historical resonance with understandings of computability as what would naturally be regarded as computable. The routine within the extreme absence of creativity can be regarded as the product of a computational process. The concern of this article is with rigorously establishing an understanding of the extreme absence of creativity, primarily through the correlations with aspects of computability. The understanding established is consistent with the other elements of the not original. It is also revealed as testable under real-world conditions. The possibilities for understanding insufficient creativity, a minimal degree of creativity, and originality, from the understanding developed of the extreme absence of creativity, are indicated.