32 results for 280213 Other Artificial Intelligence
Abstract:
In this paper we investigate the relationship between two prioritized knowledge bases by measuring both the conflict and the agreement between them. First of all, a quantity of conflict and two quantities of agreement are defined. The former is shown to be a generalization of the well-known Dalal distance, which is the Hamming distance between two interpretations. The latter are, respectively, a quantity of strong agreement, which measures the amount of information on which two belief bases “totally” agree, and a quantity of weak agreement, which measures the amount of information that is believed by one source but is unknown to the other. All three quantity measures are based on the weighted prime implicant, which represents beliefs in a prioritized belief base. We then define a degree of conflict and two degrees of agreement based on our quantity of conflict and quantities of agreement. We also consider the impact of these measures on belief merging and information source ordering.
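For concreteness, a minimal sketch of the Dalal distance mentioned above: the Hamming distance between two propositional interpretations, i.e., the number of atoms on which they assign different truth values. The interpretations below are hypothetical examples, not drawn from the paper.

```python
# Illustrative sketch: the Dalal distance between two propositional
# interpretations is the Hamming distance, i.e. the number of atoms
# on which the interpretations assign different truth values.

def dalal_distance(w1: dict, w2: dict) -> int:
    """Hamming distance between two interpretations over the same atoms."""
    assert w1.keys() == w2.keys()
    return sum(1 for atom in w1 if w1[atom] != w2[atom])

# Example: interpretations over atoms p, q, r.
w1 = {"p": True, "q": False, "r": True}
w2 = {"p": True, "q": True, "r": False}
print(dalal_distance(w1, w2))  # 2 (they differ on q and r)
```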
Abstract:
The success postulate in belief revision ensures that new evidence (input) is always trusted. However, admitting uncertain input has been questioned by many researchers. Darwiche and Pearl argued that strengths of evidence should be introduced to determine the outcome of belief change, and provided a preliminary definition toward this idea. In this paper, we start from Darwiche and Pearl’s idea, aiming to develop a framework that can capture the influence of the strengths of inputs under some rational assumptions. To achieve this, we first define epistemic states to represent beliefs with attached strengths, then present a set of postulates describing the change process on epistemic states as determined by the strengths of the input, and establish representation theorems to characterize these postulates. As a result, we obtain a unique rewarding operator which is proved to be a merging operator in line with many other works. We also investigate existing postulates on belief merging and compare them with ours. In addition, we show that from an epistemic state, a corresponding ordinal conditional function in the sense of Spohn can be derived, so that the result of combining two epistemic states reduces to the result of combining the two corresponding ordinal conditional functions as proposed by Laverny and Lang. Furthermore, when reduced to the belief revision setting, we prove that our results induce all of Darwiche and Pearl’s postulates as well as the Recalcitrance postulate and the Independence postulate.
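To illustrate the reduction mentioned at the end, here is a hedged sketch of combining two Spohn-style ordinal conditional functions (OCFs). It assumes a sum-based combination in the spirit of Laverny and Lang (pointwise addition of ranks, renormalized so the most plausible world has rank 0); the worlds and ranks are hypothetical.

```python
# Hedged sketch: Spohn-style OCFs map worlds to ranks (degrees of
# disbelief; rank 0 = most plausible). A standard sum-based combination,
# assumed here in the spirit of Laverny and Lang, adds ranks pointwise
# and renormalizes so the minimum rank is 0.

def combine_ocf(k1: dict, k2: dict) -> dict:
    assert k1.keys() == k2.keys()
    summed = {w: k1[w] + k2[w] for w in k1}
    shift = min(summed.values())          # renormalize: best world gets rank 0
    return {w: r - shift for w, r in summed.items()}

# Hypothetical OCFs over the four worlds of two atoms p, q.
k1 = {"pq": 0, "p~q": 1, "~pq": 2, "~p~q": 3}
k2 = {"pq": 2, "p~q": 0, "~pq": 1, "~p~q": 1}
print(combine_ocf(k1, k2))  # {'pq': 1, 'p~q': 0, '~pq': 2, '~p~q': 3}
```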
Abstract:
This paper describes the development of a novel metaheuristic that combines an electromagnetism-like mechanism (EM) and the great deluge algorithm (GD) for the university course timetabling problem. This well-known timetabling problem assigns lectures to a specific number of timeslots and rooms, maximizing the overall quality of the timetable while taking various constraints into account. EM is a population-based stochastic global optimization algorithm inspired by physics, simulating the attraction and repulsion of sample points as they move toward optimality. GD is a local search procedure that allows worse solutions to be accepted based on a given upper boundary or ‘level’. In this paper, the dynamic force calculated from the attraction-repulsion mechanism is used as a decreasing rate to update the ‘level’ within the search process. The proposed method has been applied to a range of benchmark university course timetabling test problems from the literature. Moreover, the viability of the method has been tested by comparing its results with other results reported in the literature, demonstrating that the method is able to improve on currently published solutions. We believe this is due to the combination of the two approaches and the ability of the resultant algorithm to keep solutions converging throughout the search process.
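As an illustration of the acceptance mechanism described above, here is a minimal great-deluge sketch for a minimization problem. A fixed linear decay stands in for the paper's EM-derived force; `objective` and `neighbour` are hypothetical placeholders supplied by the caller.

```python
# Minimal great-deluge sketch (minimization). Worse candidates are accepted
# while they stay under a falling water 'level'. In the paper the level is
# decreased using the dynamic force from the EM attraction-repulsion
# mechanism; a fixed linear decay stands in for it here.

def great_deluge(objective, neighbour, x0, iters=10_000, decay=None):
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    level = fx                          # initial water level
    if decay is None:
        decay = level / iters           # linear stand-in decay rate
    for _ in range(iters):
        cand = neighbour(x)
        f = objective(cand)
        if f <= fx or f <= level:       # accept improving or under-level moves
            x, fx = cand, f
            if f < fbest:
                best, fbest = cand, f
        level -= decay                  # lower the level each step
    return best
```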
Abstract:
A technique for automatic exploration of the genetic search region through fuzzy coding (Sharma and Irwin, 2003) has been proposed. Fuzzy coding (FC) provides the value of a variable on the basis of the optimum number of selected fuzzy sets and their effectiveness in terms of degree-of-membership. It is an indirect encoding method and has been shown to perform better than other conventional binary, Gray and floating-point encoding methods. However, the static range of the membership functions is a major problem in fuzzy coding, resulting in longer times to arrive at an optimum solution in large or complicated search spaces. This paper proposes a new algorithm, called fuzzy coding with a dynamic range (FCDR), which dynamically allocates the range of the variables to evolve an effective search region, thereby achieving faster convergence. Results are presented for two benchmark optimisation problems, and also for a case study involving neural identification of a highly non-linear pH neutralisation process from experimental data. It is shown that dynamic exploration of the genetic search region is effective for parameter optimisation in problems where the search space is complicated.
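As a rough illustration of the encoding idea, here is a hedged sketch of fuzzy-coding decoding: a gene stores degrees of membership for a few fuzzy sets covering a variable's range, and the value is recovered as the membership-weighted average of the set centres. The centres and memberships are hypothetical, and the actual scheme of Sharma and Irwin (2003) differs in detail; in FCDR the centres would additionally be reallocated as the search region moves.

```python
# Hedged sketch of the fuzzy-coding idea: decode a variable's value as the
# membership-weighted average of fuzzy-set centres. Centres and memberships
# below are hypothetical illustrations.

def decode_fuzzy(centres, memberships):
    total = sum(memberships)
    return sum(c * m for c, m in zip(centres, memberships)) / total

# Five fuzzy sets covering [0, 10]; under a dynamic range (FCDR) these
# centres would be reallocated as the effective search region changes.
centres = [0.0, 2.5, 5.0, 7.5, 10.0]
memberships = [0.0, 0.2, 0.9, 0.3, 0.0]
print(decode_fuzzy(centres, memberships))  # ≈ 5.18
```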
Abstract:
As a class of defects in software requirements specification, inconsistency has been widely studied in both requirements engineering and software engineering. It has been increasingly recognized that maintaining consistency alone often leaves other types of non-canonical requirements in place, including incompleteness of a requirements specification, vague requirements statements, and redundant requirements statements. It is therefore desirable for inconsistency handling to take the related non-canonical requirements into account. To address this issue, we propose an intuitive generalization of logical techniques for handling inconsistency to techniques suitable for managing non-canonical requirements, which deals with incompleteness and redundancy in addition to inconsistency. We first argue that measuring non-canonical requirements plays a crucial role in handling them effectively. We then present a measure-driven logic framework for managing non-canonical requirements. The framework consists of five main parts: identifying non-canonical requirements, measuring them, generating candidate proposals for handling them, choosing commonly acceptable proposals, and revising the requirements according to the chosen proposals. This generalization can be considered an attempt to handle non-canonical requirements alongside logic-based inconsistency handling in requirements engineering.
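A minimal skeleton of the five-part loop just listed, under the assumption that requirements are simple flagged records; every handler body is a hypothetical stand-in, and only the overall flow follows the abstract.

```python
# Skeleton of the measure-driven handling loop: identify, measure, propose,
# choose, revise. All data fields and decisions are hypothetical stand-ins.

def manage(spec):
    # 1. identify non-canonical requirements (here: anything pre-flagged)
    issues = [r for r in spec if r["flag"] is not None]
    # 2. measure them, handling the most significant first
    issues.sort(key=lambda r: r["severity"], reverse=True)
    # 3-4. generate candidate proposals and keep the acceptable ones
    chosen = [(r, "weaken" if r["flag"] == "inconsistent" else "drop")
              for r in issues]
    # 5. revise the specification according to the chosen proposals
    return [r for r in spec if (r, "drop") not in chosen]

spec = [{"text": "R1", "flag": None, "severity": 0},
        {"text": "R2", "flag": "redundant", "severity": 1}]
print(manage(spec))  # R2 is dropped as redundant; R1 survives
```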
Abstract:
Most learning methods reported to date for Takagi-Sugeno-Kang fuzzy neural models focus mainly on improving accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each obtained rule consequent must match the system's local behaviour well when all the rules are aggregated to produce the overall system output. This is one of the characteristics that distinguishes such models from black-box models such as neural networks. Finding a desirable set of fuzzy partitions and, hence, identifying the corresponding consequent models that can be directly explained in terms of system behaviour is therefore a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both the nonlinear parameters in the rule premises and the linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field, where the two sets of parameters are optimized separately, the consequent parameters are transformed into a set dependent on the premise parameters, thereby enabling the introduction of a new integrated gradient descent learning approach. A new Jacobian matrix is thus proposed and efficiently computed to achieve a more accurate approximation of the cost function using the second-order Levenberg-Marquardt optimization method. Several other interpretability issues concerning the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed new algorithm, and the results are compared with those from some well-known methods.
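For reference, a sketch of the Levenberg-Marquardt step that such an integrated scheme builds on: the damped Gauss-Newton update solves (JᵀJ + λI)δ = Jᵀr and moves the parameters by −δ. The toy linear fit and function names are illustrative, not the paper's Jacobian construction; in the paper, J would be computed with the consequent parameters expressed as a function of the premise parameters, so one step updates both sets.

```python
import numpy as np

# Illustrative Levenberg-Marquardt step: given residuals r(theta) and their
# Jacobian J, solve (J^T J + lam*I) delta = J^T r and move theta by -delta.

def lm_step(theta, residuals, jacobian, lam=1e-2):
    r = residuals(theta)
    J = jacobian(theta)
    A = J.T @ J + lam * np.eye(theta.size)
    delta = np.linalg.solve(A, J.T @ r)
    return theta - delta

# Toy usage: fit y = a*x + b by least squares (hypothetical data).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.1, 4.9, 7.2])
res = lambda th: th[0] * x + th[1] - y
jac = lambda th: np.stack([x, np.ones_like(x)], axis=1)
theta = np.zeros(2)
for _ in range(20):
    theta = lm_step(theta, res, jac)
print(theta)  # ≈ [2.04, 0.99]
```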
Abstract:
The concept of exospace, as an alternative liveable structure, is discussed in this article to improve our comprehension of architectural space. Exospace is a man-made space designed for living beyond Earth’s atmosphere. Humankind has developed outer-space technologies to build the International Space Station as a significant experiment in exospace design. The ISS is a new building type for scientific experiments and for testing human existence in outer space.
A fictional example of exospace, on the other hand, is the spaceship Discovery 1 in Stanley Kubrick’s legendary science fiction film 2001: A Space Odyssey (1968). It is a ship travelling to Jupiter with a crew of five astronauts and HAL 9000, the artificial intelligence controlling the ship. I will first discuss the ISS, and the space stations built before it, from a spatial point of view. A spatial study of Discovery 1 will follow. Finally, through an understanding of exospace, I will return to architectural space with a critical appraisal. The comparison of architectural space with exospace will add to the discussion of space theories from a technological approach.
Exospace creates an alternative reality to architectural space. Architects cannot consider exospaces without comparing them with the spaces they design on Earth. The different context of outer space shows that a work of terrestrial architecture is very much dependent on its context. A building is not an ‘object’ that can be located anywhere; it is designed for its site. Architectural space is a real, material, continuous, static and extroverted habitable space designed for and used in the specific physical context of Earth. The existence of exospace in science opens a new discussion in architectural theory, both terrestrial and extraterrestrial.
Abstract:
Making a decision is often a matter of listing and comparing positive and negative arguments. In such cases, the evaluation scale for decisions should be considered bipolar, that is, negative and positive values should be explicitly distinguished, as is done, for example, in Cumulative Prospect Theory. However, contrary to the latter framework, which presupposes genuine numerical assessments, human agents often decide on the basis of an ordinal ranking of the pros and the cons, focusing on the most salient arguments. In other words, the decision process is qualitative as well as bipolar. In this article, based on a bipolar extension of possibility theory, we define and axiomatically characterize several decision rules tailored for the joint handling of positive and negative arguments in an ordinal setting. The simplest rules can be viewed as extensions of the maximin and maximax criteria to the bipolar case, and consequently suffer from poor decisive power. More decisive rules that refine the former are also proposed. These refinements agree both with principles of efficiency and with the spirit of order-of-magnitude reasoning, which prevails in qualitative decision theory. The most refined decision rule uses leximin rankings of the pros and the cons, together with the ideas of counting arguments of equal strength and cancelling pros by cons. It is shown to come down to a special case of Cumulative Prospect Theory, and to subsume the “Take the Best” heuristic studied by cognitive psychologists.
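As a toy illustration, here is one simple bipolar rule in the maximin/maximax spirit: each decision is judged by its strongest pro and strongest con on a common ordinal scale, first avoiding strong cons and then seeking strong pros. This is an illustrative reading, not the paper's axiomatized rules, and the ratings below are hypothetical.

```python
# Hedged sketch of a simple bipolar ordinal rule. Pros and cons are rated
# on a common ordinal scale (here 0..3). Decisions are ranked by the pair
# (weaker strongest con, stronger strongest pro), compared lexicographically.

def key(decision):
    pros, cons = decision
    return (-max(cons, default=0), max(pros, default=0))

decisions = {
    "d1": ([3, 1], [2]),   # strong pro, but a medium con
    "d2": ([2], [1, 1]),   # medium pro, only weak cons
}
best = max(decisions, key=lambda d: key(decisions[d]))
print(best)  # d2: its strongest con (1) is weaker than d1's (2)
```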
Abstract:
In this paper we present a generalization of belief functions over fuzzy events. In particular we focus on belief functions defined in the algebraic framework of finite MV-algebras of fuzzy sets. We introduce a fuzzy modal logic to formalize reasoning with belief functions on many-valued events. We prove, among other results, that several different notions of belief functions can be characterized in a quite uniform way, just by slightly modifying the complete axiomatization of one of the modal logics involved in the definition of our formalism.
Abstract:
We analyze ways by which people decompose into groups in distributed systems. We are interested in systems in which an agent can increase its utility by connecting to other agents, but must also pay a cost that increases with the size of the system. The right balance is achieved by a group of agents of the right size. We formulate and analyze three intuitive and realistic games and show how simple changes in the protocol can drastically improve the price of anarchy of these games. In particular, we identify two important properties for a low price of anarchy: agreement in joining the system, and the possibility of appealing a rejection from a system. We show that the latter property is especially important if there are some pre-existing constraints regarding who may collaborate (or communicate) with whom.
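A toy instance of the trade-off described above: an agent's utility grows with the number of other agents it can reach but shrinks with the size of the system it must pay for. The linear-benefit/quadratic-cost form is a hypothetical stand-in, not the paper's games.

```python
# Hypothetical utility: linear benefit from reachable agents minus a
# quadratic cost in system size. The "right size" group maximizes it.

def utility(n, benefit=1.0, cost=0.05):
    return benefit * (n - 1) - cost * n ** 2

best_n = max(range(1, 101), key=utility)
print(best_n, utility(best_n))  # 10 4.0 — beyond this, cost outweighs benefit
```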
Abstract:
The problem of learning from imbalanced data is of critical importance in a large number of application domains and can be a bottleneck for the performance of conventional learning methods that assume a balanced data distribution. The class imbalance problem corresponds to situations where one class massively outnumbers the other. If imbalanced data are used directly, the imbalance between the majority and minority classes can bias machine learning and produce unreliable outcomes. There has been increasing interest in this research area and a number of algorithms have been developed; however, independent evaluation of these algorithms is limited. This paper aims to evaluate the performance of five representative data sampling methods, namely SMOTE, ADASYN, BorderlineSMOTE, SMOTETomek and RUSBoost, that deal with class imbalance problems. A comparative study is conducted and the performance of each method is critically analysed in terms of assessment metrics.
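A minimal usage sketch, assuming the imbalanced-learn package, which ships implementations of the methods compared above (RUSBoost as an ensemble classifier rather than a plain sampler); the synthetic dataset is a placeholder.

```python
# Usage sketch with imbalanced-learn on a synthetic 90/10 dataset.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE, ADASYN, BorderlineSMOTE
from imblearn.combine import SMOTETomek

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))
for sampler in (SMOTE(), ADASYN(), BorderlineSMOTE(), SMOTETomek()):
    X_res, y_res = sampler.fit_resample(X, y)   # rebalance the classes
    print(type(sampler).__name__, Counter(y_res))

# RUSBoost, by contrast, is an ensemble method:
# from imblearn.ensemble import RUSBoostClassifier
```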
Abstract:
Depending on the representation setting, different combination rules have been proposed for fusing information from distinct sources. Moreover, in each setting, different sets of axioms that combination rules should satisfy have been advocated, thus justifying the existence of alternative rules (usually motivated by situations where the behavior of other rules was found unsatisfactory). These sets of axioms are usually considered purely within their own settings, without an in-depth analysis of the common properties essential to all settings. This paper introduces core properties that, once properly instantiated, are meaningful in different representation settings, ranging from logic to imprecise probabilities. The following representation settings are especially considered: classical set representation, possibility theory, and evidence theory, the latter encompassing the other two as special cases. This unified discussion of combination rules across different settings is expected to provide a fresh look at some old but basic issues in information fusion.
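As a concrete reference point, here is a sketch of Dempster's rule of combination, the canonical rule of the evidence-theory setting mentioned above; the mass functions in the example are hypothetical.

```python
# Dempster's rule of combination. Mass functions map focal sets
# (frozensets) to masses summing to 1; mass falling on the empty
# intersection is the conflict, renormalized away.

def dempster(m1: dict, m2: dict) -> dict:
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + a * b
            else:
                conflict += a * b        # mass on the empty set
    if conflict == 1.0:
        raise ValueError("totally conflicting sources")
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

# Two hypothetical sources over the frame {x, y, z}:
m1 = {frozenset("xy"): 0.8, frozenset("xyz"): 0.2}
m2 = {frozenset("yz"): 0.6, frozenset("xyz"): 0.4}
print(dempster(m1, m2))
```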
Abstract:
Dynamic economic load dispatch (DELD) is one of the most important steps in power system operation. Various optimisation algorithms for solving the problem have been developed; however, due to the non-convex characteristics and large dimensionality of the problem, it is necessary to explore new methods to further improve the dispatch results and minimise the costs. This article proposes a hybrid differential evolution (DE) algorithm, namely clonal selection-based differential evolution (CSDE), to solve the problem. CSDE is an artificial intelligence technique that can be applied to complex optimisation problems which are, for example, nonlinear, large-scale, non-convex and discontinuous. The hybrid algorithm incorporates the clonal selection algorithm (CSA) as a local search technique that updates the best individual in the population, enhancing the diversity of the solutions and preventing premature convergence in DE. Furthermore, we investigate four mutation operations used in CSA as hyper-mutation operations. Finally, an efficient solution repair method is designed for DELD to satisfy the complicated equality and inequality constraints of the power system, guaranteeing the feasibility of the solutions. Two benchmark power systems are used to evaluate the performance of the proposed method. The experimental results show that the proposed CSDE/best/1 approach significantly outperforms nine other variants of CSDE and DE, as well as most other published methods, in terms of solution quality and convergence characteristics.
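For orientation, a generic DE/best/1/bin sketch of the base strategy named above; the paper's CSDE additionally applies CSA hyper-mutation to the best individual and a constraint-repair step, both omitted here, and the sphere objective is a placeholder.

```python
import numpy as np

# Generic DE/best/1 with binomial crossover (not the paper's full CSDE).

def de_best_1(obj, bounds, pop_size=30, F=0.5, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    d = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, d))
    fit = np.apply_along_axis(obj, 1, pop)
    for _ in range(gens):
        best = pop[fit.argmin()]
        for i in range(pop_size):
            r1, r2 = rng.choice(pop_size, size=2, replace=False)
            v = np.clip(best + F * (pop[r1] - pop[r2]), lo, hi)  # DE/best/1 mutation
            mask = rng.random(d) < CR                            # binomial crossover
            mask[rng.integers(d)] = True                         # at least one gene
            u = np.where(mask, v, pop[i])
            fu = obj(u)
            if fu <= fit[i]:                                     # greedy selection
                pop[i], fit[i] = u, fu
    return pop[fit.argmin()], fit.min()

x, f = de_best_1(lambda x: np.sum(x ** 2),
                 (np.full(5, -10.0), np.full(5, 10.0)))
print(f)  # ≈ 0 on the sphere function
```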
Abstract:
The objective of this study is to provide an alternative modelling approach, an artificial neural network (ANN) model, to predict the compositional viscosity of binary mixtures of room-temperature ionic liquids (ILs) [Cn-mim][NTf2] with n = 4, 6, 8, 10 in methanol and ethanol over the entire range of molar fraction and a broad range of temperatures, from T = 293.0 K to 328.0 K. The results show that the proposed ANN model predicts compositional viscosity successfully, with highly improved accuracy, and also demonstrate its potential to be used extensively to predict compositional viscosity over a wide range of temperatures and for more complex compositions, i.e., mixtures with more complex intermolecular interactions between components for which it would be hard or impossible to establish an analytical model.
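A hedged sketch of such a model, assuming scikit-learn's MLPRegressor as a generic stand-in for the paper's ANN: inputs are (alkyl chain length n, molar fraction, temperature) and the target is viscosity. The data here are random placeholders, not the paper's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder dataset: (n, molar fraction, temperature) -> viscosity.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.choice([4, 6, 8, 10], size=200),   # n in [Cn-mim][NTf2]
    rng.uniform(0.0, 1.0, size=200),       # molar fraction of the IL
    rng.uniform(293.0, 328.0, size=200),   # temperature / K
])
y = rng.uniform(0.5, 60.0, size=200)       # viscosity / mPa·s (placeholder)

# Scale inputs, then fit a small feed-forward network as a stand-in ANN.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))
model.fit(X, y)
print(model.predict(X[:3]))
```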