876 results for Artificial Intelligence, Constraint Programming, set variables, representation
Abstract:
The work presented in this thesis is set in the context of constraint programming, a paradigm for modelling and solving combinatorial search problems that require finding solutions in the presence of constraints. A large part of these problems find a natural formulation in the language of set variables. Since the domain of such variables can be exponential in the number of elements, an explicit representation is often impractical. Recent studies have therefore focused on finding efficient ways to represent these variables. It is thus customary to represent these domains by means of interval approximations (henceforth, representations), specified by a lower bound and an upper bound according to an appropriate ordering relation. Recent developments in research on set constraint programming have clearly indicated that combining different representations yields performance that is orders of magnitude better than traditional encoding techniques. Numerous proposals have been made in this direction. These works differ in how consistency is maintained among the different representations and in how constraints are propagated in order to reduce the search space. Unfortunately, no formal tool exists for comparing these combinations. The main goal of this work is to provide such a tool, in which we precisely define the notion of a combination of representations, bringing out the common aspects that have characterized previous works. In particular, we identify two possible types of combination, a strong one and a weak one, defining the notions of bound consistency on constraints and synchronization between representations. Our study offers some interesting insights into existing combinations, highlighting their limits and revealing some surprises. We also provide a complexity analysis of the synchronization between minlex, a representation capable of optimally propagating lexicographic constraints, and the main existing representations.
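As a rough illustration of the interval approximations described above, the following Python sketch shows a subset-bound representation of a set variable, where the lower bound collects the elements known to belong to the set and the upper bound the elements that may still belong to it, together with a simple propagation rule for a subset constraint. The class and function names are illustrative assumptions and are not taken from the thesis.

```python
# Minimal sketch (illustrative, not from the thesis): a subset-bound
# representation of a set variable.
class SetVar:
    def __init__(self, lb, ub):
        self.lb = set(lb)            # required elements (lower bound)
        self.ub = set(ub)            # possible elements (upper bound)
        assert self.lb <= self.ub, "inconsistent bounds"

    def require(self, e):
        """Force element e into the set (tighten the lower bound)."""
        if e not in self.ub:
            raise ValueError("inconsistent: element already excluded")
        self.lb.add(e)

    def exclude(self, e):
        """Remove element e from consideration (tighten the upper bound)."""
        if e in self.lb:
            raise ValueError("inconsistent: element is required")
        self.ub.discard(e)

    def is_fixed(self):
        return self.lb == self.ub


def propagate_subset(x, y):
    """Propagate the constraint x is a subset of y over subset bounds."""
    for e in set(x.lb) - y.lb:   # everything required in x is required in y
        y.require(e)
    for e in set(x.ub) - y.ub:   # x cannot contain what y excludes
        x.exclude(e)
```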
Abstract:
Most Artificial Intelligence (AI) work can be characterized as either ``high-level'' (e.g., logical, symbolic) or ``low-level'' (e.g., connectionist networks, behavior-based robotics). Each approach suffers from particular drawbacks. High-level AI uses abstractions that often have no relation to the way real, biological brains work. Low-level AI, on the other hand, tends to lack the powerful abstractions that are needed to express complex structures and relationships. I have tried to combine the best features of both approaches, by building a set of programming abstractions defined in terms of simple, biologically plausible components. At the ``ground level'', I define a primitive, perceptron-like computational unit. I then show how more abstract computational units may be implemented in terms of the primitive units, and show the utility of the abstract units in sample networks. The new units make it possible to build networks using concepts such as long-term memories, short-term memories, and frames. As a demonstration of these abstractions, I have implemented a simulator for ``creatures'' controlled by a network of abstract units. The creatures exist in a simple 2D world, and exhibit behaviors such as catching mobile prey and sorting colored blocks into matching boxes. This program demonstrates that it is possible to build systems that can interact effectively with a dynamic physical environment, yet use symbolic representations to control aspects of their behavior.
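The following Python sketch gives a rough idea of what a primitive, perceptron-like unit and a small abstraction built from it (a one-bit latch acting as a short-term memory) might look like; the details are assumptions for illustration and are not the dissertation's actual components.

```python
# Illustrative sketch (assumed details, not from the dissertation).
class Unit:
    """A primitive, perceptron-like unit: weighted sum plus threshold."""
    def __init__(self, weights, bias=0.0):
        self.weights = list(weights)
        self.bias = bias

    def activate(self, inputs):
        s = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1.0 if s > 0 else 0.0


class Latch:
    """A tiny 'abstract' unit built from a primitive: a one-bit short-term memory."""
    def __init__(self):
        self.state = 0.0
        self._gate = Unit([1.0, -1.0, 1.0])   # inputs: set, reset, previous state

    def step(self, set_signal, reset_signal):
        self.state = self._gate.activate([set_signal, reset_signal, self.state])
        return self.state


# Usage: set the latch, then watch it hold its state until reset.
m = Latch()
print(m.step(1, 0), m.step(0, 0), m.step(0, 1))   # 1.0 1.0 0.0
```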
Abstract:
Norms constitute a powerful coordination mechanism among heterogeneous agents. In this paper, we propose a rule language to specify and explicitly manage the normative positions of agents (permissions, prohibitions and obligations), with which distinct deontic notions and their relationships can be captured. Our rule-based formalism includes constraints for more expressiveness and precision and allows us to supplement (and implement) electronic institutions with norms. We also show how some normative aspects are given a computational interpretation. © 2008 Springer Science+Business Media, LLC.
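A minimal sketch of how normative positions might be represented and queried computationally is given below; the data structure, condition predicates, and example norms are hypothetical and are not the paper's rule language.

```python
# Hypothetical sketch, not the paper's formalism: normative positions as
# simple records guarded by a condition over the current state.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Norm:
    kind: str                            # "permission", "prohibition" or "obligation"
    agent: str
    action: str
    condition: Callable[[Dict], bool]    # constraint over the current state

def active_norms(norms: List[Norm], state: Dict) -> List[Norm]:
    """Return the normative positions that currently hold in `state`."""
    return [n for n in norms if n.condition(state)]

# Example: an obligation to pay triggered once goods are delivered.
norms = [
    Norm("obligation", "buyer", "pay", lambda s: s.get("delivered", False)),
    Norm("prohibition", "seller", "cancel", lambda s: s.get("paid", False)),
]
print([f"{n.kind}({n.agent}, {n.action})"
       for n in active_norms(norms, {"delivered": True})])
```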
Abstract:
Dyscalculia is a brain-based condition that makes it hard to make sense of numbers and mathematical concepts. Some adolescents with dyscalculia cannot grasp basic number concepts. They work hard to learn and memorize basic number facts. They may know what to do in mathematics classes but do not understand why they are doing it; in other words, they miss the logic behind it. However, it may be worked on in order to decrease its degree of severity. For example, disMAT, an app developed for Android, may help children to apply mathematical concepts without much effort, which turns it into a promising tool for dyscalculia treatment. Thus, this work focuses on the development of an Intelligent System to estimate evidence of dyscalculia in children, based on data obtained on-the-fly with disMAT. The computational framework is built on top of a Logic Programming framework for Knowledge Representation and Reasoning, complemented with a Case-Based problem-solving approach to computing that allows for the handling of incomplete, unknown, or even contradictory information.
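The following Python fragment sketches one way a Case-Based retrieval step tolerant of unknown feature values could look; the feature names, scores, and cases are hypothetical and are not taken from the paper or from disMAT.

```python
# Illustrative sketch (assumed feature names and data): case-based retrieval
# that simply skips features whose value is unknown (None).
def similarity(new_case, stored_case):
    scores = []
    for feature, value in new_case.items():
        other = stored_case.get(feature)
        if value is None or other is None:       # unknown value: skip this feature
            continue
        scores.append(1.0 - abs(value - other))  # features assumed normalised to [0, 1]
    return sum(scores) / len(scores) if scores else 0.0

def retrieve(new_case, case_base, k=3):
    """Return the k most similar stored cases."""
    ranked = sorted(case_base,
                    key=lambda c: similarity(new_case, c["features"]),
                    reverse=True)
    return ranked[:k]

# Hypothetical usage with normalised scores collected on-the-fly by the app.
case_base = [
    {"features": {"number_facts": 0.2, "counting": 0.4, "reaction_time": 0.9}, "dyscalculia": True},
    {"features": {"number_facts": 0.8, "counting": 0.7, "reaction_time": 0.3}, "dyscalculia": False},
]
new_case = {"number_facts": 0.3, "counting": None, "reaction_time": 0.8}
print(retrieve(new_case, case_base, k=1))
```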
Abstract:
A link between patterns of pelvic growth and human life history is supported by the finding that, cross-culturally, variation in the maturation rate of the female pelvis is correlated with variation in the ages of menarche and first reproduction; i.e., it is well known that the dimensions of the human pelvic bones depend on sex and vary with age. Indeed, one feature in which humans appear to be unique is the prolonged growth of the pelvis after the age of sexual maturity. Both the total superoinferior length and the mediolateral breadth of the pelvis continue to grow markedly after puberty and do not reach adult proportions until the late teens. This continuation of growth is accomplished by the relatively late fusion of the separate centers of ossification that form the bones of the pelvis. Hence, in this work we focus on the development of an intelligent decision support system to predict an individual's age based on pelvic dimensions. Some basic image processing techniques were applied in order to extract the relevant features from pelvic X-rays, and the computational framework was built on top of a Logic Programming approach to Knowledge Representation and Reasoning that caters for the handling of incomplete, unknown, or even self-contradictory information, complemented with a Case-Based approach to computing.
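A rough illustration of how simple pelvic dimensions could be extracted from a thresholded X-ray is sketched below; the segmentation by thresholding and the measurements shown are assumptions for illustration, not the authors' image processing pipeline.

```python
# Illustrative sketch (assumed, not the authors' pipeline): crude extraction of
# superoinferior length and mediolateral breadth from a 2D grayscale image.
import numpy as np

def pelvic_dimensions(xray, threshold=128):
    mask = xray > threshold                     # crude segmentation by thresholding
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0 or cols.size == 0:
        return None                             # nothing segmented
    length = rows[-1] - rows[0] + 1             # superoinferior extent (pixels)
    breadth = cols[-1] - cols[0] + 1            # mediolateral extent (pixels)
    return {"length": int(length), "breadth": int(breadth),
            "ratio": float(breadth) / float(length)}

# Hypothetical usage on a synthetic image.
img = np.zeros((200, 180), dtype=np.uint8)
img[40:160, 30:150] = 200
print(pelvic_dimensions(img))
```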
Abstract:
Dyscalculia is usually perceived as a specific learning difficulty for mathematics or, more appropriately, arithmetic, although definitions and diagnoses of dyscalculia are still in their infancy and sometimes contradictory. Mathematical learning difficulties, however, are certainly not in their infancy; they are very prevalent and often devastating in their impact. Co-occurrence of learning disorders appears to be the rule rather than the exception, and is generally assumed to be a consequence of risk factors that are shared between disorders, for example, working memory. However, it should not be assumed that all dyslexics have problems with mathematics, although the percentage may be very high, or that all dyscalculics have problems with reading and writing. Because mathematics is very developmental, any insecurity or uncertainty in early topics will impact on later topics, hence the need to take intervention back to basics. Nevertheless, dyscalculia may be worked on in order to decrease its degree of severity. For example, disMAT, an app developed for Android, may help children to apply mathematical concepts without much effort, which turns it into a promising tool for dyscalculia treatment. Thus, this work focuses on the development of a Decision Support System to estimate evidence of dyscalculia in children, based on data obtained on-the-fly with disMAT. The computational framework is built on top of a Logic Programming approach to Knowledge Representation and Reasoning, grounded on a Case-Based approach to computing that allows for the handling of incomplete, unknown, or even self-contradictory information.
Abstract:
The swelling pressure of soil depends upon various soil parameters such as mineralogy, clay content, Atterberg's limits, dry density, moisture content, initial degree of saturation, etc., along with structural and environmental factors. It is very difficult to model and analyze swelling pressure effectively taking all of the above aspects into consideration. Various statistical/empirical methods have been attempted to predict the swelling pressure based on index properties of soil. In this paper, the computational intelligence techniques of artificial neural networks and support vector machines have been used to develop models, based on the set of available experimental results, to predict swelling pressure from the inputs: natural moisture content, dry density, liquid limit, plasticity index, and clay fraction. The generalization of the models to new data beyond the training set, which is required for the successful application of a model, is discussed. A detailed study of the relative performance of the computational intelligence techniques has been carried out based on different statistical performance criteria.
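As a hedged illustration of the kind of model described, the sketch below fits a support vector regression on toy soil index properties using scikit-learn; the data values are invented and the model settings are not the paper's.

```python
# Illustrative sketch (toy data, not the paper's experiments): support vector
# regression of swelling pressure from soil index properties.
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Inputs: natural moisture content (%), dry density (g/cc), liquid limit (%),
# plasticity index (%), clay fraction (%) -- invented values.
X = [
    [12.0, 1.70, 45.0, 22.0, 30.0],
    [18.0, 1.55, 60.0, 35.0, 45.0],
    [25.0, 1.40, 75.0, 48.0, 60.0],
]
y = [90.0, 160.0, 260.0]   # swelling pressure (kPa), invented values

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print(model.predict([[15.0, 1.62, 52.0, 28.0, 38.0]]))
```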
Abstract:
This report concentrates on progress during the last two years at the M.I.T. Artificial Intelligence Laboratory. Topics covered include the representation of knowledge, understanding English, learning and debugging, understanding vision and productivity technology. It is stressed that these various areas are tied closely together through certain fundamental issues and problems.
Abstract:
This study explores using artificial neural networks to predict the rheological and mechanical properties of underwater concrete (UWC) mixtures and to evaluate the sensitivity of such properties to variations in mixture ingredients. Artificial neural networks (ANN) mimic the structure and operation of biological neurons and have the unique ability of self-learning, mapping, and functional approximation. Details of the development of the proposed neural network model, its architecture, training, and validation are presented in this study. A database incorporating 175 UWC mixtures from nine different studies was developed to train and test the ANN model. The data are arranged in a patterned format. Each pattern contains an input vector that includes quantity values of the mixture variables influencing the behavior of UWC mixtures (that is, cement, silica fume, fly ash, slag, water, coarse and fine aggregates, and chemical admixtures) and a corresponding output vector that includes the rheological or mechanical property to be modeled. Results show that the ANN model thus developed is not only capable of accurately predicting the slump, slump-flow, washout resistance, and compressive strength of underwater concrete mixtures used in the training process, but it can also effectively predict the aforementioned properties for new mixtures designed within the practical range of the input parameters used in the training process with an absolute error of 4.6, 10.6, 10.6, and 4.4%, respectively.
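A minimal sketch of such a feed-forward network, mapping toy mixture ingredients to a single property such as slump, is shown below using scikit-learn; the values and architecture are illustrative assumptions, not the model developed in the study.

```python
# Illustrative sketch (toy data, not the study's ANN model): a small
# feed-forward network mapping mixture ingredients to a target property.
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each input row: cement, silica fume, fly ash, slag, water,
# coarse aggregate, fine aggregate, admixture (kg/m^3) -- invented values.
X = [
    [400, 30, 0, 0, 180, 950, 800, 5.0],
    [380, 0, 60, 0, 175, 940, 810, 4.5],
    [420, 25, 0, 50, 185, 930, 790, 6.0],
]
y = [210, 195, 220]   # e.g. slump in mm, invented values

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[390, 20, 30, 0, 178, 945, 805, 5.2]]))
```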
Abstract:
This report outlines the problem of intelligent failure recovery in a problem-solver for electrical design. We want our problem solver to learn as much as it can from its mistakes. Thus we cast the engineering design process in terms of Problem Solving by Debugging Almost-Right Plans (PSBDARP), a paradigm for automatic problem solving based on the belief that creation and removal of "bugs" is an unavoidable part of the process of solving a complex problem. The process of localization and removal of bugs called for by the PSBDARP theory requires an approach to engineering analysis in which every result has a justification which describes the exact set of assumptions it depends upon. We have developed a program based on Analysis by Propagation of Constraints which can explain the basis of its deductions. In addition to being useful to a PSBDARP designer, these justifications are used in Dependency-Directed Backtracking to limit the combinatorial search in the analysis routines. Although the research we will describe is explicitly about electrical circuits, we believe that similar principles and methods are employed by other kinds of engineers, including computer programmers.
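The sketch below illustrates the general idea of propagating a constraint (here Ohm's law) while recording a justification for each deduced value, which is the kind of record dependency-directed backtracking relies on; it is an illustration under assumed names, not the report's program.

```python
# Illustrative sketch (assumed, not the original program): constraint
# propagation where each deduced value remembers the rule and premises
# it depends on.
class Cell:
    def __init__(self, name):
        self.name = name
        self.value = None
        self.justification = None

    def set(self, value, rule, premises):
        self.value = value
        self.justification = (rule, [p.name for p in premises])

def ohms_law(v, i, r):
    """Propagate V = I * R in whichever direction has enough information."""
    if i.value is not None and r.value is not None and v.value is None:
        v.set(i.value * r.value, "V = I*R", [i, r])
    elif v.value is not None and r.value is not None and i.value is None:
        i.set(v.value / r.value, "I = V/R", [v, r])
    elif v.value is not None and i.value is not None and r.value is None:
        r.set(v.value / i.value, "R = V/I", [v, i])

v, i, r = Cell("V1"), Cell("I1"), Cell("R1")
i.set(2.0, "given", [])
r.set(5.0, "given", [])
ohms_law(v, i, r)
print(v.value, v.justification)   # 10.0 ('V = I*R', ['I1', 'R1'])
```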
Abstract:
This paper presents a comparative study between Artificial Intelligence problem solving and human problem solving. The study is based on solving, in several different ways, problems posed as multiple-choice questions. The general techniques used by humans to solve this kind of problem are grouped into blocks, and each block is divided into steps. A new architecture for an Intelligent Tutoring System (ITS) is proposed to support experts' knowledge representation and novices' activities. Problems are represented by a text and a set of feasible answers, each with a particular meaning and form, to be rigorously analyzed by the solver in order to find the right one. Each right solution is represented by a path through a conceptual space of states.
Abstract:
In this paper, we introduce a DAI approach, hereinafter called Fuzzy Distributed Artificial Intelligence (FDAI). Through the use of fuzzy logic, we have been able to develop mechanisms that we feel may effectively improve current DAI systems, giving much more flexibility and providing the grounding that a formal theory can bring. The appropriateness of the FDAI approach is explored in an important application, a fuzzy distributed traffic-light control system, where we have been able to aggregate and study several issues concerning fuzzy and distributed artificial intelligence. We also present a number of current research directions necessary to develop the FDAI approach more fully.
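A toy example of a single fuzzy rule for traffic-light control is sketched below; the membership functions, ranges, and rule are invented for illustration and are not the controller described in the paper.

```python
# Illustrative sketch (invented rule, not the paper's controller): a fuzzy rule
# that extends a green phase based on queue length and cross-traffic waiting time.
def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def green_extension(queue_len, wait_time):
    # Fuzzify the inputs (ranges are illustrative).
    long_queue = tri(queue_len, 5, 15, 25)
    long_wait = tri(wait_time, 30, 60, 90)
    # Rule: IF queue is long AND cross-traffic wait is NOT long THEN extend green.
    firing = min(long_queue, 1.0 - long_wait)
    return firing * 20.0   # defuzzify to an extension of up to 20 seconds

print(green_extension(queue_len=12, wait_time=20))
```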
Abstract:
Academic and industrial research in the late 90s has brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends, and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, have been developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process. Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible "beam" search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to adjust the beam search space continuously is described in the second chapter of this dissertation. However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove to be superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method should be chosen for any particular data set: distance-based, ML, or maybe maximum parsimony (MP). A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (potentially a costly mistake, both in terms of computational expense and in terms of reconstruction accuracy). Chapter III of this dissertation details a phylogenetic reconstruction expert system that automatically selects an appropriate method. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
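The sketch below shows a generic beam search that keeps only the best few partial solutions at each step, the search principle referred to above; the expansion and scoring functions are placeholders, not the dissertation's algorithm.

```python
# Illustrative sketch (generic, not the dissertation's algorithm): beam search
# keeping only the `beam_width` best partial solutions at each step.
def beam_search(initial, expand, score, beam_width=3, steps=10):
    """expand(state) -> iterable of successors; score(state) -> float (higher is better)."""
    beam = [initial]
    for _ in range(steps):
        candidates = [s for state in beam for s in expand(state)]
        if not candidates:
            break
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(beam, key=score)

# Toy usage: grow a string toward containing as many 'a' characters as possible.
best = beam_search("",
                   lambda s: [s + c for c in "ab"],
                   lambda s: s.count("a"),
                   beam_width=2, steps=5)
print(best)
```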
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid 80s, there has been significant progress in the development of parallelizing compilers for logic programming (and, more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce, in a tutorial way, some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers), etc.
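As a rough illustration of task granularity control, the sketch below runs a task in parallel only when its estimated cost exceeds a threshold; the cost model and threshold are invented for illustration and do not reflect the compilers discussed in the paper.

```python
# Illustrative sketch (invented cost model, not from the paper): granularity
# control that only pays the parallelism overhead for tasks estimated to be
# large enough.
from concurrent.futures import ThreadPoolExecutor, Future

GRANULARITY_THRESHOLD = 10_000

def estimated_cost(task_input):
    # A real cost model would come from static analysis; here we just use size.
    return len(task_input)

def run_task(task, task_input, executor):
    if estimated_cost(task_input) >= GRANULARITY_THRESHOLD:
        return executor.submit(task, task_input)   # large enough: run in parallel
    # Too small to be worth the overhead: run sequentially, wrapped as a done future.
    f = Future()
    f.set_result(task(task_input))
    return f

with ThreadPoolExecutor() as pool:
    small = run_task(sum, range(100), pool)        # runs sequentially
    large = run_task(sum, range(1_000_000), pool)  # runs in the pool
    print(small.result(), large.result())
```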