876 results for Artificial Intelligence, Constraint Programming, set variables, representation


Relevance: 100.00%

Abstract:

Proof-Carrying Code (PCC) is a general approach to mobile code safety in which programs are augmented with a certificate (or proof). The practical uptake of PCC greatly depends on the existence of a variety of enabling technologies which allow both to prove programs correct and to replace a costly verification process by an efficient checking procedure on the consumer side. In this work we propose Abstraction-Carrying Code (ACC), a novel approach which uses abstract interpretation as enabling technology. We argue that the large body of applications of abstract interpretation to program verification is amenable to the overall PCC scheme. In particular, we rely on an expressive class of safety policies which can be defined over different abstract domains. We use an abstraction (or abstract model) of the program computed by standard static analyzers as a certificate. The validity of the abstraction on the consumer side is checked in a single pass by a very efficient and specialized abstract interpreter. We believe that ACC brings the expressiveness, flexibility and automation which is inherent in abstract interpretation techniques to the area of mobile code safety. We have implemented and benchmarked ACC within the Ciao system preprocessor. The experimental results show that the checking phase is indeed faster than the proof generation phase, and that the sizes of certificates are reasonable.
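
As a rough illustration of the producer/consumer split described in this abstract, the following minimal Python sketch (not taken from the paper; the three-value abstract domain and the toy program are invented) computes an abstraction by fixpoint iteration on the producer side and validates it in a single pass on the consumer side:

    # Hedged sketch of the Abstraction-Carrying Code split: the producer ships an
    # abstract model (the certificate); the consumer checks it in one pass instead
    # of re-running the fixpoint. Domain, program and analyzer are toy examples,
    # not the CiaoPP machinery used in the paper.
    ORDER = {"bot": 0, "ground": 1, "any": 2}       # a tiny abstract domain

    def lub(a, b):                                  # least upper bound
        return a if ORDER[a] >= ORDER[b] else b

    # Toy "program": each predicate abstracted as the lub of its callees
    # (predicates with no callees are taken to succeed with ground arguments).
    PROGRAM = {"main": ["p", "q"], "p": [], "q": ["p"]}

    def transfer(pred, env):
        result = "ground" if not PROGRAM[pred] else "bot"
        for callee in PROGRAM[pred]:
            result = lub(result, env[callee])
        return result

    def analyze():
        """Producer side: iterate the transfer functions to a fixpoint (costly)."""
        env = {p: "bot" for p in PROGRAM}
        changed = True
        while changed:
            changed = False
            for p in PROGRAM:
                new = transfer(p, env)
                if new != env[p]:
                    env[p], changed = new, True
        return env                                  # the abstraction is the certificate

    def check(cert):
        """Consumer side: a single pass verifying the certificate is consistent."""
        return all(ORDER[transfer(p, cert)] <= ORDER[cert[p]] for p in PROGRAM)

    cert = analyze()
    print(cert, check(cert))                        # a valid certificate checks out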

Relevance: 100.00%

Abstract:

While negation has been a very active area of research in logic programming, comparatively few papers have been devoted to implementation issues. Furthermore, the negation-related capabilities of current Prolog systems are limited. We recently presented a novel method for incorporating negation in a Prolog compiler which takes a number of existing methods (some modified and improved by us) and uses them in a combined fashion. The method makes use of information provided by a global analysis of the source code. Our previous work focused on the systematic description of the techniques and the reasoning about correctness and completeness of the method, but provided no experimental evidence to evaluate the proposal. In this paper, we report on an implementation, using the Ciao Prolog system preprocessor, and provide experimental data which indicates that the method is not only feasible but also quite promising from the efficiency point of view. In addition, the tests have provided new insight as to how to improve the proposal further. Abstract interpretation techniques are shown to offer important improvements in this application.

Relevance: 100.00%

Abstract:

We informally discuss several issues related to the parallel execution of logic programming systems and concurrent logic programming systems, and their generalization to constraint programming. We propose a new view of these systems, based on a particular definition of parallelism. We argue that, under this view, a large number of the actual systems and models can be explained through the application, at different levels of granularity, of only a few basic principles: determinism, non-failure, independence (also referred to as stability), granularity, etc. Also, and based on the convergence of concepts that this view brings, we sketch a model for the implementation of several parallel constraint logic programming source languages and models based on a common, generic abstract machine and an intermediate kernel language.

Relevance: 100.00%

Abstract:

The control part of the execution of a constraint logic program can be conceptually shown as a search-tree, where nodes correspond to calls, and whose branches represent conjunctions and disjunctions. This tree represents the search space traversed by the program, and also has a direct relationship with the amount of work performed by the program. The nodes of the tree can be used to display information regarding the state and origin of instantiation of the variables involved in each call. This depiction can also be used for the enumeration process. These are the features implemented in APT, a tool which runs constraint logic programs while depicting a (modified) search-tree, keeping at the same time information about the state of the variables at every moment in the execution. This information can be used to replay the execution at will, both forwards and backwards in time. These views can be abstracted when the size of the execution requires it. The search-tree view is used as a framework onto which constraint-level visualizations (such as those presented in the following chapter) can be attached.
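
The following Python sketch is purely illustrative (APT is not implemented this way, and all names are invented); it shows the kind of search-tree trace the abstract describes, where each node records a call together with a snapshot of the variables so the execution can be replayed forwards and backwards:

    # Illustrative only: a search-tree trace in which every node keeps the call
    # and a snapshot of the constraint variables, supporting replay in time.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        call: str                                     # the call at this node
        var_state: dict                               # variable -> domain snapshot
        children: list = field(default_factory=list)  # conjunctions / disjunctions

    class Trace:
        def __init__(self):
            self.events = []                          # nodes in visiting order

        def visit(self, node):
            self.events.append(node)

        def replay(self, step):
            """Variable state as it was at the given point in time."""
            return self.events[min(step, len(self.events) - 1)].var_state

    root = Node("solve(X,Y)", {"X": "0..9", "Y": "0..9"})
    child = Node("X #> Y", {"X": "1..9", "Y": "0..8"})
    root.children.append(child)

    trace = Trace()
    trace.visit(root)
    trace.visit(child)
    print(trace.replay(0))                            # state before pruning
    print(trace.replay(1))                            # state after pruning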

Relevance: 100.00%

Abstract:

We present a generic preprocessor for combined static/dynamic validation and debugging of constraint logic programs. Passing programs through the preprocessor prior to execution allows detecting many bugs automatically. This is achieved by performing a repertoire of tests which range from simple syntactic checks to much more advanced checks based on static analysis of the program. Together with the program, the user may provide a series of assertions which trigger further automatic checking of the program. Such assertions are written using the assertion language presented in Chapter 2, which allows expressing a wide variety of properties. These properties extend beyond the predefined set which may be understandable by the available static analyzers and include properties defined by means of user programs. In addition to user-provided assertions, in each particular CLP system assertions may be available for predefined system predicates. Checking of both user-provided assertions and assertions for system predicates is attempted first at compile-time by comparing them with the results of static analysis. This may allow statically proving that the assertions hold (i.e., they are validated) or that they are violated (and thus bugs detected). User-provided assertions (or parts of assertions) which cannot be statically proved or disproved are optionally translated into run-time tests. The implementation of the preprocessor is generic in that it can be easily customized to different CLP systems and dialects and in that it is designed to allow the integration of additional analyses in a simple way. We also report on two tools which are instances of the generic preprocessor: CiaoPP (for the Ciao Prolog system) and CHIPRE (for the CHIP CLP(FD) system). The currently existing analyses include types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis.
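
A minimal Python sketch of the compile-time/run-time split described above, with an invented analysis result and assertion set (this is not CiaoPP or CHIPRE code): assertions entailed by the analysis results are discharged statically, and the remainder are left as run-time tests:

    # Sketch of the static/dynamic split (invented analysis results and
    # assertions): properties already entailed by the analysis are proved at
    # compile time, the rest are translated into run-time checks.
    ANALYSIS = {"sort/2": {"list(A)", "list(B)"}}                 # inferred properties
    ASSERTIONS = {"sort/2": {"list(A)", "list(B)", "sorted(B)"}}  # user assertions

    def split_assertions(analysis, assertions):
        proved, runtime = {}, {}
        for pred, props in assertions.items():
            inferred = analysis.get(pred, set())
            proved[pred] = props & inferred       # validated statically
            runtime[pred] = props - inferred      # left for run-time tests
        return proved, runtime

    proved, runtime = split_assertions(ANALYSIS, ASSERTIONS)
    print("statically proved:", proved)
    print("checked at run time:", runtime)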

Relevance: 100.00%

Abstract:

In an advanced program development environment, such as that discussed in the introduction of this book, several tools may coexist which handle both the program and information on the program in different ways. Also, these tools may interact among themselves and with the user. Thus, the different tools and the user need some way to communicate. It is our design principle that such communication be performed in terms of assertions. Assertions are syntactic objects which allow expressing properties of programs. Several assertion languages have been used in the past in different contexts, mainly related to program debugging. In this chapter we propose a general language of assertions which is used in different tools for validation and debugging of constraint logic programs in the context of the DiSCiPl project. The assertion language proposed is parametric w.r.t. the particular constraint domain and properties of interest being used in each different tool. The language proposed is quite general in that it poses few restrictions on the kind of properties which may be expressed. We believe the assertion language we propose is of practical relevance and appropriate for the different uses required in the tools considered.

Relevance: 100.00%

Abstract:

This introduction gives a general perspective of the debugging methodology and the tools developed in the ESPRIT IV project DiSCiPl "Debugging Systems for Constraint Programming". It has been prepared by the editors of this volume by substantial rewriting of the DiSCiPl deliverable CP Debugging Tools [1]. This introduction is organised as follows. Section 1 outlines the DiSCiPl view of debugging, its associated debugging methodology, and motivates the kinds of tools proposed: the assertion-based tools, the declarative diagnoser and the visualisation tools. Sections 2 through 4 provide a short presentation of the tools of each kind. Finally, Section 5 presents a summary of the tools developed in the project. This introduction gives only a general view of the DiSCiPl debugging methodology and tools. For details and for specific bibliographic references the reader is referred to the subsequent chapters.

Relevance: 100.00%

Abstract:

With the Bonner spheres spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, Regularization, Parametrization, Least-squares, and Maximum Entropy are some of the techniques utilized for unfolding. In the last decade, methods based on Artificial Intelligence technology have been used. Approaches based on Genetic Algorithms and Artificial Neural Networks have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite the advantages of Artificial Neural Networks, this approach still has some drawbacks, mainly in the design process of the network, e.g., the optimal selection of the architectural and learning ANN parameters. In recent years, hybrid technologies combining Artificial Neural Networks and Genetic Algorithms have been utilized to overcome these drawbacks. In this work, several ANN topologies were trained and tested using Artificial Neural Networks and Genetically Evolved Artificial Neural Networks with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. Here, a comparative study of both procedures has been carried out.
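
As a hedged illustration of the unfolding-by-ANN idea (synthetic data and an invented response matrix, not the paper's experimental setup), the sketch below fits a small feed-forward network that maps Bonner-sphere count rates back to a discretized spectrum:

    # Synthetic illustration only: fit a small feed-forward ANN that learns the
    # mapping from simulated count rates to the discretized spectrum that
    # produced them.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_spheres, n_bins, n_samples = 7, 12, 500

    response = rng.uniform(0.1, 1.0, size=(n_spheres, n_bins))  # invented responses
    spectra = rng.uniform(0.0, 1.0, size=(n_samples, n_bins))   # training spectra
    counts = spectra @ response.T                               # simulated count rates

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    net.fit(counts, spectra)                  # learn the counts -> spectrum mapping
    print("training R^2:", net.score(counts, spectra))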

Relevance: 100.00%

Abstract:

This paper introduces a semantic language developed with the objective of being used in a semantic analyzer based on linguistic and world knowledge. Linguistic knowledge is provided by a Combinatorial Dictionary and several sets of rules. Extra-linguistic information is stored in an Ontology. The meaning of the text is represented by means of a series of RDF-type triples of the form predicate(subject, object). The semantic analyzer is one of the options of the multifunctional ETAP-3 linguistic processor. The analyzer can be used for Information Extraction and Question Answering. We describe the semantic representation of expressions that provide an assessment of the number of objects involved and/or give a quantitative evaluation of different types of attributes. We focus on the following aspects: 1) parametric and non-parametric attributes; 2) gradable and non-gradable attributes; 3) ontological representation of different classes of attributes; 4) absolute and relative quantitative assessment; 5) punctual and interval quantitative assessment; 6) intervals with precise and fuzzy boundaries.
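
A small illustrative sketch (the predicate and entity names are invented and do not reflect ETAP-3's actual output format) of representing meaning as predicate(subject, object) triples, including an interval-valued quantitative assessment with fuzzy boundaries:

    # Illustrative only: meaning as predicate(subject, object) triples, here for
    # "about twenty people attended the meeting", an interval with fuzzy bounds.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Triple:
        predicate: str
        subject: str
        obj: str

    meaning = [
        Triple("attend", "people_1", "meeting_1"),
        Triple("cardinality", "people_1", "interval_1"),
        Triple("lower_bound", "interval_1", "18"),
        Triple("upper_bound", "interval_1", "22"),
        Triple("precision", "interval_1", "approximate"),   # fuzzy boundaries
    ]

    for t in meaning:
        print(f"{t.predicate}({t.subject}, {t.obj})")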

Relevance: 100.00%

Abstract:

In this paper we present a revisited classification of term variation in the light of the Linked Data initiative. Linked Data refers to a set of best practices for publishing and connecting structured data on the Web with the idea of transforming it into a global graph. One of the crucial steps of this initiative is the linking step, in which datasets in one or more languages need to be linked or connected with one another. We claim that the linking process would be facilitated if datasets are enriched with lexical and terminological information. With that final aim in mind, we propose a classification of lexical, terminological and semantic variants that will become part of a model of linguistic descriptions that is currently being proposed within the framework of the W3C Ontology-Lexica Community Group to enrich ontologies and Linked Data vocabularies. Examples of modeling solutions for the different types of variants are also provided.

Relevance: 100.00%

Abstract:

Currently, there is a great deal of well-founded explicit knowledge formalizing general notions, such as time concepts and the part_of relation. Yet, it is often the case that instead of reusing ontologies that implement such notions (the so-called general ontologies), engineers create procedural programs that implicitly implement this knowledge. They do not save time and code by reusing explicit knowledge, and devote effort to solving problems that other people have already adequately solved. Consequently, we have developed a methodology that helps engineers to: (a) identify the type of general ontology to be reused; (b) find out which axioms and definitions should be reused; (c) make a decision, using formal concept analysis, on what general ontology is going to be reused; and (d) adapt and integrate the selected general ontology in the domain ontology to be developed. To illustrate our approach we have employed use cases. For each use case, we provide a set of heuristics with examples. Each of these heuristics has been tested in either OWL or Prolog. Our methodology has been applied to develop a pharmaceutical product ontology. Additionally, we have carried out a controlled experiment with graduate students doing an MSc in Artificial Intelligence. This experiment has yielded some interesting findings concerning what kind of features the future extensions of the methodology should have.
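
As an illustration of step (c), the sketch below applies the basic derivation operators of formal concept analysis to an invented context in which the objects are candidate general ontologies and the attributes are the notions the engineer needs to reuse (names and attribute assignments are made up for the example):

    # Invented formal context: objects are candidate general ontologies,
    # attributes are the required notions; the two derivation operators expose
    # which candidates cover the requirements.
    context = {
        "TimeOntologyA": {"instants", "intervals", "before/after"},
        "TimeOntologyB": {"instants", "intervals"},
        "MereologyOntology": {"part_of", "instants"},
    }

    def extent(attrs):
        """Objects that have all of the given attributes."""
        return {o for o, a in context.items() if attrs <= a}

    def intent(objs):
        """Attributes shared by all of the given objects."""
        sets = [context[o] for o in objs]
        return set.intersection(*sets) if sets else set()

    required = {"instants", "intervals", "before/after"}
    candidates = extent(required)
    print("candidates covering the requirements:", candidates)
    print("attributes they share:", intent(candidates))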

Relevance: 100.00%

Abstract:

This paper proposes a novel combination of artificial intelligence planning and other techniques for improving decision-making in the context of multi-step multimedia content adaptation. In particular, it describes a method that allows decision-making (selecting the adaptation to perform) in situations where third-party pluggable multimedia conversion modules are involved and the multimedia adaptation planner does not know their exact adaptation capabilities. In this approach, the multimedia adaptation planner module is only responsible for a part of the required decisions; the pluggable modules make additional decisions based on different criteria. We demonstrate that partial decision-making is not only attainable, but also introduces advantages with respect to a system in which these conversion modules are not capable of providing additional decisions. This means that transferring decisions from the multi-step multimedia adaptation planner to the pluggable conversion modules increases the flexibility of the adaptation. Moreover, by allowing conversion modules to be only partially described, the range of problems that these modules can address increases, while significantly decreasing both the description length of the adaptation capabilities and the planning decision time. Finally, we specify the conditions under which knowing the partial adaptation capabilities of a set of conversion modules will be enough to compute a proper adaptation plan.
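
The following Python sketch (module names, capabilities and parameters are invented) illustrates the partial decision-making idea: the planner chains conversion modules from coarse capability descriptions only, and each module later makes the decisions the planner left open, such as the output bitrate:

    # Illustrative only: the planner reasons over coarse input/output
    # descriptions; each pluggable module later fixes parameters the planner
    # never reasoned about.
    MODULES = {
        "extract_audio": {"in": "video", "out": "audio"},
        "transcode":     {"in": "audio", "out": "audio_mp3"},
    }

    def plan(start, goal):
        """Forward chaining over the coarse capability descriptions only."""
        state, steps = start, []
        while state != goal:
            step = next((m for m, c in MODULES.items() if c["in"] == state), None)
            if step is None:
                return None                     # no adaptation plan exists
            steps.append(step)
            state = MODULES[step]["out"]
        return steps

    def run_module(name, content):
        """The module makes the decisions the planner left open (e.g. bitrate)."""
        bitrate = "128k" if content.get("network") == "slow" else "320k"
        return dict(content, format=MODULES[name]["out"], bitrate=bitrate)

    content = {"format": "video", "network": "slow"}
    for step in plan("video", "audio_mp3"):
        content = run_module(step, content)
    print(content)          # steps chosen by the planner, bitrate by the module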

Relevance: 100.00%

Abstract:

This paper describes a knowledge model for a configuration problem in the domain of traffic control. The goal of this model is to help traffic engineers in the dynamic selection of a set of messages to be presented to drivers on variable message signals. This selection is done in a real-time context using data recorded by traffic detectors on motorways. The system follows an advanced knowledge-based solution that implements two abstract problem-solving methods according to a model-based approach recently proposed in the knowledge engineering field. Finally, the paper presents a discussion about the advantages and drawbacks found for this problem as a consequence of the applied knowledge modeling approach.
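
A hedged Python sketch of the kind of dynamic message selection described above (thresholds, traffic states and message texts are invented, not the paper's knowledge model): detector readings are abstracted into a qualitative traffic state, from which one message per sign is configured:

    # Illustrative only: abstract raw detector data into a qualitative traffic
    # state, then configure one message per variable message sign from it.
    def classify(detector):
        """Abstract raw detector data into a qualitative traffic state."""
        if detector["speed_kmh"] < 30 and detector["occupancy"] > 0.35:
            return "congestion"
        if detector["speed_kmh"] < 60:
            return "dense"
        return "free_flow"

    MESSAGES = {
        "congestion": "CONGESTION AHEAD - USE ALTERNATIVE ROUTE",
        "dense": "DENSE TRAFFIC - REDUCE SPEED",
        "free_flow": "",
    }

    def select_messages(readings):
        """Pick one message per sign from the state seen by its detector."""
        return {sign: MESSAGES[classify(det)] for sign, det in readings.items()}

    readings = {"VMS-12": {"speed_kmh": 22, "occupancy": 0.41},
                "VMS-17": {"speed_kmh": 95, "occupancy": 0.08}}
    print(select_messages(readings))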