946 results for Generic APIs
Abstract:
The relationship between abstract interpretation [2] and partial evaluation [5] has received considerable attention and (partial) integrations have been proposed starting from both the partial deduction (see e.g. [6] and its references) and abstract interpretation perspectives. Abstract interpretation-based analyzers (such as the CiaoPP analyzer [9,4]) generally compute a program analysis graph [1] in order to propagate (abstract) call and success information by performing fixpoint computations when needed. On the other hand, partial deduction methods [7] incorporate powerful techniques for on-line specialization including (concrete) call propagation and unfolding.
Abstract:
This paper describes a model of persistence in (C)LP languages and two different and practically very useful ways to implement this model in current systems. The fundamental idea is that persistence is a characteristic of certain dynamic predicates (i.e., those which encapsulate state). The main effect of declaring a predicate persistent is that the dynamic changes made to such predicates persist from one execution to the next one. After proposing a syntax for declaring persistent predicates, a simple, file-based implementation of the concept is presented and some examples shown. An additional implementation is presented which stores persistent predicates in an external database. The abstraction of the concept of persistence from its implementation allows developing applications which can store their persistent predicates alternatively in files or databases with only a few simple changes to a declaration stating the location and modality used for persistent storage. The paper presents the model, the implementation approach in both the cases of using files and relational databases, a number of optimizations of the process (using information obtained from static global analysis and goal clustering), and performance results from an implementation of these ideas.
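As a concrete illustration of the model, here is a minimal sketch in Python (the paper's own setting is (C)LP, so the class names, the backend interface, and the JSON fact format are assumptions for illustration, not the Ciao API). The key point it shows is the abstraction of persistence from its implementation: the storage backend is hidden behind a load/save interface.

```python
import json

class FileBackend:
    """Stores the facts of one persistent predicate in a plain file."""
    def __init__(self, path):
        self.path = path

    def load(self):
        try:
            with open(self.path) as f:
                return [tuple(fact) for fact in json.load(f)]
        except FileNotFoundError:
            return []                        # first execution: no facts yet

    def save(self, facts):
        with open(self.path, "w") as f:
            json.dump([list(fact) for fact in facts], f)

class PersistentPredicate:
    """A dynamic predicate whose updates survive from one execution to the
    next. A database-backed store with the same load/save interface could
    be swapped in, changing only the backend argument."""
    def __init__(self, backend):
        self.backend = backend
        self.facts = backend.load()          # state restored on startup

    def assertz(self, fact):                 # add a fact (cf. Prolog assertz/1)
        self.facts.append(fact)
        self.backend.save(self.facts)

    def retract(self, fact):                 # remove one matching fact
        self.facts.remove(fact)
        self.backend.save(self.facts)

# Each run sees the facts asserted by previous runs.
counter = PersistentPredicate(FileBackend("counter_facts.json"))
counter.assertz(("run", len(counter.facts) + 1))
print(counter.facts)
```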
Abstract:
We informally discuss several issues related to the parallel execution of logic programming systems and concurrent logic programming systems, and their generalization to constraint programming. We propose a new view of these systems, based on a particular definition of parallelism. We argue that, under this view, a large number of the actual systems and models can be explained through the application, at different levels of granularity, of only a few basic principles: determinism, non-failure, independence (also referred to as stability), granularity, etc. Also, and based on the convergence of concepts that this view brings, we sketch a model for the implementation of several parallel constraint logic programming source languages and models based on a common, generic abstract machine and an intermediate kernel language.
Abstract:
Context-sensitive analysis provides information which is potentially more accurate than that provided by context-free analysis. Such information can then be applied in order to validate/debug the program and/or to specialize the program obtaining important improvements. Unfortunately, context-sensitive analysis of modular programs poses important theoretical and practical problems. One solution, used in several proposals, is to resort to context-free analysis. Other proposals do address context-sensitive analysis, but are only applicable when the description domain used satisfies rather restrictive properties. In this paper, we argue that a general framework for context-sensitive analysis of modular programs, i.e., one that allows using all the domains which have proved useful in practice in the non-modular setting, is indeed feasible and very useful. Driven by our experience in the design and implementation of analysis and specialization techniques in the context of CiaoPP, the Ciao system preprocessor, in this paper we discuss a number of design goals for context-sensitive analysis of modular programs as well as the problems which arise in trying to meet these goals. We also provide a high-level description of a framework for analysis of modular programs which does substantially meet these objectives. This framework is generic in that it can be instantiated in different ways in order to adapt to different contexts. Finally, the behavior of the different instantiations w.r.t. the design goals that motivate our work is also discussed.
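One way such a generic framework can be organized is sketched below (a minimal Python sketch; the answer-table encoding, the key structure, and the analyze_one/lookup interface are illustrative assumptions, not the CiaoPP design): a global answer table keyed by (module, predicate, abstract call pattern), with dependency tracking so that when the answer for an exported predicate changes, the callers that used it are scheduled for reanalysis.

```python
def analyze_modular(entry_points, analyze_one):
    """entry_points: keys of the form (module, predicate, call pattern).
    analyze_one(key, lookup) analyzes one predicate for one call pattern,
    calling lookup(callee_key) for imported predicates, and returns the
    abstract success pattern (None plays the role of 'no answer yet')."""
    answers = {}        # key -> current success pattern
    deps = {}           # callee key -> caller keys that used its answer
    worklist = list(entry_points)

    while worklist:
        key = worklist.pop()

        def lookup(callee, _caller=key):
            deps.setdefault(callee, set()).add(_caller)
            if callee not in answers:
                answers[callee] = None        # schedule callee for analysis
                worklist.append(callee)
            return answers[callee]

        new = analyze_one(key, lookup)
        if new != answers.get(key):            # answer grew: reanalyze the
            answers[key] = new                 # callers that relied on it
            worklist.extend(deps.get(key, ()))
    return answers
```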
Abstract:
We present a generic preprocessor for combined static/dynamic validation and debugging of constraint logic programs. Passing programs through the preprocessor prior to execution allows detecting many bugs automatically. This is achieved by performing a repertoire of tests which range from simple syntactic checks to much more advanced checks based on static analysis of the program. Together with the program, the user may provide a series of assertions which trigger further automatic checking of the program. Such assertions are written using the assertion language presented in Chapter 2, which allows expressing a wide variety of properties. These properties extend beyond the predefined set which may be understandable by the available static analyzers and include properties defined by means of user programs. In addition to user-provided assertions, in each particular CLP system assertions may be available for predefined system predicates. Checking of both user-provided assertions and assertions for system predicates is attempted first at compile-time by comparing them with the results of static analysis. This may allow statically proving that the assertions hold (i.e., they are validated) or that they are violated (and thus bugs detected). User-provided assertions (or parts of assertions) which cannot be statically proved nor disproved are optionally translated into run-time tests. The implementation of the preprocessor is generic in that it can be easily customized to different CLP systems and dialects and in that it is designed to allow the integration of additional analyses in a simple way. We also report on two tools which are instances of the generic preprocessor: CiaoPP (for the Ciao Prolog system) and CHIPRE (for the CHIP CLP(FD) system). The currently existing analyses include types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis.
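A minimal sketch of this checking scheme, assuming a toy property representation (the three outcomes mirror the text: statically validated, statically disproved, or turned into a run-time test; none of this is the actual CiaoPP/CHIPRE code):

```python
def check_assertion(required_props, verdict):
    """required_props: properties the assertion demands at a program point.
    verdict(p) in {"true", "false", "unknown"}: what static analysis proved."""
    verdicts = [verdict(p) for p in required_props]
    if all(v == "true" for v in verdicts):
        return "checked"        # statically validated
    if any(v == "false" for v in verdicts):
        return "false"          # statically disproved: a bug is detected
    return "check"              # unproven: translate into a run-time test

def add_runtime_test(pred, required_props, holds):
    """Wrap a predicate so an unproven assertion is tested dynamically;
    holds(p, args) decides property p on concrete arguments."""
    def checked_pred(*args):
        for p in required_props:
            if not holds(p, args):
                raise AssertionError(f"assertion failed: {p} on {args!r}")
        return pred(*args)
    return checked_pred
```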
Abstract:
The relationship between abstract interpretation and partial deduction has received considerable attention and (partial) integrations have been proposed starting from both the partial deduction and abstract interpretation perspectives. In this work we present what we argue is the first fully described generic algorithm for efficient and precise integration of abstract interpretation and partial deduction. Taking as starting point state-of-the-art algorithms for context-sensitive, polyvariant abstract interpretation and (abstract) partial deduction, we present an algorithm which combines the best of both worlds. Key ingredients include the accurate success propagation inherent to abstract interpretation and the powerful program transformations achievable by partial deduction. In our algorithm, the calls which appear in the analysis graph are not analyzed w.r.t. the original definition of the procedure but w.r.t. specialized definitions of these procedures. Such specialized definitions are obtained by applying both unfolding and abstract executability. Our framework is parametric w.r.t. different control strategies and abstract domains. Different combinations of such parameters correspond to existing algorithms for program analysis and specialization. Simultaneously, our approach opens the door to the efficient computation of strictly more precise results than those achievable by each of the individual techniques. The algorithm is now one of the key components of the CiaoPP analysis and specialization system.
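The overall shape of such an integration might be sketched as follows (Python pseudocode under strong simplifying assumptions: unfold, abstract_exec, and analyze stand for the control strategies and abstract domain the framework is parametric on, and reanalysis of recursive calls when answers grow is omitted):

```python
def analyze_and_specialize(entries, unfold, abstract_exec, analyze):
    """entries: initial (predicate, abstract call pattern) pairs.
    unfold(call) builds a specialized definition by unfolding;
    abstract_exec(clause, call) simplifies it w.r.t. the abstract call;
    analyze(call, clauses, answers) returns (success pattern, new calls)."""
    answers, versions = {}, {}
    worklist = list(entries)
    while worklist:
        call = worklist.pop()
        if call in versions:
            continue
        # the call is analyzed w.r.t. a specialized definition, not the original
        clauses = [abstract_exec(c, call) for c in unfold(call)]
        versions[call] = clauses
        # success propagation, as in abstract interpretation
        success, new_calls = analyze(call, clauses, answers)
        answers[call] = success
        worklist.extend(new_calls)    # calls arising inside this version
    return versions, answers
```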
Abstract:
Ciao Prolog incorporates a module system which allows separate compilation and sensible creation of standalone executables. We describe some of the main aspects of the Ciao modular compiler, ciaoc, which takes advantage of the characteristics of the Ciao Prolog module system to automatically perform separate and incremental compilation and efficiently build small, standalone executables with competitive run-time performance. ciaoc can also detect statically a larger number of programming errors. We also present a generic code processing library for handling modular programs, which provides an important part of the functionality of ciaoc. This library allows the development of program analysis and transformation tools in a way that is to some extent orthogonal to the details of module system design, and has been used in the implementation of ciaoc and other Ciao system tools. We also describe the different types of executables which can be generated by the Ciao compiler, which offer different tradeoffs between executable size, startup time, and portability, depending, among other factors, on the linking regime used (static, dynamic, lazy, etc.). Finally, we provide experimental data which illustrate these tradeoffs.
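For illustration, a minimal sketch of the incremental-compilation decision (in Python; the file layout, the interface checksums, and the two-pass structure are assumptions, not ciaoc's actual mechanism): recompile a module only if its source changed, and recompile its importers only if its interface changed.

```python
import os, hashlib

def needs_recompile(src, obj):
    return (not os.path.exists(obj)
            or os.path.getmtime(src) > os.path.getmtime(obj))

def interface_checksum(itf):
    with open(itf, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def incremental_build(modules, imports, compile_module):
    """modules: name -> (src, obj, itf) paths; imports: name -> imported names;
    compile_module(name) regenerates the obj and itf files of one module."""
    old_itf = {m: interface_checksum(itf) if os.path.exists(itf) else None
               for m, (_src, _obj, itf) in modules.items()}
    changed = set()
    for m, (src, obj, itf) in modules.items():   # pass 1: changed sources
        if needs_recompile(src, obj):
            compile_module(m)
            if interface_checksum(itf) != old_itf[m]:
                changed.add(m)                   # only interface changes ripple
    for m in modules:                            # pass 2: importers of changed
        if any(d in changed for d in imports.get(m, ())):
            compile_module(m)    # (cascading changes would need a topological pass)
    return changed
```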
Abstract:
Information generated by abstract interpreters has long been used to perform program specialization. Additionally, if the abstract interpreter generates a multivariant analysis, it is also possible to perform multiple specialization. Information about values of variables is propagated by simulating program execution and performing fixpoint computations for recursive calls. In contrast, traditional partial evaluators (mainly) use unfolding for both propagating values of variables and transforming the program. It is known that abstract interpretation is a better technique for propagating success values than unfolding. However, the program transformations induced by unfolding may lead to important optimizations which are not directly achievable in the existing frameworks for multiple specialization based on abstract interpretation. The aim of this work is to devise a specialization framework which integrates the better information propagation of abstract interpretation with the powerful program transformations performed by partial evaluation, and which can be implemented via small modifications to existing generic abstract interpreters. With this aim, we will relate top-down abstract interpretation with traditional concepts in partial evaluation and sketch how the sophisticated techniques developed for controlling partial evaluation can be adapted to the proposed specialization framework. We conclude that there can be both practical and conceptual advantages in the proposed integration of partial evaluation and abstract interpretation.
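A minimal sketch of the multiple-specialization step itself, assuming the multivariant analysis has already recorded the abstract call patterns (the specialize parameter and the version-naming scheme are illustrative):

```python
def multiply_specialize(proc, variants, specialize):
    """proc: procedure name; variants: abstract call patterns recorded by a
    multivariant analysis; specialize(proc, pattern) builds an optimized
    version of the procedure for that pattern."""
    return {pattern: (f"{proc}__v{i}", specialize(proc, pattern))
            for i, pattern in enumerate(variants)}

def rewire_calls(call_sites, versions):
    """Redirect each call site (paired with its analyzed call pattern) to
    the name of the version specialized for that pattern."""
    return [(site, versions[pattern][0]) for site, pattern in call_sites]
```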
Abstract:
Abstract is not available
Abstract:
Abstract interpreters rely on the existence of a fixpoint algorithm that calculates a least upper bound approximation of the semantics of the program. Usually, that algorithm is described in terms of the particular language under study and therefore it is not directly applicable to programs written in a different source language. In this paper we introduce a generic, block-based, and uniform representation of the program control flow graph and a language-independent fixpoint algorithm that can be applied to a variety of languages and, in particular, Java. Two major characteristics of our approach are accuracy (obtained through a top-down, context-sensitive approach) and reasonable efficiency (achieved by means of memoization and dependency tracking techniques). We have also implemented the proposed framework and show some initial experimental results for standard benchmarks, which further support the feasibility of the solution adopted.
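A minimal sketch of such a language-independent fixpoint (in Python; the block encoding and the transfer/join parameters are assumptions standing in for the paper's generic representation): a worklist over basic blocks with memoized block outputs and dependency-driven reanalysis.

```python
def fixpoint(blocks, succs, transfer, join, bottom, entry, entry_state):
    """blocks: block ids of the control flow graph; succs[b]: successors;
    transfer(b, s): abstract semantics of block b on input state s;
    join: least upper bound on the abstract domain; bottom: least element."""
    in_state = {b: bottom for b in blocks}
    out_state = {b: bottom for b in blocks}     # memoized block results
    in_state[entry] = entry_state
    worklist = [entry]
    while worklist:
        b = worklist.pop()
        new_out = transfer(b, in_state[b])
        if new_out != out_state[b]:             # result changed:
            out_state[b] = new_out
            for s in succs.get(b, ()):          # dependency tracking: revisit
                merged = join(in_state[s], new_out)   # only blocks whose
                if merged != in_state[s]:             # input actually grows
                    in_state[s] = merged
                    worklist.append(s)
    return in_state, out_state
```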
Abstract:
In this genre analysis research paper, we compare U.S. patents, contracts, and regulations on technical matters with a focus upon the relation between vagueness and communicative purposes and subpurposes of these three genres. Our main interest is the investigation of intergeneric conventions across the three genres, based on the software analysis of three corpora (one for each genre, 1 million words per corpus). The result of the investigation is that intergeneric conventions are found at the level of types of expressed linguistic vagueness, but that intergeneric conventions at the level of actual formulations are rare. The conclusion is that at this latter level the influence from the situation type underlying the individual genre is more important than the overarching legal character of the genres, when we talk about introducing explicit vagueness in the text.
Abstract:
Current bias estimation algorithms for air traffic control (ATC) surveillance are focused on radar sensors, but the integration of new sensors (especially automatic dependent surveillance-broadcast and wide area multilateration) demands the extension of traditional procedures. This study describes a generic architecture for bias estimation applicable to multisensor multitarget surveillance systems. It consists of first performing bias estimations using measurements from each target, of a subset of sensors assumed to be reliable, forming track bias estimations. All track bias estimations are combined to obtain, for each of those sensors, the corresponding sensor bias. Then, sensor bias terms are corrected, to subsequently calculate the target-specific or sensor-target-pair-specific biases. Once these target-specific biases are corrected, the process is repeated recursively for other sets of less reliable sensors, assuming the bias-corrected measurements from previous iterations are unbiased. This study describes the architecture and outlines the methodology for the estimation and the bias estimation design processes. Then the approach is validated through simulation, and compared with previous methods in the literature. Finally, the study describes the application of the methodology to the design of the bias estimation procedures for a modern ATC surveillance application, specifically for off-line assessment of ATC surveillance performance.
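A minimal numeric sketch of the recursive tier-by-tier scheme (in Python; scalar measurements, a simple fused reference track, and mean-residual bias estimates are simplifying assumptions, not the study's full multisensor multitarget methodology):

```python
def tier_biases(measurements, reference):
    """measurements: sensor -> list of scalar values, sample-aligned with the
    reference track estimates; bias = mean residual against the reference."""
    return {s: sum(m - r for m, r in zip(ms, reference)) / len(ms)
            for s, ms in measurements.items()}

def recursive_bias_estimation(tiers, fuse):
    """tiers: list of sensor->measurements dicts, most reliable tier first;
    fuse(all_measurements) builds the reference track (e.g. a per-sample
    average). Bias-corrected data from earlier tiers is treated as unbiased
    when estimating the next, less reliable tier."""
    corrected, biases = {}, {}
    for tier in tiers:
        reference = fuse({**corrected, **tier})
        tb = tier_biases(tier, reference)
        biases.update(tb)
        for s, ms in tier.items():            # corrected measurements feed
            corrected[s] = [m - tb[s] for m in ms]   # the next iteration
    return biases
```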
Abstract:
Competency assessment is a key issue for improving the quality of teaching and learning within the current European Higher Education Area (EHEA). The aim of this paper is to review current research on the assessment of generic competences, especially through online tools. A keyword search of the Web of Knowledge (Thomson Reuters) was conducted; the abstracts were reviewed and the results classified by time period, country, and research area. A set of articles from the period 2010-2012 was selected, in which future trends were analyzed. It is concluded that research on the assessment of generic competences is currently developing in the educational area, although it remains more prominent in the professional one. Additionally, it is surprising that it appears most often at pre-university education levels. The academic context has increased its research activity over the past five years, with developments in the Anglo-Saxon countries differing from those in countries attached to the Bologna Process. The latest reports indicate that learning competences must fit the individual reality of each person, so the use of ICTs in their development and evaluation is essential given their immediacy and motivational capacity. There is a clear trend towards an evaluation model that includes a 360º analysis of specific and generic competences.