947 results for 005 Computer programming, programs
Abstract:
We describe lpdoc, a tool which generates documentation manuals automatically from one or more logic program source files, written in ISO-Prolog, Ciao, and other (C)LP languages. It is particularly useful for documenting library modules, for which it automatically generates a rich description of the module interface. However, it can also be used quite successfully to document full applications. The documentation can be generated in many formats, including texinfo, dvi, ps, pdf, info, html/css, Unix nroff/man, Windows help, etc., and can include bibliographic citations and images. lpdoc can also generate "man" pages (Unix man page format), nicely formatted plain ASCII "readme" files, installation scripts useful when the manuals are included in software distributions, brief descriptions in html/css or info formats suitable for inclusion in on-line indices of manuals, and even complete WWW and info sites containing on-line catalogs of documents and software distributions. A fundamental advantage of using lpdoc is that it helps maintain a true correspondence between the program and its documentation, and also helps identify precisely to which version of the program a given printed manual corresponds. The quality of the generated documentation can be greatly enhanced by including within the program text assertions (declarations with types, modes, etc.) for the predicates in the program, as well as machine-readable comments. These assertions and comments are written using the Ciao system assertion language. A simple compatibility library allows conventional (C)LP systems to ignore these assertions and comments and to process programs documented in this way as usual. The lpdoc manual, all other Ciao system manuals, and most of this paper are generated by lpdoc.
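By way of illustration, assertions and machine-readable comments of the kind mentioned above look roughly as follows in the Ciao assertion language (a minimal sketch; the exact declarations accepted vary across Ciao/lpdoc versions):

    :- module(list_utils, [app/3], [assertions]).

    :- doc(title, "Small list utilities").

    % A predicate assertion: call pattern, success pattern, and a
    % machine-readable comment that lpdoc turns into manual text.
    :- pred app(Xs, Ys, Zs) : (list(Xs), list(Ys)) => list(Zs)
       # "@var{Zs} is the concatenation of @var{Xs} and @var{Ys}.".

    app([], Ys, Ys).
    app([X|Xs], Ys, [X|Zs]) :- app(Xs, Ys, Zs).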
Abstract:
This paper discusses some issues which arise in the dataflow analysis of constraint logic programming (CLP) languages. The basic technique applied is that of abstract interpretation. First, some types of optimizations possible in a number of CLP systems (including efficient parallelization) are presented, and the information that has to be obtained at compile time in order to implement such optimizations is considered. Two approaches are then proposed and discussed for obtaining this information for a CLP program: one based on an analysis of a CLP metainterpreter using standard Prolog analysis tools, and a second one based on direct analysis of the CLP program. For the second approach, an abstract domain which approximates groundness (also referred to as "definiteness") information (i.e., being constrained to a single value) and the related abstraction functions are presented.
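As a small example of the definiteness information referred to above (a generic illustration using a CLP(R)-style constraint library, not taken from the paper):

    :- use_module(library(clpr)).   % assumed: any CLP(R)-style solver providing {}/1

    % After posting these constraints, Y is trivially definite, and X is
    % definite as well because it is uniquely determined by a linear
    % equation over definite variables; a compile-time definiteness
    % (groundness) analysis aims to infer exactly this kind of fact.
    demo(X, Y) :-
        { Y = 2.0, X + Y = 4.0 }.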
Abstract:
Dynamic scheduling increases the expressive power of logic programming languages, but also introduces some overhead. In this paper we present two classes of program transformations designed to reduce this additional overhead, while preserving the operational semantics of the original programs, modulo ordering of literals woken at the same time. The first class of transformations simplifies the delay conditions while the second class moves delayed literals later in the rule body. Application of the program transformations can be automated using information provided by compile-time analysis. We provide experimental results obtained from an implementation of the proposed techniques using the CIAO prototype compiler. Our results show that the techniques can lead to substantial performance improvement.
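As an illustration of the second class of transformations described above (a schematic sketch, not taken from the paper; when/2 is used as a representative delay primitive, though the concrete delay-declaration syntax varies across systems):

    % Original clause: test/2 is delayed until X is bound.
    r(X, Y) :-
        when(nonvar(X), test(X, Z)),
        produce(X),
        use(Z, Y).

    % Transformed clause: if analysis shows that produce/1 always binds X,
    % the delayed literal can be moved after produce/1 and its delay
    % condition dropped, avoiding the suspension machinery entirely.
    r_transformed(X, Y) :-
        produce(X),
        test(X, Z),
        use(Z, Y).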
Abstract:
The concept of independence has recently been generalized to the constraint logic programming (CLP) paradigm. Also, several abstract domains specifically designed for CLP languages, and whose information can be used to detect the generalized independence conditions, have recently been defined. As a result, we are now in a position where automatic parallelization of CLP programs is feasible. In this paper we study the task of automatically parallelizing CLP programs based on such analyses, by transforming them into explicitly concurrent programs in our parallel CC platform (CIAO) as well as into AKL. We describe the analysis and transformation process and study its efficiency, accuracy, and effectiveness in program parallelization. The information gathered by the analyzers is evaluated not only in terms of its accuracy, i.e., its ability to determine the actual dependencies among the program variables, but also in terms of its effectiveness, measured as code reduction in the resulting parallelized programs. Given that only a few abstract domains have been defined for CLP so far, and that none of them were specifically designed for dependency detection, the aim of the evaluation is not only to assess the effectiveness of the available domains, but also to study what additional information it would be desirable to infer, and what domains would be appropriate for further improving the parallelization process.
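For illustration, the kind of transformation targeted here turns sequential conjunctions into parallel ones when independence is established by analysis; a minimal sketch, assuming the &/2 parallel conjunction operator of &-Prolog/Ciao (loading of the corresponding parallelism package is system-specific and omitted):

    % Sequential clause.
    q(X, Y, Z) :- p(X), r(Y), combine(X, Y, Z).

    % If the analysis proves that p(X) and r(Y) are independent (they share
    % no variables and the constraint store does not link X and Y), the
    % conjunction can be rewritten so that both goals run in parallel:
    q_par(X, Y, Z) :- p(X) & r(Y), combine(X, Y, Z).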
Abstract:
The objective of this thesis is to model processes from nature, such as evolution and co-evolution, and to propose techniques that ensure that these learning processes actually take place and are useful for solving complex problems such as the game of Go. Go is an ancient and very complex game with simple rules which is still a challenge for Artificial Intelligence. This dissertation covers several approaches that have been applied to this problem and proposes solving it using competitive and cooperative co-evolutionary learning methods, together with other techniques proposed by the author. To study, implement, and evaluate these methods, several neural network structures and a freely available framework were used, and many programs were coded. The proposed techniques were implemented by the author, many experiments were performed to find the best configuration ensuring that co-evolution progresses, and the results are discussed. In co-evolutionary learning processes, certain pathologies can be observed which may hinder co-evolutionary progress. This dissertation introduces techniques to address pathologies such as loss of gradient, cycling dynamics, and forgetting. According to some authors, one way to overcome these co-evolution pathologies is to introduce more diversity into the evolving populations. This thesis proposes techniques to introduce more diversity, as well as diversity measures for neural network structures that allow diversity to be monitored during co-evolution. The evolved genotype diversity is analyzed in terms of its impact on the global fitness of the evolved strategies and their generalization. Additionally, a memory mechanism is introduced into the neural network structures to reinforce some strategies in the genes of the evolved neurons, so that good strategies, once learned, are not forgotten. This dissertation also reviews work by other authors in which cooperative and competitive co-evolution has been applied. The Go board size used in this thesis is 9x9, but the approach can easily be scaled to larger boards. The author believes that the programs coded and the techniques introduced in this dissertation can be applied to other domains.
Abstract:
End-user development (EUD) is much hyped, and its impact has outstripped even the most optimistic forecasts. Even so, the vision of end users programming their own solutions has not yet materialized. This will continue to be so unless we, in both industry and the research community, set ourselves the ambitious challenge of devising, end to end, an end-user application development model for developing a new age of EUD tools. We have embarked on this venture, and this paper presents the main insights and outcomes of our research and development efforts as part of a number of successful EU research projects. Our proposal aims not only to reshape software engineering to meet the needs of EUD but also to refashion its components as solution building blocks instead of programs and software developments. This way, end users will really be empowered to build solutions based on artefacts akin to their expertise and understanding of ideal solutions.
Abstract:
Autonomous mobile robots and remotely operated mobile robots are currently used successfully in very diverse scenarios, such as home cleaning, movement of goods in warehouses, or space exploration. However, it is difficult to ensure the absence of defects in the programs controlling these devices, as happens in most other software domains. There exist different measures of the quality of a system when performing the functions for which it was designed, reliability being one of them. For most physical systems, reliability degrades as the system ages, generally due to wear. In software systems this does not usually happen, since defects are typically not acquired over time but introduced during development. If we focus on the coding stage of the software development process, we can consider a study to determine the reliability of different, equally valid algorithms with respect to the defects that programmers may introduce. Such a basic study may have several applications, such as choosing the algorithm least sensitive to programming defects for the development of a critical system, or establishing more demanding verification and validation procedures when an algorithm with high sensitivity to defects must be used. In this thesis, we studied the influence of certain types of software defects on the reliability of three multivariable speed controllers (PID, Fuzzy, and LQR) acting on a specific mobile robot. The hypothesis is that the controllers studied offer different reliability when affected by similar defect patterns, and it has been confirmed by the results. From the viewpoint of experimental planning, we followed these steps. First, we conducted the tests needed to determine whether controllers of the same family (PID, Fuzzy, or LQR) offered similar reliability under the same experimental conditions. Once this was confirmed, a class representative was chosen at random within each controller family to perform a more comprehensive test set, with the purpose of obtaining data to compare more thoroughly the reliability of the controllers under study. The impossibility of performing a large number of tests with a real robot, and the need to prevent damage to a device that generally has a significant cost, led us to build a multicomputer simulator of the robot. This simulator has been used both to obtain well-tuned controllers and to carry out the tests required for the reliability experiment.
Abstract:
This paper describes a framework to combine tabling evaluation and constraint logic programming (TCLP). While this combination has been studied previously from a theoretical point of view and some implementations exist, they either suffer from a lack of efficiency, flexibility, or generality, or have inherent limitations with respect to the programs they can execute to completion (either with success or failure). Our framework addresses these issues directly, including the ability to check for answer/call entailment, which allows it to terminate in more cases than other approaches. The proposed framework is experimentally compared with existing solutions in order to provide evidence of the mentioned advantages.
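As an illustration of the class of programs such a framework targets (a generic sketch, not from the paper; the tabling directive shown is the plain one, and constraint-aware tabling declarations are system-specific):

    :- use_module(library(clpfd)).
    :- table path/3.   % assumed: in a TCLP system the table must also store the
                       % projected constraint store and use answer/call entailment

    % Distance-constrained reachability over a cyclic graph: plain SLD
    % resolution loops on the left recursion, and without entailment
    % checking, ever-growing distance answers prevent completion.
    path(X, X, D) :- D #= 0.
    path(X, Y, D) :- path(X, Z, D1), edge(Z, Y, D2), D #= D1 + D2.

    edge(a, b, 1).
    edge(b, c, 2).
    edge(c, a, 1).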
Abstract:
The aim of this research project is to compare two mathematical techniques for polynomial approximation: approximation under the least-squares criterion and uniform (minimax) approximation. Both the current copper market, with its fluctuations over time, and the different mathematical models and computer programs available are described. Matlab was selected as the software tool, since its mathematical library is very extensive and widely used and its programming language is powerful enough to develop the programs required. Different approximating polynomials have been obtained over a sample (historical series) recording the variation of the copper price over recent years. The complete historical series and two significant sections of it have been analyzed. The results obtained include values of interest for other projects.
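For reference, the two approximation criteria being compared can be stated as follows (standard definitions, not results of this project): given observed prices (x_i, y_i), i = 1, ..., m, and polynomials p of degree at most n,

    \min_{p}\ \sum_{i=1}^{m} \bigl(p(x_i) - y_i\bigr)^2              (least squares)
    \min_{p}\ \max_{1 \le i \le m} \bigl|\, p(x_i) - y_i \,\bigr|    (uniform / minimax)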
Abstract:
Testing is nowadays the most widely used technique to validate software and assess its quality. It is integrated into all practical software development methodologies and plays a crucial role in the success of any software project. From the smallest units of code to the most complex components, their integration into a software system, and later deployment, all pieces of a software product must be tested thoroughly before the product can be released. The main limitation of software testing is that it remains a mostly manual task, representing a large fraction of the total development cost. In this scenario, test automation is paramount to alleviate such high costs. Test case generation (TCG) is the process of automatically generating test inputs that achieve high coverage of the system under test. Among a wide variety of approaches to TCG, this thesis focuses on structural (white-box) TCG, where one of the most successful enabling techniques is symbolic execution. In symbolic execution, the program under test is executed with its input arguments being symbolic expressions rather than concrete values. This thesis relies on a previously developed constraint-based TCG framework for imperative object-oriented programs (e.g., Java), in which the imperative program under test is first translated into an equivalent constraint logic program, and the translated program is then symbolically executed by relying on the standard evaluation mechanisms of Constraint Logic Programming (CLP), extended with special treatment for dynamically allocated data structures. Improving the scalability and efficiency of symbolic execution constitutes a major challenge. It is well known that symbolic execution quickly becomes impractical due to the large number of paths that must be explored and the size of the constraints that must be handled. Moreover, symbolic-execution-based TCG tends to produce an unnecessarily large number of test cases when applied to medium-sized or large programs. The contributions of this dissertation can be summarized as follows. (1) A compositional approach to CLP-based TCG is developed which overcomes inter-procedural path explosion by separately analyzing each component (method) of the program under test, storing the results as method summaries and incrementally reusing them to obtain whole-program results. A similar compositional strategy relying on program specialization has also been developed for the state-of-the-art symbolic execution tool Symbolic PathFinder (SPF). (2) Resource-driven TCG is proposed as a methodology that uses resource consumption information to drive symbolic execution towards those parts of the program under test that comply with a user-provided resource policy, avoiding the exploration of those parts that violate such a policy. (3) A generic methodology is proposed to guide symbolic execution towards the most interesting parts of a program, using abstractions as oracles to steer it through those parts of the program under test that interest the programmer or tester most. (4) A new heap-constraint solver is proposed, which efficiently handles heap-related constraints and aliasing of references during symbolic execution and greatly outperforms the standard state-of-the-art technique known as lazy initialization. (5) All the techniques above have been implemented in the PET system (and some of them in the SPF tool). Experimental evaluation has confirmed that they considerably help towards more scalable and efficient symbolic execution and TCG.
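To illustrate the translation step underlying this kind of framework (a hand-written sketch, not the actual translation produced by the PET system; clpfd constraints merely stand in for symbolic path conditions):

    :- use_module(library(clpfd)).

    % Imperative source:  int abs(int x) { if (x >= 0) return x; else return -x; }
    % One clause per execution path; calling abs_clp(X, R) with X unbound
    % symbolically enumerates the path constraints (X #>= 0 and X #< 0),
    % from which concrete test inputs such as X = 0 and X = -1 can be derived.
    abs_clp(X, R) :- X #>= 0, R #= X.
    abs_clp(X, R) :- X #< 0,  R #= -X.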
Abstract:
The verified security methodology is an emerging approach to building high-assurance proofs about security properties of computer systems. Computer systems are modeled as probabilistic programs, and one relies on rigorous program semantics techniques to prove that they comply with a given security goal. In particular, the methodology advocates the use of interactive theorem provers or automated provers to build fully formal, machine-checked versions of these security proofs. Verified security has proved successful in modeling and reasoning about several standard security notions in the area of cryptography. However, it has fallen short of covering an important class of approximate, quantitative security notions. The distinguishing characteristic of this class of security notions is that they are stated as a similarity condition between the output distributions of two probabilistic programs, and this similarity is quantified using some notion of distance between probability distributions. This class comprises prominent security notions from multiple areas such as private data analysis, information flow analysis, and cryptography. These include, for instance, indifferentiability, which enables securely replacing an idealized component of a system with a concrete implementation, and differential privacy, a notion of privacy-preserving data mining that has received a great deal of attention in recent years. The lack of rigorous techniques for verifying these properties is thus an important problem that needs to be addressed. In this dissertation we introduce several quantitative program logics to reason about this class of security notions. Our main theoretical contribution is, in particular, a quantitative variant of a full-fledged relational Hoare logic for probabilistic programs. The soundness of these logics is fully formalized in the Coq proof assistant, and tool support is also available through an extension of CertiCrypt, a framework to verify cryptographic proofs in Coq. We validate the applicability of our approach by building fully machine-checked proofs for several systems that were out of the reach of the verified security methodology. These comprise, among others, a construction to build safe hash functions into elliptic curves and differentially private algorithms for several combinatorial optimization problems from the recent literature.
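For reference, the differential privacy guarantee mentioned above is an instance of such a distance-quantified similarity condition: the standard (epsilon, delta) definition (quoted here only for illustration) requires, for a randomized mechanism M, every pair of adjacent databases D and D', and every set S of outputs:

    \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta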
Abstract:
Background: Gray-scale images make up the bulk of data in biomedical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command-line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only a few tools exist that provide this kind of processing interface; they are usually quite task-specific, and they do not provide a clear path when one wants to shape a new command-line tool from a prototype shell script. Results: The proposed framework, MIA, provides a combination of command-line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command-line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code. Conclusion: In this article, we describe the general design of MIA, a general-purpose framework for gray-scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command-line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
Abstract:
We present a novel general resource analysis for logic programs based on sized types. Sized types are representations that incorporate structural (shape) information and allow expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth. They also allow relating the sizes of terms and subterms occurring at different argument positions in logic predicates. Using these sized types, the resource analysis can infer both lower and upper bounds on the resources used by all the procedures in a program as functions of input term (and subterm) sizes, overcoming limitations of existing analyses and enhancing their precision. Our new resource analysis has been developed within the abstract interpretation framework, as an extension of the sized types abstract domain, and has been integrated into the Ciao preprocessor, CiaoPP. The abstract domain operations are integrated with the setting up and solving of recurrence equations for inferring both size and resource usage functions. We show that the analysis is an improvement over the previous resource analysis present in CiaoPP and compares well in power to state-of-the-art systems.
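As an illustration of the kind of bounds such an analysis targets (a standard textbook example, not taken from the paper or from CiaoPP output), consider naive list reversal:

    % Naive reversal, with list concatenation as auxiliary predicate.
    nrev([], []).
    nrev([X|Xs], Rs) :- nrev(Xs, Ts), app(Ts, [X], Rs).

    app([], Ys, Ys).
    app([X|Xs], Ys, [X|Zs]) :- app(Xs, Ys, Zs).

    % For an input list of length n, a size/resource analysis of this kind
    % can infer that the output list also has length n and that the number
    % of resolution steps is exactly (n^2 + 3n + 2)/2, obtained by setting
    % up and solving the recurrences
    %   C_app(m)  = m + 1
    %   C_nrev(0) = 1,   C_nrev(n) = C_nrev(n-1) + C_app(n-1) + 1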
Abstract:
We present a novel analysis for relating the sizes of terms and subterms occurring at different argument positions in logic predicates. We extend and enrich the concept of sized type as a representation that incorporates structural (shape) information and allows expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth, for example, bounds on the length of lists of numbers together with bounds on the values of all of their elements. The analysis is developed using abstract interpretation, and the novel abstract operations are based on setting up and solving recurrence relations between sized types. It has been integrated, together with novel resource usage and cardinality analyses, into the abstract interpretation framework of the Ciao preprocessor, CiaoPP, in order to assess both the accuracy of the new size analysis and its usefulness in the resource usage estimation application. We show that the proposed sized types are a substantial improvement over the previous size analyses present in CiaoPP and also benefit the resource analysis considerably, allowing the inference of equal or better bounds than comparable state-of-the-art systems.
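Schematically, and only to illustrate the list-of-numbers example mentioned above (the concrete notation used in the paper differs), a sized type for a list Xs of numbers can simultaneously bound the list length and the element values, and relate sizes across argument positions, e.g. for app(Xs, Ys, Zs):

    2 \le \mathrm{length}(Xs) \le 10, \qquad 0 \le x \le 255 \ \text{ for every element } x \text{ of } Xs
    \mathrm{length}(Zs) = \mathrm{length}(Xs) + \mathrm{length}(Ys)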