97 results for 005 Computer programming, programs
Abstract:
End-user development (EUD) is much hyped, and its impact has outstripped even the most optimistic forecasts. Even so, the vision of end users programming their own solutions has not yet materialized. This will continue to be the case unless we, in both industry and the research community, set ourselves the ambitious challenge of devising, end to end, an end-user application development model for a new age of EUD tools. We have embarked on this venture, and this paper presents the main insights and outcomes of our research and development efforts as part of a number of successful EU research projects. Our proposal aims not only to reshape software engineering to meet the needs of EUD but also to refashion its components as solution building blocks instead of programs and software developments. In this way, end users will really be empowered to build solutions based on artefacts akin to their expertise and understanding of ideal solutions.
Abstract:
Autonomous mobile robots and remotely operated mobile robots are currently used successfully in very diverse scenarios, such as home cleaning, movement of goods in warehouses or space exploration. However, it is difficult to ensure the absence of defects in the programs controlling these devices, as happens in most computing sectors. There exist different measures of the quality of a system when performing the functions for which it was designed, reliability being one of them. For most physical systems, reliability degrades as the system ages, generally due to wear effects. In software systems this does not usually happen, since their defects are generally not acquired over time but introduced during development. If, within the software production process, we focus on the coding stage, we can consider a study that tries to determine the reliability of different algorithms, all valid for the same task, according to the defects that programmers may introduce. Such a basic study could have several applications, such as choosing the algorithm least sensitive to defects for the development of a critical system, or establishing more demanding verification and validation procedures when an algorithm with high sensitivity to defects must be used. This thesis studies the influence of certain types of software defects on the reliability of three multivariable speed controllers (PID, Fuzzy and LQR) acting on a specific mobile robot. The hypothesis is that the controllers under study offer different reliability when affected by similar defect patterns, and this has been confirmed by the results obtained. From the viewpoint of experimental planning, the necessary tests were first conducted to determine whether controllers of the same family (PID, Fuzzy or LQR) offered similar reliability under the same experimental conditions. Once this was confirmed, a class representative of each controller family was chosen at random and subjected to a more exhaustive battery of tests, in order to obtain data allowing a more complete comparison of the reliability of the controllers under study. The impossibility of performing a large number of tests with a real robot, together with the need to avoid damaging a device that usually has a significant cost, led us to build a multicomputer simulator of the robot. This simulator has been used both to obtain well-tuned controllers and to carry out the different runs required by the reliability experiment.
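The kind of experiment described above can be illustrated, in drastically reduced form, with a toy fault-injection run in Python: a discrete PID speed controller drives a simple first-order plant, a mutated copy of the same controller (here a sign flip in the integral term, a hypothetical defect chosen for illustration, not one of the thesis's defect patterns) drives the same plant, and each run is classified as pass or fail depending on whether the closed loop settles near the set point. This is only a sketch of the idea of measuring controller reliability under injected defects; it is not the multicomputer simulator, the robot model or the controllers used in the thesis.

    import numpy as np

    def pid_step(err, state, kp=2.0, ki=1.0, kd=0.1, dt=0.01, bug=False):
        """One step of a discrete PID controller; 'bug' injects a sign flip in the integral."""
        integ, prev = state
        integ += (-err if bug else err) * dt      # injected defect: wrong sign when bug=True
        deriv = (err - prev) / dt
        return kp * err + ki * integ + kd * deriv, (integ, err)

    def run(bug, setpoint=1.0, steps=2000, dt=0.01, tau=0.5):
        v, state = 0.0, (0.0, 0.0)                # speed and controller state
        for _ in range(steps):
            u, state = pid_step(setpoint - v, state, dt=dt, bug=bug)
            v += dt * (-v + u) / tau              # first-order plant: tau*dv/dt = -v + u
        return v

    for bug in (False, True):
        final = run(bug)
        verdict = "pass" if abs(final - 1.0) < 0.05 else "fail"   # reliability criterion
        print(f"bug={bug}: final speed {final:.3f} -> {verdict}")

In a full experiment one would repeat such runs over many defect patterns and controller families (PID, Fuzzy, LQR) and estimate reliability as the fraction of runs that meet the criterion.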
Abstract:
This paper describes a framework to combine tabling evaluation and constraint logic programming (TCLP). While this combination has been studied previously from a theoretical point of view and some implementations exist, they either suffer from a lack of efficiency, flexibility, or generality, or have inherent limitations with respect to the programs they can execute to completion (either with success or failure). Our framework addresses these issues directly, including the ability to check for answer/call entailment, which allows it to terminate in more cases than other approaches. The proposed framework is experimentally compared with existing solutions in order to provide evidence of the mentioned advantages.
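The answer management part of the entailment check mentioned above can be conveyed with a small sketch in which answers to a tabled call are constraints over a single variable, simplified here to plain integer intervals (an assumption for illustration; the paper's framework handles general constraint domains). A new answer is discarded when an already stored answer is at least as general (covers all of its solutions), and stored answers that the new one makes redundant are dropped; keeping only the most general answers is part of what lets tabled evaluation terminate in more cases.

    def entails(a, b):
        """Interval a = (lo, hi) entails b iff every solution of a is also a solution of b."""
        return b[0] <= a[0] and a[1] <= b[1]

    def add_answer(table, new):
        """Store 'new' unless an existing answer covers it; drop answers that 'new' covers."""
        if any(entails(new, old) for old in table):
            return table                              # new answer adds nothing, discard it
        return [old for old in table if not entails(old, new)] + [new]

    table = []
    for ans in [(0, 10), (2, 5), (0, 20), (15, 30)]:
        table = add_answer(table, ans)
        print("after", ans, "->", table)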
Abstract:
The aim of this research project is to compare two mathematical techniques for polynomial approximation: approximation under the least-squares criterion and uniform (minimax) approximation. Both the current copper market, with its fluctuations over time, and the various mathematical models and computer programs available are described. Matlab was selected as the software tool, since its mathematical library is very extensive and widely used, and its programming language is powerful enough to develop the programs required. Different approximating polynomials were obtained over a sample (a historical series) recording the variation of the copper price in recent years. The complete historical series and two significant segments of it were analysed. The results obtained include values of interest for other projects.
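Outside Matlab, the two approximation criteria being compared can be sketched in a few lines of Python (shown on synthetic data; the project itself uses the historical copper price series and Matlab). The least-squares fit minimizes the sum of squared residuals, while the uniform (minimax) fit is posed as the standard linear program that minimizes the maximum absolute residual.

    import numpy as np
    from scipy.optimize import linprog

    # Synthetic stand-in for a price series (the real project uses historical copper prices).
    x = np.linspace(0.0, 1.0, 60)
    y = 2.0 + 1.5 * x - 3.0 * x**2 + 0.8 * x**3 + 0.05 * np.sin(25 * x)

    deg = 3
    V = np.vander(x, deg + 1, increasing=True)        # columns 1, x, x^2, x^3

    # Least-squares fit: minimize the sum of squared residuals.
    c_ls, *_ = np.linalg.lstsq(V, y, rcond=None)

    # Minimax (uniform) fit: minimize t subject to |V c - y| <= t,
    # written as a linear program in the variables (c, t).
    n, m = V.shape
    A_ub = np.block([[ V, -np.ones((n, 1))],
                     [-V, -np.ones((n, 1))]])
    b_ub = np.concatenate([y, -y])
    cost = np.zeros(m + 1)
    cost[-1] = 1.0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * m + [(0, None)], method="highs")
    c_mm = res.x[:m]

    for name, c in [("least squares", c_ls), ("minimax", c_mm)]:
        r = V @ c - y
        print(f"{name:13s}  max|err| = {np.max(np.abs(r)):.4f}  rms = {np.sqrt(np.mean(r**2)):.4f}")

By construction, the least-squares polynomial attains an RMS error no larger than the minimax one, and the minimax polynomial attains a maximum error no larger than the least-squares one; that trade-off is precisely what the comparison in the project examines.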
Abstract:
Testing is nowadays the most widely used technique to validate software and assess its quality. It is integrated into all practical software development methodologies and plays a crucial role in the success of any software project. From the smallest units of code to the most complex components, their integration into a software system and later deployment, all pieces of a software product must be tested thoroughly before the product can be released to a production environment. The main limitation of software testing is that it remains a mostly manual task, representing a large fraction of the total development cost. In this scenario, test automation is paramount to alleviate such high costs. Test case generation (TCG) is the process of automatically generating test inputs that achieve high coverage of the system under test. Among a wide variety of approaches to TCG, this thesis focuses on structural (white-box) TCG, where one of the most successful enabling techniques is symbolic execution. In symbolic execution, the program under test is executed with symbolic expressions as input arguments rather than concrete values. This thesis relies on a previously developed general constraint-based TCG framework for imperative object-oriented programs (e.g., Java), in which the imperative program under test is first translated into an equivalent constraint logic program, and the translated program is then symbolically executed using the standard evaluation mechanisms of Constraint Logic Programming (CLP), extended with special operations for dynamically allocated data structures. Improving the scalability and efficiency of symbolic execution constitutes a major challenge. It is well known that symbolic execution quickly becomes impractical due to the large number of execution paths that must be explored and the size of the constraints that must be handled. Moreover, symbolic-execution-based TCG tends to produce an unnecessarily large number of test cases when applied to medium-sized or large programs. The contributions of this thesis can be summarized as follows. (1) A compositional approach to CLP-based TCG is developed which alleviates the inter-procedural path explosion problem by analyzing each component (method) of the program under test separately, storing the results as method summaries and incrementally reusing them to obtain whole-program results. A similar compositional strategy based on program specialization (partial evaluation) is also developed for the state-of-the-art symbolic execution tool Symbolic PathFinder (SPF). (2) Resource-driven TCG is proposed as a methodology that uses resource consumption information about the program under test to drive symbolic execution towards those parts of the program that comply with a user-provided resource policy, avoiding the exploration of the parts that violate such policy. (3) A generic methodology is proposed to guide symbolic execution towards the most interesting parts of a program, which uses abstractions as oracles to steer symbolic execution towards the parts of the program under test that satisfy the structural selection criteria of interest to the programmer/tester. (4) A new heap-constraint solver is proposed which efficiently handles constraints on the dynamically allocated global memory (heap) and aliasing of references during symbolic execution, and greatly outperforms the standard state-of-the-art technique known as lazy initialization. (5) All the techniques above have been implemented in the PET system (the compositional approach has also been implemented in the SPF tool). Experimental evaluation has confirmed that they considerably improve the scalability and efficiency of symbolic execution and test case generation.
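The core mechanism, symbolic inputs, one path condition per execution path, and a constraint solver that produces one concrete test input per feasible path, can be illustrated with a toy example using the z3 solver in Python. The three-path program and its hand-enumerated path conditions below are made up for illustration; this is neither PET, nor SPF, nor the CLP translation described in the thesis.

    from z3 import Int, Solver, And, sat

    x, y = Int("x"), Int("y")

    # Toy program under test:
    #   if x > y:
    #       if x + y > 10: return 1     # path A
    #       else:          return 2     # path B
    #   else:              return 3     # path C
    paths = {
        "A": And(x > y, x + y > 10),
        "B": And(x > y, x + y <= 10),
        "C": x <= y,
    }

    # One test case per feasible path condition: the essence of
    # symbolic-execution-based test case generation.
    for name, pc in paths.items():
        s = Solver()
        s.add(pc)
        if s.check() == sat:
            m = s.model()
            print(f"path {name}: x={m.eval(x, model_completion=True)}, "
                  f"y={m.eval(y, model_completion=True)}")
        else:
            print(f"path {name}: infeasible")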
Abstract:
La seguridad verificada es una metodologa para demostrar propiedades de seguridad de los sistemas informticos que se destaca por las altas garantas de correccin que provee. Los sistemas informticos se modelan como programas probabilsticos y para probar que verifican una determinada propiedad de seguridad se utilizan tcnicas rigurosas basadas en modelos matemticos de los programas. En particular, la seguridad verificada promueve el uso de demostradores de teoremas interactivos o automticos para construir demostraciones completamente formales cuya correccin es certificada mecnicamente (por ordenador). La seguridad verificada demostr ser una tcnica muy efectiva para razonar sobre diversas nociones de seguridad en el rea de criptografa. Sin embargo, no ha podido cubrir un importante conjunto de nociones de seguridad aproximada. La caracterstica distintiva de estas nociones de seguridad es que se expresan como una condicin de similitud entre las distribuciones de salida de dos programas probabilsticos y esta similitud se cuantifica usando alguna nocin de distancia entre distribuciones de probabilidad. Este conjunto incluye destacadas nociones de seguridad de diversas reas como la minera de datos privados, el anlisis de flujo de informacin y la criptografa. Ejemplos representativos de estas nociones de seguridad son la indiferenciabilidad, que permite reemplazar un componente idealizado de un sistema por una implementacin concreta (sin alterar significativamente sus propiedades de seguridad), o la privacidad diferencial, una nocin de privacidad que ha recibido mucha atencin en los ltimos aos y tiene como objetivo evitar la publicacin datos confidenciales en la minera de datos. La falta de tcnicas rigurosas que permitan verificar formalmente este tipo de propiedades constituye un notable problema abierto que tiene que ser abordado. En esta tesis introducimos varias lgicas de programa quantitativas para razonar sobre esta clase de propiedades de seguridad. Nuestra principal contribucin terica es una versin quantitativa de una lgica de Hoare relacional para programas probabilsticos. Las pruebas de correcin de estas lgicas son completamente formalizadas en el asistente de pruebas Coq. Desarrollamos, adems, una herramienta para razonar sobre propiedades de programas a travs de estas lgicas extendiendo CertiCrypt, un framework para verificar pruebas de criptografa en Coq. Confirmamos la efectividad y aplicabilidad de nuestra metodologa construyendo pruebas certificadas por ordendor de varios sistemas cuyo anlisis estaba fuera del alcance de la seguridad verificada. Esto incluye, entre otros, una meta-construccin para disear funciones de hash seguras sobre curvas elpticas y algoritmos diferencialmente privados para varios problemas de optimizacin combinatoria de la literatura reciente. ABSTRACT The verified security methodology is an emerging approach to build high assurance proofs about security properties of computer systems. Computer systems are modeled as probabilistic programs and one relies on rigorous program semantics techniques to prove that they comply with a given security goal. In particular, it advocates the use of interactive theorem provers or automated provers to build fully formal machine-checked versions of these security proofs. The verified security methodology has proved successful in modeling and reasoning about several standard security notions in the area of cryptography. However, it has fallen short of covering an important class of approximate, quantitative security notions. 
The distinguishing characteristic of this class of security notions is that they are stated as a similarity condition between the output distributions of two probabilistic programs, and this similarity is quantified using some notion of distance between probability distributions. This class comprises prominent security notions from multiple areas such as private data analysis, information flow analysis and cryptography. These include, for instance, indifferentiability, which enables securely replacing an idealized component of system with a concrete implementation, and differential privacy, a notion of privacy-preserving data mining that has received a great deal of attention in the last few years. The lack of rigorous techniques for verifying these properties is thus an important problem that needs to be addressed. In this dissertation we introduce several quantitative program logics to reason about this class of security notions. Our main theoretical contribution is, in particular, a quantitative variant of a full-fledged relational Hoare logic for probabilistic programs. The soundness of these logics is fully formalized in the Coq proof-assistant and tool support is also available through an extension of CertiCrypt, a framework to verify cryptographic proofs in Coq. We validate the applicability of our approach by building fully machine-checked proofs for several systems that were out of the reach of the verified security methodology. These comprise, among others, a construction to build safe hash functions into elliptic curves and differentially private algorithms for several combinatorial optimization problems from the recent literature.
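To make one of the notions mentioned above concrete, the following unverified Python sketch (on a made-up toy dataset) shows the standard Laplace mechanism for a counting query, which satisfies epsilon-differential privacy because a counting query has sensitivity 1 and the noise scale is 1/epsilon. It only illustrates the kind of probabilistic program whose guarantee the dissertation's quantitative logics are designed to certify; the sketch itself carries no formal proof.

    import numpy as np

    def laplace_count(data, predicate, epsilon, rng=np.random.default_rng(0)):
        """epsilon-differentially private count via the Laplace mechanism.
        A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
        true_count = sum(1 for d in data if predicate(d))
        return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

    ages = [23, 35, 41, 29, 52, 61, 38]              # illustrative data only
    for eps in (0.1, 1.0, 10.0):
        noisy = laplace_count(ages, lambda a: a >= 40, epsilon=eps)
        print(f"epsilon={eps:>4}: noisy count of people aged 40+ = {noisy:.2f}")

Smaller values of epsilon give stronger privacy and noisier answers, which is exactly the quantitative trade-off such logics have to track.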
Abstract:
Background: Gray-scale images make up the bulk of the data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers with little or no background in software development because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is to develop new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task-specific and do not provide a clear path when one wants to shape a new command line tool from a prototype shell script. Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the need to touch or recompile existing code. Conclusion: In this article we describe the general design of MIA, a general-purpose framework for gray-scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data arising in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
Abstract:
We present a novel general resource analysis for logic programs based on sized types. Sized types are representations that incorporate structural (shape) information and allow expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth. They also allow relating the sizes of terms and subterms occurring at different argument positions in logic predicates. Using these sized types, the resource analysis can infer both lower and upper bounds on the resources used by all the procedures in a program as functions on input term (and subterm) sizes, overcoming limitations of existing analyses and enhancing their precision. Our new resource analysis has been developed within the abstract interpretation framework, as an extension of the sized types abstract domain, and has been integrated into the Ciao preprocessor, CiaoPP. The abstract domain operations are integrated with the setting up and solving of recurrence equations for inferring both size and resource usage functions. We show that the analysis is an improvement over the previous resource analysis present in CiaoPP and compares well in power to state-of-the-art systems.
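As a hand-made illustration of the recurrence equations such an analysis sets up and solves (not actual CiaoPP output), consider the classic list concatenation predicate append/3: taking n as the length of the first argument and m as the length of the second, both the size of the output argument and the number of resolution steps obey first-order recurrences with immediate closed forms, and in this example the inferred lower and upper bounds coincide.

    S(0, m) = m, \qquad S(n, m) = 1 + S(n-1, m) \;\Rightarrow\; S(n, m) = n + m
    C(0) = 1,    \qquad C(n)    = 1 + C(n-1)    \;\Rightarrow\; C(n) = n + 1

Here S is the size (length) of the third argument and C counts resolution steps, the simplest example of a resource.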
Abstract:
We present a novel analysis for relating the sizes of terms and subterms occurring at different argument positions in logic predicates. We extend and enrich the concept of sized type as a representation that incorporates structural (shape) information and allows expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth; for example, expressing bounds on the length of lists of numbers together with bounds on the values of all of their elements. The analysis is developed using abstract interpretation, and the novel abstract operations are based on setting up and solving recurrence relations between sized types. It has been integrated, together with novel resource usage and cardinality analyses, into the abstract interpretation framework in the Ciao preprocessor, CiaoPP, in order to assess both the accuracy of the new size analysis and its usefulness in the resource usage estimation application. We show that the proposed sized types are a substantial improvement over the previous size analyses present in CiaoPP and also benefit the resource analysis considerably, allowing the inference of equal or better bounds than comparable state-of-the-art systems.
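The list example mentioned above can be written down in an illustrative notation (loosely inspired by, but not copied from, the paper's sized types): a size interval is attached to each structural position, one bounding the list length and one bounding the numeric elements.

    % a list whose length lies in [2, 10] and whose numeric elements lie in [0, 255]
    list^{(2,10)}( num^{(0,255)} )
    % e.g. [0, 17, 255] and [3, 4] inhabit this sized type,
    % while [5] (too short) and [1, 2, 300] (element out of range) do not.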
Abstract:
The crop simulation model AquaCrop, recently developed by FAO, can be used for a wide range of purposes. However, in its present form, its use over large areas or for applications that require a large number of simulation runs (e.g., long-term analyses) is not practical without developing software to facilitate such applications. Two tools for managing the inputs and outputs of AquaCrop, named AquaData and AquaGIS, have been developed for this purpose and are presented here. Both software utilities have been programmed in Delphi v. 5; in addition, AquaGIS requires the Geographic Information System (GIS) programming tool MapObjects. These utilities allow the efficient management of input and output files, along with a GIS module for spatial analysis and spatial visualization of the results, facilitating knowledge dissemination. A sample application of the utilities is given here: an AquaCrop simulation analysis of the impact of climate change on wheat yield in Southern Spain, which requires extensive input data preparation and output processing. The use of AquaCrop without the two utilities would have required approximately 1000 h of work, while the use of AquaData and AquaGIS reduced that time by more than 99%. Furthermore, the use of GIS made it possible to perform a spatial analysis of the results, thus providing a new option to extend the use of the AquaCrop model to scales requiring spatial and temporal analyses.
Abstract:
Programming languages are the medium programmers use to tell computers what we want them to do. From assembly language, which translates one by one the instructions a computer interprets, up to high-level languages, the goal has been to develop languages closer to the way humans think and express themselves. Logic programming languages such as Prolog in turn use the language of first-order logic, so that the programmer can state the premises of the problem to be solved without worrying about how the problem will be solved. Solving the problem amounts to finding a deduction of the goal from the premises, and corresponds to what we understand as the execution of a program. Ciao is an implementation of Prolog (http://www.ciao-lang.org) and uses SLD resolution, which traverses the search trees depth-first; this can lead to the execution of an infinite search branch (an infinite loop) without ever producing answers. Since Ciao is a modular system, it allows extensions that implement alternative resolution strategies, such as tabling (OLDT). Tabling is an alternative method based on memorizing the calls made and their answers, so that calls are not repeated and stored answers can be reused without recomputation. Some programs that fall into an infinite loop under SLD give all their answers and terminate thanks to tabling. The tabling package is an implementation of tabling based on the CHAT algorithm. This implementation is a beta version that does not include a memory manager. Memory management in the tabling package is very important, since resolution with tabling reduces computation time (by not repeating calls) at the price of increased memory requirements (to store the calls and the answers). The objective of this work is therefore to implement a memory management mechanism in Ciao with the tabling package loaded. To this end, the following have been implemented: an error-capture mechanism that detects when the computer runs out of memory and triggers the reinitialization of the system; a procedure that adjusts the pointers of the tabling package that point into the WAM after some of the WAM memory areas have been reallocated; and a memory manager for the tabling package that detects when its memory areas need to be enlarged, requests more memory and adjusts the pointers. To help readers unfamiliar with the topic, we describe the data that Ciao and the tabling package store in the dynamic memory areas we want to manage. The test cases developed to evaluate the implementation of the memory manager show that having a dynamic memory manager allows programs to run in a larger number of cases, and that the memory management policy affects program execution speed.
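The effect described above, remembering calls and their answers so that repeated calls reuse stored answers and cyclic computations are not re-entered, can be approximated in miniature by a Python sketch of reachability over a cyclic graph. It is a one-pass simplification for illustration only: real tabling (and the CHAT implementation in Ciao) computes a fixpoint and manages suspensions, answer completion and WAM memory areas, none of which is modelled here.

    edges = {"a": ["b"], "b": ["a", "c"], "c": []}      # cyclic graph: a <-> b

    def reach_plain(x, y):
        """Depth-first, SLD-like search: loops forever on the a <-> b cycle (not called here)."""
        return x == y or any(reach_plain(z, y) for z in edges[x])

    answers = {}                                        # the "table": call pattern -> answer

    def reach_tabled(x, y):
        """Each call is tabled; a repeated (in-progress) call reuses its stored answer."""
        if (x, y) in answers:
            return answers[(x, y)]
        answers[(x, y)] = False                         # provisional answer for the open call
        answers[(x, y)] = x == y or any(reach_tabled(z, y) for z in edges[x])
        return answers[(x, y)]

    print(reach_tabled("a", "c"))                       # True, and it terminates despite the cycle

For this query the one-pass approximation already yields the correct answer; in general, tabled evaluation must revisit provisional answers until a fixpoint is reached, which is precisely why the table (and hence memory management for it) matters.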
Abstract:
Modern computer technology has radically changed technological research in every field. The process generally used before consisted of developing analog prototypes, creating multiple versions until the appropriate result was reached. This is an expensive process, both economically and in terms of workload. For this reason, current research takes advantage of new technologies to achieve the final objective through simulation. Thanks to the development of simulation software for different areas, the pace of technological progress has increased and the cost of research and development projects has decreased. Simulation therefore allows prototypes to be developed beforehand, in simulated form and at a much lower cost, in order to reach a final product that is then implemented in its corresponding field. This process is applied not only to products with circuitry but also to programmed products. Many current programs work with specific algorithms whose behaviour must be verified beforehand, so that the effort can then be focused on coding them. This is precisely the aim of this project: to simulate digital signal processing algorithms before the final program is coded. Audio systems are based entirely on signal processing algorithms, both analog and digital, the latter replacing the analog world by means of processors and computers. These algorithms are the most complex part of the system, and the creation of new algorithms is the basis for achieving novel and functional audio systems. It should be noted that audio system development teams have a large number of members with different roles, separating the duties of programmers and audio signal engineers. For this reason, the simulation of these algorithms is fundamental when developing new and more powerful audio systems. Matlab is one of the fundamental tools for computer simulation, offering utilities for developing projects in different areas. However, the Simulink software, a tool specialized in high-level simulation that reduces the difficulty of programming in Matlab and allows models to be developed faster, is increasingly used. Simulink provides full functionality for developing digital audio processing algorithms. The objective of this project is therefore to study the capabilities of Simulink for generating functional audio systems. In addition, this project aims to go deeper into digital audio signal processing methods, producing in the end a package of audio systems compatible with current audio editing software.
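A representative of the kind of audio processing algorithm one would prototype before coding the final program, shown here as a short Python/NumPy sketch rather than a Simulink model: a feedback delay (echo) effect applied to a synthetic tone. The parameter values and the test signal are illustrative only and are not taken from the project.

    import numpy as np

    def echo(x, fs, delay_s=0.25, feedback=0.4, mix=0.5):
        """Feedback delay line: d[n] = x[n-D] + feedback*d[n-D],  y[n] = x[n] + mix*d[n]."""
        D = int(delay_s * fs)
        d = np.zeros(len(x))
        y = np.zeros(len(x))
        for n in range(len(x)):
            d[n] = (x[n - D] + feedback * d[n - D]) if n >= D else 0.0
            y[n] = x[n] + mix * d[n]
        return y

    fs = 8000                                   # sample rate in Hz (illustrative)
    t = np.arange(0, 1.0, 1.0 / fs)
    tone = 0.5 * np.sin(2 * np.pi * 440 * t)    # 440 Hz test tone
    out = echo(tone, fs)
    print("peak in:", np.max(np.abs(tone)), "peak out:", round(np.max(np.abs(out)), 3))

Prototyping at this level lets the signal engineer check the algorithm's behaviour (delay time, feedback stability, output level) before any effort is spent on the production implementation.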
Abstract:
This paper is a preliminary version of Chapter 3 of a State-of-the-Art Report by the IASS Working Group 5: Concrete Shell Roofs. The intention of this chapter is to set forth, for those who intend to design concrete shell roofs, information and advice about the selection, verification and utilization of commercial computer tools for analysis and design tasks. The computer analysis and design steps for a concrete shell roof are described. Advice follows on the aspects to be considered in the application of commercial finite element (FE) computer programs to concrete shell analysis, starting with recommendations on how novices can gain confidence and competence in the use of software. To establish vocabulary and provide background references, brief surveys are presented of, first, element types and formulations for shells and, second, challenges presented by advanced analyses of shells. The final section of the chapter indicates what capabilities to seek in selecting commercial FE software for the analysis and design of concrete shell roofs. Brief concluding remarks summarize advice regarding judicious use of computer analysis in design practice.
Abstract:
We present a novel general resource analysis for logic programs based on sized types. Sized types are representations that incorporate structural (shape) information and allow expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth. They also allow relating the sizes of terms and subterms occurring at different argument positions in logic predicates. Using these sized types, the resource analysis can infer both lower and upper bounds on the resources used by all the procedures in a program as functions on input term (and subterm) sizes, overcoming limitations of existing resource analyses and enhancing their precision. Our new resource analysis has been developed within the abstract interpretation framework, as an extension of the sized types abstract domain, and has been integrated into the Ciao preprocessor, CiaoPP. The abstract domain operations are integrated with the setting up and solving of recurrence equations for inferring both size and resource usage functions. We show that the analysis is an improvement over the previous resource analysis present in CiaoPP and compares well in power to state-of-the-art systems.
Abstract:
A method for formulating and algorithmically solving the equations of finite element problems is presented. The method starts with a parametric partition of the domain into juxtaposed strips that permits sweeping the whole region by a sequential addition (or removal) of adjacent strips. The solution of the difference equations constructed over that grid proceeds along with the addition or removal of strips in a manner resembling the transfer matrix approach, except that different rules of composition, which lead to numerically stable algorithms, are used for the stiffness matrices of the strips. Dynamic programming and invariant imbedding ideas underlie the construction of such rules of composition. Among other features of interest, the present methodology gives the analyst some control over the type and quantity of data to be computed. In particular, the one-sweep method presented in Section 9, with no apparent counterpart in standard methods, appears to be very efficient insofar as time and storage are concerned. The paper ends with the presentation of a numerical example.
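A generic sketch of the kind of numerically stable rule of composition alluded to above, given here as plain static condensation over interface degrees of freedom and not necessarily the paper's exact rule: after a strip is added, the accumulated stiffness equations are partitioned into interior (i) unknowns, which will not be touched again, and front (f) unknowns shared with the next strip, and the interior block is condensed out.

    K^* = K_{ff} - K_{fi} K_{ii}^{-1} K_{if}, \qquad f^* = f_f - K_{fi} K_{ii}^{-1} f_i

    u_i = K_{ii}^{-1} ( f_i - K_{if} u_f )

The condensed pair (K^*, f^*) plays the role of the quantity carried from strip to strip during the sweep, and the interior displacements u_i are recovered once the front values u_f are known.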