948 results for generic finiteness
Abstract:
The relationship between abstract interpretation and partial deduction has received considerable attention and (partial) integrations have been proposed starting from both the partial deduction and abstract interpretation perspectives. In this work we present what we argue is the first fully described generic algorithm for efficient and precise integration of abstract interpretation and partial deduction. Taking as a starting point state-of-the-art algorithms for context-sensitive, polyvariant abstract interpretation and (abstract) partial deduction, we present an algorithm which combines the best of both worlds. Key ingredients include the accurate success propagation inherent to abstract interpretation and the powerful program transformations achievable by partial deduction. In our algorithm, the calls which appear in the analysis graph are not analyzed w.r.t. the original definition of the procedure but w.r.t. specialized definitions of these procedures. Such specialized definitions are obtained by applying both unfolding and abstract executability. Our framework is parametric w.r.t. different control strategies and abstract domains. Different combinations of such parameters correspond to existing algorithms for program analysis and specialization. Simultaneously, our approach opens the door to the efficient computation of strictly more precise results than those achievable by each of the individual techniques. The algorithm is now one of the key components of the CiaoPP analysis and specialization system.
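As an illustration of the combination described above, the following Python sketch (illustrative only, not the CiaoPP algorithm) shows a worklist loop in which each call pattern is first specialized by unfolding and abstract executability and only then analyzed, so that abstract success information is computed over the specialized definitions; `unfold`, `abstract_exec` and `analyze_body` are assumed stubs standing for the corresponding control and domain operations.

```python
# Minimal sketch of the interplay described in the abstract: call patterns are
# specialized by unfolding before being analyzed, and abstract success
# information is recorded per call pattern. All names are illustrative stubs.
from collections import deque

def specialize_and_analyze(entries, program, unfold, abstract_exec, analyze_body):
    """entries: initial (pred, call_pattern) pairs.
    Returns {(pred, call_pattern): (specialized_def, success_pattern)}."""
    table = {}                      # answer table: call pattern -> result
    worklist = deque(entries)
    while worklist:
        pred, call = worklist.popleft()
        key = (pred, call)
        if key in table:
            continue
        # Partial-deduction side: specialize the definition for this call
        # by unfolding, then simplify it using abstract executability.
        spec_def = abstract_exec(unfold(program[pred], call), call)
        # Abstract-interpretation side: analyze the specialized definition,
        # obtaining its success pattern and the new call patterns it creates.
        success, new_calls = analyze_body(spec_def, call, table)
        table[key] = (spec_def, success)
        worklist.extend(c for c in new_calls if c not in table)
    # A full algorithm would also track dependencies and re-analyze callers
    # whenever a callee's success pattern improves; omitted here for brevity.
    return table
```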
Abstract:
Ciao Prolog incorporates a module system which allows separate compilation and sensible creation of standalone executables. We describe some of the main aspects of the Ciao modular compiler, ciaoc, which takes advantage of the characteristics of the Ciao Prolog module system to automatically perform separate and incremental compilation and efficiently build small, standalone executables with competitive run-time performance. ciaoc can also detect statically a larger number of programming errors. We also present a generic code processing library for handling modular programs, which provides an important part of the functionality of ciaoc. This library allows the development of program analysis and transformation tools in a way that is to some extent orthogonal to the details of module system design, and has been used in the implementation of ciaoc and other Ciao system tools. We also describe the different types of executables which can be generated by the Ciao compiler, which offer different tradeoffs between executable size, startup time, and portability, depending, among other factors, on the linking regime used (static, dynamic, lazy, etc.). Finally, we provide experimental data which illustrate these tradeoffs.
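As a rough illustration of the incremental compilation mentioned above, the sketch below shows a timestamp check of the kind a modular compiler can use to decide whether a module must be recompiled; the .po/.itf extensions follow Ciao's object/interface file convention, but the function itself is an assumption for illustration, not ciaoc's code.

```python
# Illustrative sketch (not ciaoc's actual code) of a timestamp-based check for
# separate, incremental compilation: a module is recompiled only when its
# source, or the interface of a module it imports, is newer than its object.
import os

def needs_recompile(src, imported_itfs):
    po = os.path.splitext(src)[0] + ".po"      # assumed object-file name
    if not os.path.exists(po):
        return True
    po_time = os.path.getmtime(po)
    if os.path.getmtime(src) > po_time:
        return True                            # the module itself changed
    # a change in an imported module's interface also forces recompilation
    return any(os.path.exists(itf) and os.path.getmtime(itf) > po_time
               for itf in imported_itfs)
```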
Abstract:
Information generated by abstract interpreters has long been used to perform program specialization. Additionally, if the abstract interpreter generates a multivariant analysis, it is also possible to perform multiple specialization. Information about values of variables is propagated by simulating program execution and performing fixpoint computations for recursive calls. In contrast, traditional partial evaluators (mainly) use unfolding for both propagating values of variables and transforming the program. It is known that abstract interpretation is a better technique for propagating success values than unfolding. However, the program transformations induced by unfolding may lead to important optimizations which are not directly achievable in the existing frameworks for multiple specialization based on abstract interpretation. The aim of this work is to devise a specialization framework which integrates the better information propagation of abstract interpretation with the powerful program transformations performed by partial evaluation, and which can be implemented via small modifications to existing generic abstract interpreters. With this aim, we will relate top-down abstract interpretation with traditional concepts in partial evaluation and sketch how the sophisticated techniques developed for controlling partial evaluation can be adapted to the proposed specialization framework. We conclude that there can be both practical and conceptual advantages in the proposed integration of partial evaluation and abstract interpretation.
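To make the contrast concrete, here is a toy Python example (not taken from the paper) of the kind of transformation that unfolding enables and that value propagation alone does not: specializing a recursive definition with respect to a statically known argument yields a residual, recursion-free definition.

```python
# Toy illustration of what unfolding buys beyond value propagation:
# specializing power(x, n) for a known exponent removes the recursion
# entirely, producing a residual straight-line definition.

def power(x, n):
    # the original, recursive definition
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    """Unfold power(x, n) for a known n, returning residual source code."""
    expr = " * ".join(["x"] * n) if n > 0 else "1"
    return f"def power_{n}(x):\n    return {expr}\n"

# specialize_power(3) yields:
#   def power_3(x):
#       return x * x * x
```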
Abstract:
Abstract is not available
Abstract:
We informally discuss several issues related to the parallel execution of logic programming systems and concurrent logic programming systems, and their generalization to constraint programming. We propose a new view of these systems, based on a particular definition of parallelism. We argue that, under this view, a large number of the actual systems and models can be explained through the application, at different levels of granularity, of only a few basic principles: determinism, non-failure, independence (also referred to as stability), granularity, etc. Also, and based on the convergence of concepts that this view brings, we sketch a model for the implementation of several parallel constraint logic programming source languages and models based on a common, generic abstract machine and an intermediate kernel language.
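As a toy illustration of the independence principle mentioned above, the following sketch checks strict independence: two goals may safely run in and-parallel if they share no unbound variables. The term representation and helper names are hypothetical, not drawn from the paper.

```python
# Toy sketch of strict independence: two goals may run in and-parallel when
# they share no unbound variable, so neither can affect the other's search.
# The Var/tuple term representation is purely illustrative.

class Var:
    pass                                   # an unbound logic variable

def variables(term):
    if isinstance(term, Var):
        return {id(term)}
    if isinstance(term, (list, tuple)):
        return set().union(*(variables(t) for t in term)) if term else set()
    return set()                           # atoms / numbers carry no variables

def strictly_independent(goal_a, goal_b):
    return variables(goal_a).isdisjoint(variables(goal_b))

# Example: p(X, a) and q(Y) are independent; p(X) and q(X) are not.
X, Y = Var(), Var()
assert strictly_independent(("p", X, "a"), ("q", Y))
assert not strictly_independent(("p", X), ("q", X))
```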
Abstract:
Abstract interpreters rely on the existence of a fixpoint algorithm that calculates a least upper bound approximation of the semantics of the program. Usually, that algorithm is described in terms of the particular language under study and therefore it is not directly applicable to programs written in a different source language. In this paper we introduce a generic, block-based, and uniform representation of the program control flow graph and a language-independent fixpoint algorithm that can be applied to a variety of languages and, in particular, Java. Two major characteristics of our approach are accuracy (obtained through a top-down, context-sensitive approach) and reasonable efficiency (achieved by means of memoization and dependency tracking techniques). We have also implemented the proposed framework and show some initial experimental results for standard benchmarks, which further support the feasibility of the solution adopted.
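A minimal sketch of such a language-independent fixpoint computation is given below, assuming the abstract domain is supplied through `bottom`, `join` and `transfer` operations; the code is illustrative and is not the algorithm implemented in the paper.

```python
# Minimal sketch of a worklist fixpoint over a block-based control-flow graph:
# block results are memoized, and only the successors of a block whose
# abstract state changed are re-queued (a simple form of dependency tracking).
from collections import deque

def fixpoint(cfg, entry, entry_state, bottom, join, transfer):
    """cfg: {block: [successor blocks]}.  Returns {block: abstract state}."""
    state = {b: bottom for b in cfg}           # memo table of block results
    state[entry] = entry_state
    worklist = deque([entry])
    while worklist:
        block = worklist.popleft()
        out = transfer(block, state[block])    # abstract effect of the block
        for succ in cfg[block]:
            new = join(state[succ], out)
            if new != state[succ]:             # only changed states wake
                state[succ] = new              # their successors
                worklist.append(succ)
    return state
```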
Abstract:
In this genre analysis research paper, we compare U.S. patents, contracts, and regulations on technical matters with a focus upon the relation between vagueness and communicative purposes and subpurposes of these three genres. Our main interest is the investigation of intergeneric conventions across the three genres, based on the software analysis of three corpora (one for each genre, 1 million words per corpus). The result of the investigation is that intergeneric conventions are found at the level of types of expressed linguistic vagueness, but that intergeneric conventions at the level of actual formulations are rare. The conclusion is that at this latter level the influence from the situation type underlying the individual genre is more important than the overarching legal character of the genres, when we talk about introducing explicit vagueness in the text.
Abstract:
Current bias estimation algorithms for air traffic control (ATC) surveillance are focused on radar sensors, but the integration of new sensors (especially automatic dependent surveillance-broadcast and wide area multilateration) demands the extension of traditional procedures. This study describes a generic architecture for bias estimation applicable to multisensor multitarget surveillance systems. It consists of first performing bias estimations using measurements from each target, taken by a subset of sensors assumed to be reliable, to form track bias estimations. All track bias estimations are combined to obtain, for each of those sensors, the corresponding sensor bias. Then, sensor bias terms are corrected, to subsequently calculate the target-specific or sensor-target pair specific biases. Once these target-specific biases are corrected, the process is repeated recursively for other sets of less reliable sensors, assuming the bias-corrected measures from previous iterations are unbiased. This study describes the architecture and outlines the methodology for the estimation and the bias estimation design processes. The approach is then validated through simulation and compared with previous methods in the literature. Finally, the study describes the application of the methodology to the design of bias estimation procedures for a modern ATC surveillance application, specifically for off-line assessment of ATC surveillance performance.
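The staged procedure can be pictured with the following schematic Python sketch, which is not the paper's estimator: each reliable sensor's per-track bias estimates are combined into a sensor bias, its measurements are corrected, and the next, less reliable group is then processed. `estimate_track_bias` is an assumed stub standing for the actual per-target estimation.

```python
# Schematic sketch of the staged bias-estimation loop described above.
# Target- or sensor-target-specific residual biases, estimated after the
# sensor-level correction in the full scheme, are omitted for brevity.
from statistics import mean

def staged_bias_estimation(sensor_groups, measurements, references,
                           estimate_track_bias):
    """sensor_groups: sensor-id lists ordered from most to least reliable.
    measurements[sensor][target]: raw measurements (corrected in place).
    references[target]: reference trajectory for the track bias estimate."""
    sensor_bias = {}
    for group in sensor_groups:
        for sensor in group:
            # one track bias estimate per target seen by this sensor
            track_biases = [estimate_track_bias(measurements[sensor][t],
                                                references[t])
                            for t in measurements[sensor]]
            sensor_bias[sensor] = mean(track_biases)
            # correct this sensor before the next, less reliable group is
            # processed (its measurements are then treated as unbiased)
            for t in measurements[sensor]:
                measurements[sensor][t] = [m - sensor_bias[sensor]
                                           for m in measurements[sensor][t]]
    return sensor_bias
```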
Abstract:
Competency assessment is a key issue for improving the quality of teaching and learning within the current European Higher Education Area (EHEA). The aim of this paper is to review current research on the assessment of generic competences, especially through online tools. A keyword-based search of the Web of Knowledge (Thomson Reuters) was conducted; the abstracts were reviewed and the results classified by time period, country and research area. A set of articles from the period 2010-2012 was selected, in which we analyzed future trends. It is concluded that research on the assessment of generic competences is currently developing in the educational area, although it is still more prominent in the professional one. Additionally, it is surprising that it appears most often at pre-university education levels. The academic context has increased its research activity over the past five years, with developments in Anglo-Saxon countries differing from those in countries attached to the Bologna Process. The latest reports indicate that learning competences must fit the individual reality of each person, so the use of ICTs in their development and evaluation is essential, given their immediacy and motivational capability. There is a clear trend towards an evaluation model that includes a 360° analysis of both specific and generic competences.
Abstract:
A hard-in-amplitude transition to chaos in a class of dissipative flows of broad applicability is presented. For positive values of a parameter F, no matter how small, a fully developed chaotic attractor exists within some domain of additional parameters, whereas no chaotic behavior exists for F < 0. As F is made positive, an unstable fixed point reaches an invariant plane to enter a phase half-space of physical solutions; the ghosts of a line of fixed points and a rich heteroclinic structure existing at F = 0 make the limits t → +∞, F → +0 non-commuting, and allow an exact description of the chaotic flow. The formal structure of flows that exhibit the transition is determined. A subclass of such flows (coupled oscillators in near-resonance at any 2:q frequency ratio, with F representing linear excitation of the first oscillator) is fully analysed.
Abstract:
The Bologna Declaration and the implementation of the European Higher Education Area are promoting the use of active learning methodologies. The aim of this study is to evaluate the effects of applying active learning methodologies on the achievement of generic competences as well as on academic performance. This study has been carried out at the Universidad Politécnica de Madrid, where these methodologies have been applied to the Operating Systems I subject of the degree in Technical Engineering in Computer Systems. The fundamental hypothesis tested was whether the implementation of active learning methodologies (cooperative learning and problem-based learning) favours the achievement of certain generic competences ('teamwork' and 'planning and time management') and whether this improves the academic performance of our students. The original approach of this work consists in using psychometric tests to measure the degree to which students have acquired the generic competences, instead of using opinion surveys as is usual. Results indicated that active learning methodologies improve academic performance when compared to the traditional lecture/discussion method, according to the success rate obtained. These methods also seem to have an effect on the teamwork competence (the perception of the behaviour of the other members in the group), but not on the perception of each student's own behaviour. Active learning does not produce any significant change in the generic competence 'planning and time management'.
Abstract:
The new degrees in Spanish universities, generated as a result of the Bologna process, stress a new dimension: the generic competencies to be acquired by university students (leadership, problem solving, respect for the environment, etc.). At Universidad Politécnica de Madrid a teaching model was defined for two degrees: Graduate in Computer Engineering and Graduate in Software Engineering. This model incorporates the training, development and assessment of the generic competencies planned in these curricula. The aim of this paper is to describe how this model was implemented in both degrees. The model has three components. The first refers to a set of seven activities for introducing mechanisms for the training, development and assessment of generic competencies. The second component aims to coordinate the actions that implement the competencies across courses (in space and time). The third component consists of a series of activities to perform quality control. The implementation of generic competencies was carried out in first-year courses (first and second semesters), together with the planning for second-year courses (third and fourth semesters). We managed to involve a high percentage of first-year courses (80%), and the contacts that have been initiated suggest a high percentage in the second year as well.
Abstract:
The FIWARE initiative offers a set of powerful APIs that provide the basis for rapid and efficient innovation in the Future Internet. These APIs are key to the development of applications that use very recent and innovative technologies, such as the Internet of Things or Identity Management in security modules. This document presents the development of a FIWARE web application using components virtualized in virtual machines. The web application is based on "Willy Wonka's chocolate factory" as a metaphorical implementation of a security and IoT application in an industrial environment. The main component is a node.js web server that connects to several FIWARE components, known as "Generic Enablers". The implementation is composed of two main modules: the IoT module and the security module. The IoT module manages the sensors installed by Willy Wonka in the factory rooms to monitor several parameters such as temperature, pressure or occupancy. The IoT module creates and receives context information from the virtual sensors. This context information is managed and stored in a FIWARE component known as the Context Broker. The Context Broker relies on subscription mechanisms that push the sensor data to the application in real time, whenever the data change. The connection with the client is made through Web Sockets (socket.io). The security module manages user accounts and user information, authenticates users in the application using a FIWARE account, and checks their authorization to access different resources. Different roles are created with different permissions assigned. For example, Willy Wonka may have access to all resources, while an Oompa Loompa in charge of the chocolate room should only have access to the resources of his own room. This module is composed of three components: the Identity Manager, the PEP Proxy and the PDP AuthZForce. The Identity Manager stores the users' FIWARE accounts and enables Single Sign-On authentication using the OAuth2 protocol. After logging in, authenticated users receive an authentication token which is later used by AuthZForce to check the user's role and associated permissions. The PEP Proxy acts as a proxy server that forwards permitted requests and blocks unauthorized ones.
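A minimal sketch of the two Context Broker interactions described above, assuming the Orion Context Broker's NGSI v2 REST API; the host, entity identifiers and notification URL are illustrative, not those of the project.

```python
# Minimal sketch, assuming Orion's NGSI v2 REST API: a virtual sensor updates
# its room's context, and the application subscribes so that changes are
# pushed to it. Host, entity ids and the notification URL are illustrative.
import requests

ORION = "http://localhost:1026"   # assumed Context Broker endpoint

def push_temperature(room_id, celsius):
    """Append/update the temperature attribute of an existing Room entity."""
    requests.post(f"{ORION}/v2/entities/{room_id}/attrs",
                  json={"temperature": {"value": celsius, "type": "Number"}},
                  params={"type": "Room"}).raise_for_status()

def subscribe_to_room(room_id, notify_url):
    """Ask Orion to notify the application whenever the temperature changes."""
    body = {
        "subject": {"entities": [{"id": room_id, "type": "Room"}],
                    "condition": {"attrs": ["temperature"]}},
        "notification": {"http": {"url": notify_url},
                         "attrs": ["temperature"]},
    }
    requests.post(f"{ORION}/v2/subscriptions", json=body).raise_for_status()
```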