47 results for Manufacturing Execution Systems
at Universidad Politécnica de Madrid
Abstract:
Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to different receiving nodes in a high-bandwidth network. On the distributing side, a fundamental problem is grouping and scheduling such tasks so that each one involves sufficient computational cost when compared to the task creation and communication costs and other such practical overheads. On the receiving side, an important issue is to have some assurance of the correctness and characteristics of the code received and also of the kind of load the particular task is going to pose, which can be specified by means of certificates. In this paper we present, in a tutorial way, a number of general solutions to these problems, and illustrate them through their implementation in the Ciao multi-paradigm language and program development environment. This system includes facilities for parallel and distributed execution, an assertion language for specifying complex program properties (including safety and resource-related properties), and compile-time and run-time tools for performing automated parallelization and resource control, as well as certification of programs with resource consumption assurances and efficient checking of such certificates.
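The granularity-control idea described above can be illustrated with a small sketch: a task is only shipped to another worker when a (statically derived) cost bound exceeds the spawning and communication overhead. This is a minimal illustration in Python, not Ciao code; the cost function, threshold and thread pool are assumptions made for the example.

```python
# Illustrative sketch (not Ciao's implementation): granularity control for
# distributed task scheduling. A task is shipped to a worker only when its
# estimated cost exceeds the fixed creation/communication overhead.

from concurrent.futures import ThreadPoolExecutor

OVERHEAD_UNITS = 5_000        # assumed spawning + messaging cost, in abstract work units

def estimated_cost(n: int) -> int:
    """Compile-time style cost bound for the example task (here: quadratic in n)."""
    return n * n

def task(n: int) -> int:
    return sum(i * i for i in range(n))

def schedule(tasks, pool: ThreadPoolExecutor):
    """Run cheap tasks sequentially; submit only sufficiently large ones to the pool."""
    results = []
    for n in tasks:
        if estimated_cost(n) > OVERHEAD_UNITS:
            results.append(pool.submit(task, n))      # worth parallelising
        else:
            results.append(task(n))                   # run locally, avoid overhead
    return [r.result() if hasattr(r, "result") else r for r in results]

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as pool:
        print(schedule([10, 50, 500, 2000], pool))
```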
Abstract:
Choosing an appropriate accounting system for manufacturing has always been a challenge for managers. In this article we compare three accounting systems designed since 1980 to address the problems of the traditional accounting system. First, we present a short overview of the background and definition of the three accounting systems: Activity-Based Costing (ABC), Time-Driven Activity-Based Costing (TD-ABC) and Lean Accounting. Comparisons are made based on the three basic roles of the information generated by accounting systems: financial reporting, decision making, and operational control and improvement. The analysis in this paper reveals how decisions are made over the value stream in companies using Lean Accounting, while decisions under the ABC system are taken at the individual product level, and finally we show how TD-ABC covers both product and process levels for decision making. In addition, this paper shows the importance of non-financial measures for operational control and improvement under the Lean Accounting and TD-ABC methods, whereas ABC relies mostly on financial measures in this context.
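As an illustration of the decision-level differences discussed above, the following sketch contrasts how TD-ABC and classic ABC would charge the same support-department cost to an order-processing activity. All figures (capacity, minutes per order, cost pool, driver volume) are invented for the example and are not taken from the article.

```python
# Illustrative comparison (invented numbers): Time-Driven ABC assigns cost from a
# capacity cost rate multiplied by the time each activity consumes, while classic
# ABC spreads an activity's cost pool over an allocation driver.

# --- Time-Driven ABC ---------------------------------------------------------
total_dept_cost = 60_000.0            # monthly cost of the support department (EUR)
practical_capacity_min = 30_000.0     # minutes of usable capacity per month
cost_rate = total_dept_cost / practical_capacity_min   # EUR per minute = 2.0

orders_processed = 800
minutes_per_order = 8                 # time estimate for the "process order" activity
tdabc_cost = orders_processed * minutes_per_order * cost_rate

# --- Classic ABC -------------------------------------------------------------
activity_pool = 60_000.0              # same cost pool
driver_total = 1_000                  # budgeted number of orders (the driver)
abc_rate = activity_pool / driver_total
abc_cost = orders_processed * abc_rate

print(f"TD-ABC cost charged to orders: {tdabc_cost:,.0f} EUR")   # 12,800
print(f"Classic ABC cost charged:      {abc_cost:,.0f} EUR")     # 48,000
print(f"Unused capacity made visible by TD-ABC: "
      f"{total_dept_cost - tdabc_cost:,.0f} EUR")                # 47,200
```

The point of the sketch is only that TD-ABC makes unused capacity explicit, whereas a pure driver-rate allocation hides it inside the product cost.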
Abstract:
This PhD thesis falls within the field of reconfigurable embedded systems, wireless sensor networks for high-performance applications, and distributed computing. The document focuses on the study of processing alternatives for high-performance autonomous distributed embedded systems (High-Performance Autonomous Distributed Systems, HPADS), as well as their evolution towards high-performance computing. The study has been carried out both at platform level and at the level of the processing architectures inside the platform, with the aim of optimising aspects as relevant as the energy efficiency, the computing capacity and the fault tolerance of the system. HPADS are feedback systems, normally made up of distributed elements, networked or not, with a certain capacity for adaptation and with enough intelligence to carry out prognosis and/or self-assessment tasks. This class of systems is usually part of more complex systems called Cyber-Physical Systems (CPSs). CPSs cover an enormous spectrum of applications, ranging from medical applications, manufacturing, or aerospace applications, among many others. For the design of this type of systems, aspects such as dependability, the definition of models of computation, or the use of methodologies and/or tools that help increase scalability and manage complexity are fundamental. The first part of this PhD thesis focuses on the study of those platforms in the state of the art which, given their characteristics, are applicable in the field of CPSs, as well as on the proposal of a new high-performance platform design that better fits the new and more demanding requirements of the new applications. This first part includes the description, implementation and validation of the proposed platform, as well as conclusions about its usability and limitations. The main objectives for the design of the proposed platform are the following: to study the feasibility of using a RAM-based FPGA as the main processor of the platform in terms of energy consumption and computing capacity; to propose power-management techniques for every stage of the working profile of the platform; and to propose the inclusion of dynamic and partial reconfiguration of the FPGA (Dynamic Partial Reconfiguration, DPR), so that certain parts of the system can be changed at run time without interrupting the remaining parts, and to evaluate its applicability in the case of HPADS. The new applications and scenarios faced by CPSs impose new requirements on the bandwidth needed for data processing, as well as on data acquisition and communication, in addition to a clear increase in the complexity of the algorithms employed. In order to meet these new requirements, platforms are migrating from traditional 8-bit single-processor systems to hybrid hardware-software systems that include several processors, or several processors together with programmable logic. 
Among these new architectures, FPGAs and systems on chip (System on Chip, SoC) that include embedded processors and programmable logic provide solutions with very good results in terms of energy consumption, price, computing capacity and flexibility. These results are even better when the applications have high computing requirements and when the working conditions are very likely to change in real time. The platform proposed in this PhD thesis has been named HiReCookie. Its architecture includes a RAM-based FPGA as the only processor, as well as a design compatible with the wireless sensor network platform developed at the Centro de Electrónica Industrial of the Universidad Politécnica de Madrid (CEI-UPM), known as Cookies. This FPGA, a Spartan-6 LX150, was, at the time this work began, the best option in terms of power consumption and amount of integrated resources while also allowing dynamic and partial reconfiguration. It is important to stress that, although its power figures are the lowest of this family of devices, the instantaneous power consumed is still very high for systems that have to operate in a distributed and autonomous way and, in most cases, powered by batteries. For this reason, it is necessary to include energy-saving strategies in the design in order to increase the usability and the lifetime of the platform. The first strategy implemented consists of dividing the platform into different power islands, so that only those elements that are strictly necessary remain powered while the rest can be completely switched off. In this way it is possible to combine different operating modes and thus greatly optimise the energy consumption. Switching the FPGA off to save energy during idle periods implies losing its configuration, since the configuration memory is volatile. To reduce the impact on consumption and on the time required to fully reconfigure the platform once it is powered up again, this work includes a technique for compressing the FPGA configuration file, so that both the configuration time and, hence, the energy consumed are reduced. Although several of the design requirements can be satisfied with the design of the HiReCookie platform, it is necessary to keep optimising parameters such as energy consumption, fault tolerance and processing capacity. This is only possible by exploiting all the possibilities offered by the processing architecture inside the FPGA. Therefore, the second part of this PhD thesis focuses on the design of a reconfigurable architecture named ARTICo3 (Arquitectura Reconfigurable para el Tratamiento Inteligente de Cómputo, Confiabilidad y Consumo de energía; a reconfigurable architecture for the intelligent management of computation, dependability and energy consumption), aimed at improving these parameters through a dynamic use of resources. ARTICo3 is a processing architecture for RAM-based FPGAs, with bus-based communication, prepared to support the dynamic management of the internal resources of the FPGA at run time thanks to the inclusion of dynamic and partial reconfiguration. 
Thanks to this partial reconfiguration capability, it is possible to adapt the levels of processing capacity, energy consumption or fault tolerance to the demands of the application, the environment or internal metrics of the device by adapting the number of resources assigned to each task. This second part of the thesis details the design of the architecture, its implementation on the HiReCookie platform as well as on another FPGA family, and its validation by means of different tests and demonstrations. The main objectives set for the architecture are the following: to propose a methodology based on a multi-thread approach, such as those proposed by CUDA (Compute Unified Device Architecture) or OpenCL, in which different kernels, or execution units, run on a variable number of hardware accelerators without requiring changes in the application code; to propose a design and provide an architecture in which the working conditions change dynamically depending either on external parameters or on parameters indicating the state of the platform, these changes in the working point of the architecture being made possible by the dynamic and partial reconfiguration of hardware accelerators at run time; to exploit the possibilities of concurrent processing, even in a bus-based architecture, by optimising burst data transactions towards the accelerators; to take advantage of the acceleration achieved by purely hardware modules in order to obtain better energy efficiency; to be able to change the hardware redundancy levels dynamically, according to the needs of the system, in real time and without changes to the application code; and to propose an abstraction layer between the application code and the dynamic use of the FPGA resources. Designing with FPGAs allows the use of hardware modules specifically created for a given application, making it possible to obtain much higher performance than with general-purpose architectures. In addition, some FPGAs allow the dynamic and partial reconfiguration of certain parts of their logic at run time, which gives the design great flexibility. FPGA manufacturers offer predefined architectures with the possibility of adding pre-designed blocks to build systems on chip in a more or less direct way. However, the way these hardware modules are organised within the internal architecture, whether statically or dynamically, or the way information is exchanged among them, greatly influences the computing capacity and the energy efficiency of the system. Likewise, the ability to load hardware modules on demand makes it possible to add redundant blocks that raise the fault-tolerance level of the system. Nevertheless, the complexity involved in designing dedicated hardware blocks must not be underestimated: designing a hardware block means designing not only the block itself but also its interfaces and, in some cases, the software drivers needed to manage it. Moreover, as more blocks are added, the design space becomes more complex and its programming more difficult. 
Although most manufacturers offer predefined interfaces, commercial IPs (Intellectual Property blocks) and templates to help with system design, in order to exploit the real possibilities of the system it is necessary to build architectures on top of the established ones so as to ease the use of parallelism and redundancy, and to provide an environment that supports the dynamic management of resources. To provide this kind of support, ARTICo3 works within a solution space formed by three fundamental axes: computation, energy consumption and dependability. Each working point is thus obtained as a trade-off solution among these three parameters. By means of dynamic and partial reconfiguration and an improved data transfer scheme between the main memory and the accelerators, it is possible to devote a variable number of resources over time to each task, which makes the internal resources of the FPGA virtually unlimited. This variation over time of the number of resources per task can be used either to increase the level of parallelism, and therefore the acceleration, or to increase redundancy, and therefore the fault-tolerance level. At the same time, using an optimal number of resources for a task improves the energy consumption, since either the instantaneous power consumed or the processing time can be reduced. In order to keep complexity within reasonable limits, it is important that the changes made in the hardware are completely transparent to the application code. In this respect, several levels of transparency are included: scalability transparency, meaning that the resources used by a given task can be modified without any change to the application code; performance transparency, meaning that the system increases its performance when the workload grows, without changes to the application code; replication transparency, meaning that multiple instances of the same module can be used either to add redundancy or to increase processing capacity, all without changing the application code; location transparency, meaning that the physical position of the hardware modules is irrelevant for their addressing from the application code; failure transparency, meaning that if a hardware module fails, the application code directly obtains the correct result thanks to redundancy; and concurrency transparency, meaning that whether a task is carried out by more or fewer blocks is transparent to the code that invokes it. This PhD thesis therefore contributes along two different lines: first, the design of the HiReCookie platform and, second, the design of the ARTICo3 architecture. The main contributions of this thesis are summarised below. Architecture of the HiReCookie platform, including: compatibility with the Cookies platform to increase its capabilities; division of the architecture into different power islands; implementation of the various low-power modes and node wake-up policies; and creation of a compressed FPGA configuration file to reduce the time and energy spent in the initial configuration. 
Design of the ARTICo3 reconfigurable architecture for RAM-based FPGAs, including: a model of computation and execution modes inspired by the CUDA model but based on reconfigurable hardware, with a variable number of thread blocks per execution unit; a structure to optimise burst data transactions, providing data in a cascaded or parallel fashion to the different modules and including a majority voting process and reduction operations; an abstraction layer between the main processor, which runs the application code, and the resources assigned to the different tasks; an architecture of the reconfigurable hardware modules that keeps the system scalable by adding an interface for new functionalities through simple accesses to an internal RAM memory; and online characterisation of the tasks to provide information to a resource-management module so as to improve operation in terms of energy and processing when switching among different fault-tolerance levels. The document is divided into two main parts comprising a total of five chapters. First, after motivating the need for new platforms to cover the new applications, the design of the HiReCookie platform is detailed, including its parts, the possibilities for lowering energy consumption, use cases of the platform and design validation tests. The second part of the document describes the reconfigurable architecture, its implementation on several FPGAs, and validation tests in terms of processing capacity and energy consumption, including how these aspects are affected by the chosen fault-tolerance level. The chapters of the document are the following: Chapter 1 analyses the main objectives, motivation and theoretical background needed to follow the rest of the document. Chapter 2 focuses on the design of the HiReCookie platform and its possibilities for reducing energy consumption. Chapter 3 describes the ARTICo3 reconfigurable architecture. Chapter 4 focuses on the validation tests of the architecture, using the HiReCookie platform for most of the tests; an application example is shown to analyse the operation of the architecture. Chapter 5 concludes this PhD thesis with the conclusions obtained, the original contributions of the work, results and future lines. ABSTRACT This PhD Thesis is framed within the field of dynamically reconfigurable embedded systems, advanced sensor networks and distributed computing. The document is centred on the study of processing solutions for high-performance autonomous distributed systems (HPADS) as well as their evolution towards High-Performance Computing (HPC) systems. The approach of the study is focused on both platform and processor levels to optimise critical aspects such as computing performance, energy efficiency and fault tolerance. HPADS are considered feedback systems, normally networked and/or distributed, with real-time adaptive and predictive functionality. These systems, as part of more complex systems known as Cyber-Physical Systems (CPSs), can be applied in a wide range of fields such as military, health care, manufacturing, aerospace, etc. For the design of HPADS, high levels of dependability, the definition of suitable models of computation, and the use of methodologies and tools to support scalability and complexity management are required. 
The first part of the document studies the different possibilities at platform design level in the state of the art, together with the description, development and validation tests of the platform proposed in this work to cope with the previously mentioned requirements. The main objectives targeted by this platform design are the following: to study the feasibility of using SRAM-based FPGAs as the main processor of the platform in terms of energy consumption and performance for highly demanding applications; to analyse and propose energy management techniques to reduce energy consumption in every stage of the working profile of the platform; to provide a solution with dynamic partial and wireless remote HW reconfiguration (DPR) to be able to change certain parts of the FPGA design at run time and on demand without interrupting the rest of the system; and to demonstrate the applicability of the platform in different test-bench applications. In order to select the best approach for the platform design in terms of processing alternatives, a study of the evolution of the state-of-the-art platforms is required to analyse how different architectures cope with new, more demanding applications and scenarios: security, mixed-critical systems for aerospace, multimedia applications, or military environments, among others. In all these scenarios, important changes in the required processing bandwidth or in the complexity of the algorithms used are provoking the migration of the platforms from single-microprocessor architectures to multiprocessing and heterogeneous solutions with higher instantaneous power consumption but higher energy efficiency. Within these solutions, FPGAs and Systems on Chip that include FPGA fabric and dedicated hard processors offer a good trade-off among flexibility, processing performance, energy consumption and price when they are used in demanding applications where working conditions are very likely to vary over time and highly complex algorithms are required. The platform architecture proposed in this PhD Thesis is called HiReCookie. It includes an SRAM-based FPGA as the main and only processing unit. The FPGA selected, the Xilinx Spartan-6 LX150, was at the beginning of this work the best choice in terms of amount of resources and power. Although its power levels are the lowest of this kind of device, they can still be very high for distributed systems that normally work powered by batteries. For that reason, it is necessary to include different energy-saving possibilities to increase the usability of the platform. In order to reduce energy consumption, the platform architecture is divided into different power islands so that only those parts of the system that are strictly needed are powered on, while the rest of the islands can be completely switched off. This allows a combination of different low-power modes to decrease energy consumption. In addition, one of the most important handicaps of SRAM-based FPGAs is that they are not alive at power-up. Therefore, recovering the system from a switched-off state requires reloading the FPGA configuration from a non-volatile memory device. For that reason, this PhD Thesis also proposes a methodology to compress the FPGA configuration file in order to reduce time and energy during the initial configuration process. Although some of the requirements for the design of HPADS are already covered by the design of the HiReCookie platform, it is necessary to continue improving energy efficiency, computing performance and fault tolerance. 
This is only possible by exploiting all the opportunities provided by the processing architectures configured inside the FPGA. Therefore, the second part of the thesis details the design of the so-called ARTICo3 FPGA architecture to enhance the already intrinsic capabilities of the FPGA. ARTICo3 is a DPR-capable bus-based virtual architecture for multiple HW acceleration in SRAM-based FPGAs. The architecture provides support for dynamic resource management in real time. In this way, by using DPR, it will be possible to change the levels of computing performance, energy consumption and fault tolerance on demand by increasing or decreasing the amount of resources used by the different tasks. Apart from the detailed design of the architecture and its implementation in different FPGA devices, different validation tests and comparisons are also shown. The main objectives targeted by this FPGA architecture are listed as follows: Provide a method based on a multithread approach, such as those offered by CUDA (Compute Unified Device Architecture) or OpenCL kernel executions, where kernels are executed in a variable number of HW accelerators without requiring application code changes. Provide an architecture to dynamically adapt working points according to either self-measured or external parameters in terms of energy consumption, fault tolerance and computing performance. Taking advantage of DPR capabilities, the architecture must provide support for a dynamic use of resources in real time. Exploit concurrent processing capabilities in a standard bus-based system by optimizing data transactions to and from HW accelerators. Measure the advantage of HW acceleration as a technique to boost performance, improve processing times and save energy by reducing active times for distributed embedded systems. Dynamically change the levels of HW redundancy to adapt fault tolerance in real time. Provide HW abstraction from SW application design. FPGAs give the possibility of designing specific HW blocks for every required task to optimise performance, while some of them also support DPR. Apart from the possibilities provided by manufacturers, the way these HW modules are organised, addressed and multiplexed in area and time can improve computing performance and energy consumption. At the same time, fault tolerance and security techniques can also be dynamically included using DPR. However, the inherent complexity of designing new HW modules for every application is not negligible. It consists not only of the HW description, but also of the design of drivers and interfaces with the rest of the system, while the design space is widened and becomes more complex to define and program. Even though the tools provided by the majority of manufacturers already include predefined bus interfaces, commercial IPs, and templates to ease application prototyping, it is necessary to improve these capabilities. By adding new architectures on top of them, it is possible to take advantage of parallelization and HW redundancy while providing a framework to ease the use of dynamic resource management. ARTICo3 works within a solution space where working points change at run time in a 3D space defined by three different axes: Computation, Consumption, and Fault Tolerance. Therefore, every working point is found as a trade-off solution among these three axes. By means of DPR, different accelerators can be multiplexed so that the amount of available resources for any application is virtually unlimited. 
Taking advantage of DPR capabilities and a novel way of transmitting data to the reconfigurable HW accelerators, it is possible to dedicate a dynamically-changing number of resources to a given task in order to either boost computing speed or add HW redundancy and a voting process to increase fault-tolerance levels. At the same time, using an optimised amount of resources for a given task reduces energy consumption by reducing instant power or computing time. In order to keep complexity under certain limits, it is important that HW changes are transparent to the application code. Therefore, different levels of transparency are targeted by the system: Scalability transparency: a task must be able to expand its resources without changing the system structure or application algorithms. Performance transparency: the system must reconfigure itself as the load changes. Replication transparency: multiple instances of the same task are loaded to increase reliability and performance. Location transparency: resources are accessed with no knowledge of their location by the application code. Failure transparency: a task must be completed despite a failure in some components. Concurrency transparency: different tasks will work in a concurrent way that is transparent to the application code. Therefore, as can be seen, the Thesis contributes in two different ways: first, with the design of the HiReCookie platform and, second, with the design of the ARTICo3 architecture. The main contributions of this PhD Thesis are listed below. Architecture of the HiReCookie platform, including: compatibility of the processing layer for high-performance applications with the Cookies Wireless Sensor Network platform for fast prototyping and implementation; a division of the architecture into power islands; all the different low-power modes; and the creation of the partial-initial bitstream together with the wake-up policies of the node. The design of the reconfigurable architecture for SRAM FPGAs, ARTICo3, including: a model of computation and execution modes inspired by CUDA but based on reconfigurable HW with a dynamic number of thread blocks per kernel; a structure to optimise burst data transactions providing coalesced or parallel data to HW accelerators, a parallel voting process and a reduction operation; the abstraction provided to the host processor with respect to the operation of the kernels in terms of the number of replicas, modes of operation, location in the reconfigurable area and addressing; the architecture of the modules representing the thread blocks, which makes the system scalable by adding functional units with only an access to a BRAM port; and the online characterization of the kernels to provide information to a scheduler or resource manager in terms of energy consumption and processing time when changing among different fault-tolerance levels, as well as whether a kernel is expected to be memory-bound or compute-bound. The document of the Thesis is divided into two main parts with a total of five chapters. First, after motivating the need for new platforms to cover new, more demanding applications, the design of the HiReCookie platform, its parts and several partial tests are detailed. The design of the platform alone does not cover all the needs of these applications. Therefore, the second part describes the architecture inside the FPGA, called ARTICo3, proposed in this PhD Thesis. 
The architecture and its implementation are tested in terms of energy consumption and computing performance, showing different possibilities to improve fault tolerance and how this impacts energy and processing time. Chapter 1 shows the main goals of this PhD Thesis and the technology background required to follow the rest of the document. Chapter 2 shows all the details about the design of the FPGA-based platform HiReCookie. Chapter 3 describes the ARTICo3 architecture. Chapter 4 is focused on the validation tests of the ARTICo3 architecture. An application for proof of concept is explained, where typical kernels related to image processing and encryption algorithms are used; further experimental analyses are performed using these kernels. Chapter 5 concludes the document with the conclusions drawn, comments on the contributions of the work, and some possible future lines of work.
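As a rough illustration of the computation/energy/fault-tolerance trade-off that ARTICo3 exposes, the following sketch shows how a fixed pool of reconfigurable slots could be split between parallel copies (speed) and redundant copies (voting). It is a conceptual model with invented numbers, not the actual ARTICo3 resource manager.

```python
# Conceptual sketch (not the ARTICo3 implementation): splitting a fixed number of
# reconfigurable slots between parallelism and redundancy. A kernel asks for a
# fault-tolerance mode; the remaining slots become parallel copies.

from dataclasses import dataclass

REDUNDANCY = {"none": 1, "dmr": 2, "tmr": 3}   # copies needed per logical accelerator

@dataclass
class WorkingPoint:
    parallel_units: int     # logical accelerators working on different data
    copies_per_unit: int    # redundant copies voting on the same data

def choose_working_point(free_slots: int, ft_mode: str) -> WorkingPoint:
    copies = REDUNDANCY[ft_mode]
    if free_slots < copies:
        raise ValueError("not enough reconfigurable slots for the requested mode")
    return WorkingPoint(parallel_units=free_slots // copies, copies_per_unit=copies)

def estimate(wp: WorkingPoint, work_items: int, t_item: float, p_slot: float):
    """Rough time/energy model: time shrinks with parallel units, power grows with busy slots."""
    time = work_items * t_item / wp.parallel_units
    power = wp.parallel_units * wp.copies_per_unit * p_slot
    return time, time * power          # (seconds, joules)

if __name__ == "__main__":
    for mode in ("none", "dmr", "tmr"):
        wp = choose_working_point(free_slots=8, ft_mode=mode)
        t, e = estimate(wp, work_items=1000, t_item=0.01, p_slot=0.5)
        print(f"{mode}: {wp} -> {t:.2f} s, {e:.2f} J")
```

Running it shows the intended tension: triple modular redundancy triples the fault coverage of each logical accelerator but leaves fewer units for parallel work, lengthening execution time and raising energy per job.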
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communications can transfer data at high speed. The concept of distributed systems emerged as systems where different parts are executed on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of RTSJ (Real-Time Specification for Java), which is a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification under the Java Community Process (JSR-302) is being developed. Its purpose is to define those capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, such as predictability and reliability in the timing behaviour and the resource usage. The design starts with the definition of a computational model which identifies, among other things: the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees a suitable timing behaviour. It also includes mechanisms to monitor the functional and timing behaviour. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include two phases, non-functional parameters, and message size optimizations. 
Although serialization is one of the fundamental operations to ensure proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behaviour, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements. Distributed real-time embedded systems are increasingly important to society. Their demand is growing and we increasingly depend on the services they provide. High-integrity systems constitute a subset of great importance; they are characterised by the fact that a failure in their operation can cause loss of human life, damage to the environment or considerable economic losses. The need to satisfy strict timing requirements makes their development more complex. As embedded systems continue to spread through our society, it is necessary to guarantee a contained development cost by using suitable techniques in their design, maintenance and certification. In particular, a flexible and hardware-independent technology is required. The evolution of communication networks and paradigms, as well as the need for greater computing power and fault tolerance, has motivated the interconnection of electronic devices. The communication mechanisms allow data transfer at high transmission speeds. In this context, the concept of a distributed system has emerged as a system whose components run in parallel on several nodes and interact with each other through communication networks. An interesting concept is that of real-time systems that are neutral with respect to the execution platform, characterised by the lack of knowledge of this platform during their design. This property is relevant because it is desirable that they run on the widest possible variety of architectures, they have an average lifetime of more than ten years, and the place where they are executed may change. The Java programming language is a good basis for the development of this kind of systems. For this reason RTSJ (Real-Time Specification for Java) was created, which is an extension of the language to allow the development of real-time systems. However, RTSJ does not provide facilities for the development of distributed real-time applications. This is an important limitation, given that most current and future systems will be distributed. The DRTSJ (Distributed RTSJ) group was created under the Java Community Process (JSR-50) in order to define the abstractions that address this limitation, but at present no formal specification exists yet. 
The objective of this thesis is to develop a communication middleware for the development of distributed real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented taking into account the main requirements, such as the predictability and reliability of the timing behaviour and the resource usage. The design starts from the definition of a computational model which identifies, among other things: the communication model, the most suitable underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and the resources needed for the execution of synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees adequate timing behaviour. Mechanisms to monitor the functional and timing behaviour have also been included. Independence from the network protocol has been sought by defining a network interface and specific modules. The JRMP protocol has also been modified to include different phases, non-functional parameters and optimizations of the message sizes. Although serialization is one of the fundamental operations to ensure proper data transmission, current implementations are not suitable for critical systems and there are no alternatives. This work proposes a predictable serialization which has involved the development of a new compiler for the generation of optimized code according to the computational model. The proposed solution has the advantage that, at compile time, it allows us to schedule the communications and adjust the memory usage. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on: the functional behaviour, the memory usage, the processor usage (end-to-end response time and the response time in each functional block) and the network usage (actual consumption compared with the estimated consumption). The good results obtained in an industrial application developed by Thales Avionics (a flight management system) and in exhaustive tests have shown that the design and the prototype are reliable for industrial applications with strict timing requirements.
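The separation between resource allocation and execution described for the middleware can be sketched as a two-phase remote reference: a bind phase that reserves the connection and non-functional parameters, and an invoke phase that only performs the call. The sketch below is a hedged Python analogue with an invented wire format; it is not the thesis middleware nor the modified JRMP protocol.

```python
# Hedged sketch of the two-phase idea described above (invented wire format):
# phase 1 reserves the channel and non-functional parameters of a remote
# reference; phase 2 performs the actual invocation within that reservation.

import socket, json

class RemoteRef:
    def __init__(self, host: str, port: int, deadline_ms: int, max_msg_bytes: int):
        self.addr = (host, port)
        self.params = {"deadline_ms": deadline_ms, "max_msg_bytes": max_msg_bytes}
        self.channel = None

    def bind(self):
        """Phase 1: open the connection and negotiate non-functional parameters."""
        self.channel = socket.create_connection(self.addr,
                                                timeout=self.params["deadline_ms"] / 1000)
        self.channel.sendall(json.dumps({"op": "reserve", **self.params}).encode() + b"\n")

    def invoke(self, method: str, args: list):
        """Phase 2: the call itself; no allocation or negotiation happens here."""
        if self.channel is None:
            raise RuntimeError("invoke() before bind(): resources not reserved")
        msg = json.dumps({"op": "call", "method": method, "args": args}).encode() + b"\n"
        if len(msg) > self.params["max_msg_bytes"]:
            raise ValueError("message exceeds reserved size")
        self.channel.sendall(msg)
        return self.channel.recv(self.params["max_msg_bytes"])
```

The point is only the structural split: everything with unpredictable cost (connection set-up, negotiation, buffer sizing) happens once in `bind()`, so `invoke()` has a bounded, analysable behaviour.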
Abstract:
Nowadays computing platforms consist of a very large number of components that need to be supplied with different voltage levels and power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes the performance and meets electrical specifications plus cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The design difficulties of selecting the right solution arise due to the wide range of power conversion products provided by different manufacturers. These products range from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built. The designer has to select a limited number of converters in order to simplify the analysis. In this thesis, in order to overcome the mentioned difficulties, a new design methodology for power supply systems is proposed. This methodology integrates evolutionary computation techniques in order to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps: one for the automatic generation of architectures and the other for the optimized selection of components. This thesis details the implementation of these two steps. The usefulness of the methodology is corroborated by contrasting the results using real problems and experiments designed to test the limits of the algorithms.
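To make the evolutionary step concrete, the sketch below runs a very small genetic search that assigns a converter from an invented catalogue to each load and minimises a combined cost/loss figure. It only illustrates the kind of search the methodology automates; the encoding, fitness weights and catalogue are assumptions, not the thesis algorithm.

```python
# Minimal sketch of the idea (not the thesis algorithm): an evolutionary search over
# converter choices for a fixed set of loads, trading off cost against efficiency.
# The converter catalogue below is invented for illustration.

import random

LOADS = [("cpu", 12.0), ("radio", 3.3), ("sensors", 1.8)]        # (name, watts)
CATALOGUE = [  # (name, cost_eur, efficiency)
    ("buck_A", 1.2, 0.90), ("buck_B", 2.0, 0.94), ("module_C", 3.5, 0.96),
]

def fitness(solution):
    """Lower is better: component cost plus a penalty for dissipated power."""
    cost = sum(CATALOGUE[i][1] for i in solution)
    loss = sum(p / CATALOGUE[i][2] - p for (_, p), i in zip(LOADS, solution))
    return cost + 2.0 * loss

def evolve(pop_size=30, generations=50, mutation=0.2):
    pop = [[random.randrange(len(CATALOGUE)) for _ in LOADS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]                 # selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)           # crossover of two parents
            cut = random.randrange(1, len(LOADS))
            child = a[:cut] + b[cut:]
            if random.random() < mutation:               # occasional mutation
                child[random.randrange(len(LOADS))] = random.randrange(len(CATALOGUE))
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return [(LOADS[k][0], CATALOGUE[i][0]) for k, i in enumerate(best)], fitness(best)

if __name__ == "__main__":
    print(evolve())
```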
Abstract:
Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this article is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The article describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and updated view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model copes with the complexity and variability of these systems and enables abstraction over non-relevant details. The infrastructure uses this information model to monitor and manage the modelled environment, performing and detecting changes at run time. The agent infrastructure is further detailed, and its components and the relationships between them are explained. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to supporting distributed configuration scenarios.
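A minimal sketch of the monitoring idea follows: an agent periodically samples a managed resource, maps the sample onto a small information-model entry, and reports only detected changes. The model fields and the probe are invented placeholders, not the article's actual information model or Glassfish instrumentation.

```python
# Hedged sketch of the agent idea described above (invented model and probe):
# an agent samples a managed resource, maps it onto a small information-model
# entry, and reports detected changes to a hypothetical manager.

import time
from dataclasses import dataclass, asdict

@dataclass
class ResourceState:          # simplified information-model entry
    name: str
    status: str               # "up" / "down"
    sessions: int

def probe_app_server() -> ResourceState:
    """Stand-in for instrumenting a real server (e.g. via its management API)."""
    return ResourceState(name="app-server-1", status="up", sessions=42)

def monitor(poll_seconds: float, cycles: int):
    last = None
    for _ in range(cycles):
        current = probe_app_server()
        if last is None or asdict(current) != asdict(last):
            print("change detected:", asdict(current))   # would be sent to the manager
        last = current
        time.sleep(poll_seconds)

if __name__ == "__main__":
    monitor(poll_seconds=0.1, cycles=3)
```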
Abstract:
Lowering the energy cost has always been a challenge for concentrated photovoltaics. The FK concentrator enhances the performance (efficiency, acceptance angle and manufacturing tolerances) of the conventional CPV system based on a Fresnel primary stage and a secondary lens, while keeping its simplicity and potentially low-cost manufacturing. At the same time, the FXTP (Fresnel lens + reflective prism) at first glance has better cost potential but significantly higher sensitivity to manufacturing errors. This work presents a comparison of these two approaches applied to the two main technologies of Fresnel lens production (PMMA and silicone-on-glass) and the effect of typical deformations that occur under real operating conditions.
Abstract:
Most of the prestressed concrete structures built in the last 50 years have shown excellent durability when their construction follows the recommendations of good design as well as good execution and placement of the structure. This fact is largely due to the concern raised by the stress corrosion cracking phenomenon typical of high-strength prestressing steel. Less attention has been paid to the stress corrosion cracking susceptibility of post-tensioning anchorages, possibly because few catastrophic failures have been reported. The concepts of damage tolerance and fracture mechanics in civil engineering structures have recently started to be incorporated in some design and calculation codes for steel structures; however, they are still far from being assimilated and routinely used by engineers in their calculations when the occasion requires it. This lack of familiarity with damage tolerance aspects generates significant maintenance and repair costs. This work has studied the applicability of fracture mechanics concepts to the components of the post-tensioning systems used in civil engineering, employing them to analyse the susceptibility of the prestressing steel to stress corrosion cracking and the loss of load-bearing capacity of post-tensioning anchorage heads due to the presence of defects. For this purpose, both experimental and numerical techniques have been combined. Surface defects in prestressing wires do not appear in isolation; rather, there is a certain continuity in the axial direction as well as a large number of defects. For this reason a statistical approach has been chosen, which is more appropriate than a deterministic one. The use of statistical models based on extreme value theory has made it possible to characterise the surface condition of 5.2 mm diameter wires. On the other hand, the susceptibility of the wire to stress corrosion cracking has been evaluated through a test campaign in accordance with the current standard, which has allowed its behaviour to be characterised statistically. In the light of the results, it has been possible to evaluate how the parameters that define the surface condition of the wire can determine the durability of the prestressing steel in terms of its resistance to stress corrosion cracking, as evaluated by the tests specified in the standard. In the case of anchorage heads for prestressing tendons, defects appear in isolation and originate from marks, scratches or corrosion pits that may be produced during manufacturing, transport, handling or placement on site. Given the nature of the defects, the deterministic approach is more appropriate than the statistical one. Assessing the importance of a defect in a structural element requires estimating the local crack driving force generated by the defect, which makes it possible to know whether the defect is critical or may become critical if it grows over time (by fatigue, corrosion, a combination of both, etc.). In this work the defects have been idealised as cracks, so that the analysis remains on the safe side. The local driving force of the defect has been evaluated using finite element models of the anchorage head that simulate its real working conditions during its service life. 
Based on these numerical models, the influence of several factors on the failure load of the anchorage has been analysed, such as the anchorage geometry, the support conditions, the anchorage material, and the size, shape and position of the defect. The results of the numerical analysis have been satisfactorily contrasted with an experimental campaign on scale models of anchorage heads made of poly(methyl methacrylate), into which defects of various sizes and at different positions were artificially introduced. ABSTRACT Most of the prestressed concrete structures built in the last 50 years have demonstrated an excellent durability when they are constructed in accordance with the rules of good design, detailing and execution. This is particularly true with respect to the feared stress corrosion cracking, which is typical of high-strength prestressing steel wires. Less attention, however, has been paid to the stress corrosion cracking susceptibility of anchorages for steel tendons for prestressing concrete, probably due to the low number of reported failure cases. Damage tolerance and fracture mechanics concepts in civil engineering structures have recently started to be incorporated in some design and calculation rules for metallic structures; however, they are still far from being assimilated and used by civil engineers in their calculations on a regular basis. This limited knowledge of the damage tolerance basis could lead to significant repair and maintenance costs. This work deals with the applicability of fracture mechanics and damage tolerance concepts to the components of prestressing systems used in civil engineering. Such concepts have been applied to assess the susceptibility of prestressing steel wires to stress corrosion cracking and the reduction of the load-bearing capability of anchorage devices due to the presence of defects. For this purpose a combination of experimental work and numerical techniques has been performed. Surface defects in prestressing steel wires do not appear alone; a certain degree of continuity in the axial direction exists, and a significant number of such defects is also observed. Hence a statistical approach was used, which is assumed to be more appropriate than the deterministic approach. The use of statistical methods based on extreme value theory has allowed the characterisation of the surface condition of 5.2 mm-diameter wires. On the other hand, the stress corrosion cracking susceptibility of the wire has been assessed by means of an experimental testing program in line with the current regulations, which has allowed the statistical characterisation of its performance against stress corrosion cracking. In the light of the test results, it has been possible to evaluate how the surface condition parameters can determine the durability of the prestressing steel with regard to its resistance against stress corrosion cracking, as assessed by means of the current testing regulations. In the case of anchorage devices for steel tendons for prestressing concrete, the damage appears as isolated defects originating from dents, scratches or corrosion pits that may be produced during the manufacturing process, transport, handling, assembly or use. Due to the nature of these defects, in this case the deterministic approach is more appropriate than the statistical approach. 
The assessment of the relevance of a defect in a structural component requires the computation of the stress intensity factors, which in turn allow the evaluation of whether the defect size is critical or could become critical with the progress of time (due to fatigue, corrosion or a combination of both effects). In this work the damage is idealised as cracks, a conservative hypothesis. The stress intensity factors have been calculated by means of finite element models of the anchorage representing the real working conditions during its service life. These numerical models were used to assess the impact of several factors on the failure load of the anchorage, such as the anchorage geometry, material, support conditions, and the defect size, shape and location. The results from the numerical analysis have been successfully correlated with the results of the experimental testing program on scale models of the anchorages in poly(methyl methacrylate), in which artificial damage of several sizes and at several locations was introduced.
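For reference, the standard linear-elastic fracture mechanics relations behind this kind of assessment (textbook forms, not the specific finite element models used in the thesis) are:

```latex
% Standard LEFM relations (textbook form, not the thesis's FE model):
% stress intensity factor for a crack of depth a under remote stress \sigma,
% with Y a geometry factor, and the resulting critical-defect criterion.
\[
  K_I = Y\,\sigma\,\sqrt{\pi a},
  \qquad
  \text{failure when } K_I \ge K_{IC}
  \;\Longleftrightarrow\;
  a \ge a_c = \frac{1}{\pi}\left(\frac{K_{IC}}{Y\,\sigma}\right)^{2}.
\]
```

Here $K_{IC}$ is the material's fracture toughness; the finite element models described above essentially provide the geometry-dependent factor $Y$ for the anchorage head and defect location.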
Abstract:
The deployment of Internet technologies has enabled the widespread use of e-manufacturing strategies and the development of tools for compiling, transforming and synchronising manufacturing data via the Web. In this field, an area of potential development is the extension of virtual manufacturing to Performance Management (PM) processes, a critical area for decision-making and for executing improvement actions in manufacturing. This doctoral work proposes an Information Architecture for the development of virtual tools in the PM field. Its application ensures the interoperability required in the information-processing tasks of decision-making. It is made up of three sub-systems: a conceptual model, an object model, and a Web framework composed of an information platform and a Web services (WS) architecture. The conceptual model and the object model are based on developing all the information needed to define and obtain the different measurement indicators required by PM processes. The information platform uses XML and B2MML technologies to structure a new set of performance measurement exchange message schemas (PM-XML). This information platform is complemented by a Web services architecture that uses these schemas to integrate the coding, decoding, translation and assessment processes of the key performance indicators (KPIs). These services perform all the transactions that enable the source data to be transformed into intelligent information usable in decision-making processes. A practical case of data exchange in measurement processes in the area of equipment maintenance is shown to verify the usefulness of the architecture. ABSTRACT The implementation of Internet technologies has led to e-Manufacturing technologies becoming more widely used and to the development of tools for compiling, transforming and synchronizing manufacturing data through the Web. In this context, a potential area for development is the extension of virtual manufacturing to Performance Measurement (PM) processes, a critical area for decision-making and implementing improvement actions in manufacturing. This thesis proposes an Information Architecture to integrate decision support systems in e-manufacturing. Specifically, the proposed architecture offers a homogeneous PM information exchange model that can be applied through decision support in an e-manufacturing environment. Its application improves the necessary interoperability in decision-making data processing tasks. It comprises three sub-systems: a data model, an object model and a Web framework composed of a PM information platform and a PM Web services architecture. The data model and the object model are based on developing all the information required to define and acquire the different indicators required by PM processes. The PM information platform uses XML and B2MML technologies to structure a new set of performance measurement exchange message schemas (PM-XML). This PM information platform is complemented by a PM Web services architecture that uses these schemas to integrate the coding, decoding, translation and assessment processes of the key performance indicators (KPIs). These services perform all the transactions that enable the source data to be transformed into smart data that can be used in the decision-making processes. 
A practical example of data exchange for measurement processes in the area of equipment maintenance is shown to demonstrate the utility of the architecture.
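The encode/decode step of such a KPI exchange can be sketched as follows; the XML element names are hypothetical placeholders chosen for illustration and do not reproduce the PM-XML/B2MML schemas defined in the thesis.

```python
# Hedged sketch: building and parsing a small KPI exchange message. The element
# names below are hypothetical placeholders, not the actual PM-XML/B2MML schema.

import xml.etree.ElementTree as ET

def encode_kpi(name: str, value: float, unit: str, equipment: str) -> bytes:
    msg = ET.Element("KPIMessage")                      # hypothetical root element
    kpi = ET.SubElement(msg, "KPI", attrib={"name": name})
    ET.SubElement(kpi, "Value").text = str(value)
    ET.SubElement(kpi, "Unit").text = unit
    ET.SubElement(kpi, "Equipment").text = equipment
    return ET.tostring(msg, encoding="utf-8")

def decode_kpi(payload: bytes) -> dict:
    root = ET.fromstring(payload)
    kpi = root.find("KPI")
    return {
        "name": kpi.get("name"),
        "value": float(kpi.findtext("Value")),
        "unit": kpi.findtext("Unit"),
        "equipment": kpi.findtext("Equipment"),
    }

if __name__ == "__main__":
    wire = encode_kpi("MTBF", 312.5, "h", "press-07")
    print(decode_kpi(wire))   # a Web service would translate/assess this on arrival
```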
Abstract:
We present the design and implementation of the and-parallel component of ACE. ACE is a computational model for the full Prolog language that simultaneously exploits both or-parallelism and independent and-parallelism. A high performance implementation of the ACE model has been realized and its performance reported in this paper. We discuss how some of the standard problems which appear when implementing and-parallel systems are solved in ACE. We then propose a number of optimizations aimed at reducing the overheads and the increased memory consumption which occur in such systems when using previously proposed solutions. Finally, we present results from an implementation of ACE which includes the optimizations proposed. The results show that ACE exploits and-parallelism with high efficiency and high speedups. Furthermore, they also show that the proposed optimizations, which are applicable to many other and-parallel systems, significantly decrease memory consumption and increase speedups and absolute performance both in forwards execution and during backtracking.
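Independent and-parallelism, the form of parallelism exploited by ACE, can be conveyed with a small stand-in example: two goals that share no unbound variables are evaluated in parallel and their results combined. The Python sketch below is only an analogy for the execution model, not the ACE engine.

```python
# Conceptual sketch (Python stand-in, not the ACE engine): independent
# and-parallelism runs conjunctive goals that share no unbound variables in
# parallel, then combines their answers; dependent goals stay sequential.

from concurrent.futures import ProcessPoolExecutor

def goal_fib(n):                 # two "goals" with no shared variables
    return n if n < 2 else goal_fib(n - 1) + goal_fib(n - 2)

def goal_primes(limit):
    return [p for p in range(2, limit) if all(p % d for d in range(2, int(p**0.5) + 1))]

def run_independent_conjunction():
    # fib(25) , primes(10_000): independent, so evaluate both goals in parallel.
    with ProcessPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(goal_fib, 25)
        f2 = pool.submit(goal_primes, 10_000)
        return f1.result(), len(f2.result())

if __name__ == "__main__":
    print(run_independent_conjunction())
```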
Abstract:
Proof-carrying code (PCC) is a general methodology for certifying that the execution of untrusted mobile code is safe. The basic idea is that the code supplier attaches a certificate to the mobile code, which the consumer checks in order to ensure that the code is indeed safe. The potential benefit is that the consumer's task is reduced from the level of proving to the level of checking. Recently, the abstract interpretation techniques developed in logic programming have been proposed as a basis for PCC. This extended abstract reports on experiments which illustrate several issues involved in abstract interpretation-based certification. First, we describe the implementation of our system in the context of CiaoPP: the preprocessor of the Ciao multi-paradigm programming system. Then, by means of some experiments, we show how code certification is aided in the implementation of the framework. Finally, we discuss the application of our method within the area of pervasive systems. The certificate is originally a proof in first-order logic of certain verification conditions, and the checking process involves ensuring that the certificate is indeed a valid first-order proof. The main practical difficulty of PCC techniques is in generating safety certificates which at the same time: i) allow expressing interesting safety properties, ii) can be generated automatically and, iii) are easy and efficient to check. In [1], the abstract interpretation techniques [5] developed in logic programming are proposed as a basis for PCC. They offer a number of advantages for dealing with the aforementioned issues. In particular, the expressiveness of existing abstract domains will be implicitly available in abstract interpretation-based code certification to define a wide range of safety properties. Furthermore, the approach inherits the automation and inference power of the abstract interpretation engines used in (Constraint) Logic Programming, (C)LP.
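The division of labour described above (an expensive analysis on the supplier side, a cheap check on the consumer side) can be conveyed with a toy example: the supplier ships the code together with a claimed abstraction, and the consumer merely verifies that the abstraction is consistent with the code and implies the safety policy. The domain (integer ranges), the toy program format and the policy are all invented for illustration; this is not CiaoPP.

```python
# Toy illustration of the producer/checker split (not CiaoPP's analyser): the
# supplier ships code together with a claimed abstraction (here, value ranges),
# and the consumer only *checks* that the abstraction is consistent and implies
# the safety policy, which is cheaper than inferring it from scratch.

CODE = [
    ("x", "range", (0, 10)),
    ("y", "range", (0, 10)),
    ("z", "add",   ("x", "y")),      # z = x + y
]
CERTIFICATE = {"x": (0, 10), "y": (0, 10), "z": (0, 20)}   # supplier-provided ranges
POLICY_MAX = 100                                           # safety: every value <= 100

def check(code, cert) -> bool:
    """Single pass: verify the certificate is a fixpoint and satisfies the policy."""
    for name, op, arg in code:
        if op == "range":
            lo, hi = arg
        else:                                   # "add": interval addition
            a, b = arg
            lo, hi = cert[a][0] + cert[b][0], cert[a][1] + cert[b][1]
        claimed_lo, claimed_hi = cert[name]
        if not (claimed_lo <= lo and hi <= claimed_hi) or claimed_hi > POLICY_MAX:
            return False
    return True

print("certificate accepted:", check(CODE, CERTIFICATE))   # True
```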
Abstract:
This paper describes a model of persistence in (C)LP languages and two different and practically very useful ways to implement this model in current systems. The fundamental idea is that persistence is a characteristic of certain dynamic predicates (i.e., those which encapsulate state). The main effect of declaring a predicate persistent is that the dynamic changes made to such predicates persist from one execution to the next one. After proposing a syntax for declaring persistent predicates, a simple, file-based implementation of the concept is presented and some examples shown. An additional implementation is presented which stores persistent predicates in an external database. The abstraction of the concept of persistence from its implementation allows developing applications which can store their persistent predicates alternatively in files or databases with only a few simple changes to a declaration stating the location and modality used for persistent storage. The paper presents the model, the implementation approach in both the cases of using files and relational databases, a number of optimizations of the process (using information obtained from static global analysis and goal clustering), and performance results from an implementation of these ideas.
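The core idea, state whose updates survive from one execution to the next and whose storage medium is chosen by a single declaration, can be mimicked outside Prolog. The sketch below is a Python analogue (not the Ciao implementation) in which the same assert/query code works unchanged over a file or an SQLite backend.

```python
# Illustrative analogue in Python (not the Ciao implementation): a "persistent
# predicate" behaves like an ordinary mutable fact store, but its updates survive
# from one execution to the next; a single declaration selects file or SQLite
# storage without touching the code that asserts or queries facts.

import json, sqlite3, os

class PersistentFacts:
    def __init__(self, name: str, backend: str = "file", location: str = "."):
        self.name, self.backend = name, backend
        if backend == "file":
            self.path = os.path.join(location, f"{name}.jsonl")
        else:                                              # "db"
            self.conn = sqlite3.connect(os.path.join(location, f"{name}.sqlite"))
            self.conn.execute("CREATE TABLE IF NOT EXISTS facts (arg TEXT)")

    def assertz(self, fact: str) -> None:                  # add a fact, persistently
        if self.backend == "file":
            with open(self.path, "a") as f:
                f.write(json.dumps(fact) + "\n")
        else:
            self.conn.execute("INSERT INTO facts VALUES (?)", (fact,))
            self.conn.commit()

    def facts(self) -> list:
        if self.backend == "file":
            if not os.path.exists(self.path):
                return []
            with open(self.path) as f:
                return [json.loads(line) for line in f]
        return [row[0] for row in self.conn.execute("SELECT arg FROM facts")]

# Changing only this "declaration" switches the storage medium:
visited = PersistentFacts("visited", backend="file")       # or backend="db"
visited.assertz("node_42")
print(visited.facts())      # facts accumulate across separate runs of the program
```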
Abstract:
Incorporating the possibility of attaching attributes to variables in a logic programming system has been shown to allow the addition of general constraint solving capabilities to it. This approach is very attractive in that, by adding a few primitives, any logic programming system can be turned into a generic constraint logic programming system in which constraint solving can be user-defined, and at source level - an extreme example of the "glass box" approach. In this paper we propose a different and novel use for the concept of attributed variables: developing a generic parallel/concurrent (constraint) logic programming system, using the same "glass box" flavor. We argue that a system which implements attributed variables and a few additional primitives can be easily customized at source level to implement many of the languages and execution models of parallelism and concurrency currently proposed, in both shared memory and distributed systems. We illustrate this through examples and report on an implementation of our ideas.
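A rough analogue of the attributed-variables mechanism can be written in a few lines: a variable carries an attribute and a user-defined hook that is consulted when the variable is bound, which is enough to encode, for instance, a finite-domain membership check at source level. The sketch below is an illustration of the concept only, not a Prolog engine.

```python
# Rough Python analogue of the attributed-variables idea (not an actual Prolog
# engine): a variable can carry an attribute, and a user-defined hook runs when
# the variable is bound, which is enough to encode simple constraint checking.

class AttrVar:
    def __init__(self):
        self.value = None          # unbound
        self.attribute = None      # e.g. a set of still-allowed values
        self.hook = None           # called as hook(attribute, new_value) on binding

    def put_attr(self, attribute, hook):
        self.attribute, self.hook = attribute, hook

    def bind(self, value):
        """Binding to a concrete term triggers the attribute hook first."""
        if self.hook is not None and not self.hook(self.attribute, value):
            raise ValueError(f"binding {value!r} violates the variable's constraint")
        self.value = value

# User-level "constraint solver": the attribute is a finite domain.
def in_domain(domain, value):
    return value in domain

x = AttrVar()
x.put_attr({1, 2, 3}, in_domain)
x.bind(2)            # succeeds
try:
    y = AttrVar(); y.put_attr({1, 2, 3}, in_domain); y.bind(7)
except ValueError as err:
    print(err)       # binding 7 violates the variable's constraint
```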