825 results for FAULT TOLERANCE
Abstract:
The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization, where there is no central control and all nodes should be able not only to request services but also to provide them to other peers. While on one hand such a high level of decentralization may lead to interesting properties like scalability and fault tolerance, on the other hand it introduces many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase its own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given that P2P systems are based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties at system scale is to obtain them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to address the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash Equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, in which each peer periodically tries to copy another peer that is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originated in computational sociology; the main idea behind the algorithm is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the cooperation formation point of view. The final step is to apply our results to more realistic scenarios. We focused our efforts on studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but also because it has many points in common with the SLAC and SLACER algorithms, ranging from its game-theoretical inspiration (a tit-for-tat-like mechanism) to its swarm topology.
We found fairness, understood as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent, called BitFair, which has been evaluated through simulation and has shown its ability to enforce fairness and to counter free-riding and cheating nodes.
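The copy-and-rewire rule sketched below is only an illustration of the evolutionary update the abstract describes (a worse-performing peer imitating a better-performing one, with occasional mutation); the class and field names such as Peer, utility and MUTATION are illustrative and not taken from the SLAC/SLACER implementations.

    import java.util.*;

    // Illustrative sketch of a SLAC-style update rule (not the thesis code):
    // a low-performing node copies the strategy and neighbour links of a
    // better-performing node, with a small mutation probability.
    class Peer {
        boolean cooperate;                  // current strategy in the Prisoner's Dilemma
        double utility;                     // accumulated payoff
        Set<Peer> neighbours = new HashSet<>();
    }

    class SlacUpdate {
        private static final Random RND = new Random();
        private static final double MUTATION = 0.01;

        static void update(Peer self, List<Peer> population) {
            Peer other = population.get(RND.nextInt(population.size()));
            if (other == self || other.utility <= self.utility) return;
            // copy the strategy and links of the better-performing peer
            self.cooperate = other.cooperate;
            self.neighbours.clear();
            self.neighbours.addAll(other.neighbours);
            self.neighbours.add(other);
            self.utility = 0;                          // start afresh after copying
            if (RND.nextDouble() < MUTATION) {         // occasional mutation keeps diversity
                self.cooperate = !self.cooperate;
            }
        }
    }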
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called SYSTEM-ON-CHIP (SOC) or MULTI-PROCESSOR SYSTEM-ON-CHIP (MPSOC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. NETWORKS-ON-CHIP (NOCS) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize the NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
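As a generic illustration of the packet-switching ideas mentioned above, the sketch below shows dimension-ordered (XY) routing, a routing scheme commonly used in 2D-mesh NoCs; it is not claimed to be the routing used by ×pipes, and the Port names and coordinate convention are illustrative.

    // Generic illustration of dimension-ordered (XY) routing on a 2D mesh:
    // a packet first travels along the X dimension until it reaches the
    // destination column, then along Y; increasing y is labelled NORTH here.
    enum Port { LOCAL, EAST, WEST, NORTH, SOUTH }

    class XYRouter {
        // Decide the output port at router (x, y) for a packet addressed to (dstX, dstY).
        static Port route(int x, int y, int dstX, int dstY) {
            if (dstX > x) return Port.EAST;
            if (dstX < x) return Port.WEST;
            if (dstY > y) return Port.NORTH;
            if (dstY < y) return Port.SOUTH;
            return Port.LOCAL;   // packet has reached its destination tile
        }
    }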
Abstract:
In recent years, Web applications have been taking on an increasingly important role in everyone's life. Whereas until a few years ago we were used to working almost exclusively with "native" applications, executed entirely on our Personal Computer, today many users use their various devices almost exclusively to access Web applications. Web applications have made it possible to create the so-called social networks, such as Facebook, which is enjoying enormous success all over the world and has revolutionized the way many people communicate. Moreover, many more traditional applications, such as office suites, have been turned into Web applications like Google Docs, which add, for example, the possibility of having several people work on the same document at the same time. Web applications are therefore playing an ever more important role, and consequently it is becoming essential to be able to build Web applications that can compete with native applications, i.e., that can carry out all the tasks traditionally performed by computers. In this thesis we therefore set out to analyze the various ways in which Web applications can be improved, both in terms of the functions they can perform and in terms of scalability. Since modern Web applications increasingly need to perform computations in a concurrent and distributed way, we analyze a computational model that is particularly well suited to designing this kind of software: the Actor model. We then examine, as a case study of a framework for building advanced Web applications, the Play framework: it is based on the Akka Actor-programming platform and makes it possible to build extremely powerful and scalable Web applications in a simple way. Since modern Web applications must be designed from the start to meet certain scalability and fault tolerance requirements, we address the problem of how to build Web applications ready to be run on Cloud Computing platforms. In particular, we show how to publish a Web application based on the Play framework on Heroku, a PaaS Cloud Computing service.
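A minimal sketch of the Actor model using Akka's classic Java API, the platform the Play framework builds on: the actor below owns a counter and changes it only in response to messages, so no explicit locking is needed even with many concurrent senders. The actor and message names are illustrative, not taken from the thesis.

    import akka.actor.AbstractActor;
    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;

    // An actor encapsulates its state (count) and processes one message at a time.
    public class CounterActor extends AbstractActor {
        private int count = 0;

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .matchEquals("increment", msg -> count++)
                    .matchEquals("report", msg ->
                            getSender().tell(count, getSelf()))   // reply with the current value
                    .build();
        }

        public static void main(String[] args) {
            ActorSystem system = ActorSystem.create("web-app");
            ActorRef counter = system.actorOf(Props.create(CounterActor.class), "counter");
            counter.tell("increment", ActorRef.noSender());       // asynchronous, fire-and-forget
        }
    }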
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services that have the potential of improving people's quality of life in a variety of cross-cutting domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and we present Quasit, its prototype implementation, offering a scalable and extensible platform that can be used by researchers to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
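Since the abstract does not detail how LAAR works internally, the sketch below only illustrates the general idea of partial fault tolerance: replicate the operators whose failure would hurt application quality the most, up to a resource budget, and accept weaker guarantees for the rest. All names and the greedy policy are hypothetical.

    import java.util.*;

    // Hypothetical illustration of partial fault tolerance (not the LAAR algorithm):
    // given per-operator replication costs and the quality loss suffered if an
    // operator fails unreplicated, replicate the most critical operators first
    // until a resource budget is exhausted.
    class Operator {
        final String name;
        final double replicationCost;   // extra resources needed for an active replica
        final double qualityLoss;       // quality penalty if it fails without a replica
        Operator(String name, double cost, double loss) {
            this.name = name; this.replicationCost = cost; this.qualityLoss = loss;
        }
    }

    class PartialReplicationPlanner {
        static Set<Operator> plan(List<Operator> ops, double budget) {
            List<Operator> sorted = new ArrayList<>(ops);
            // greedy: highest quality loss per unit of replication cost first
            sorted.sort(Comparator.comparingDouble(
                    (Operator o) -> o.qualityLoss / o.replicationCost).reversed());
            Set<Operator> replicated = new HashSet<>();
            double spent = 0;
            for (Operator o : sorted) {
                if (spent + o.replicationCost <= budget) {
                    replicated.add(o);
                    spent += o.replicationCost;
                }
            }
            return replicated;   // operators left out rely on weaker, cheaper guarantees
        }
    }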
Abstract:
Big Data has driven new technologies that improve quality of life by combining heterogeneous representations of data across various disciplines. What is needed, therefore, is a system capable of processing data in real time. Such a system is called the speed layer; as the name suggests, it is designed to guarantee that new data are reflected by the query functions as quickly as they arrive. This thesis work concerns the realization of an architecture modeled on the Speed Layer of the Lambda Architecture, capable of receiving weather data published on an MQTT queue, processing it in real time, and storing it in a database to make it available to Data Scientists. The programming environment used is Java; the project was deployed on the Hortonworks platform, which is based on the Hadoop framework and on the Storm computation system, which makes it possible to work with unbounded data streams and to perform processing in real time. Unlike traditional stream-processing approaches based on networks of queues and workers, Storm is fault-tolerant and scalable. The effort devoted to its development by the Apache Software Foundation, its growing use in production by major companies, and the support offered by cloud hosting providers are all signs that this technology will become increasingly established as a solution for managing distributed, event-oriented computations. To store and analyze these volumes of data, which have always posed a problem that traditional databases could not overcome, a non-relational database was used: HBase.
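A sketch of how such a speed layer can be wired as a Storm topology in Java. MqttWeatherSpout and HBaseWriterBolt are placeholders for the project's actual MQTT spout and HBase writer (their real class names are not given in the abstract); only the wiring and the parse bolt below use the standard Storm API, and the field names are illustrative.

    import org.apache.storm.Config;
    import org.apache.storm.StormSubmitter;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    public class SpeedLayerTopology {

        // Extracts the station id and temperature from a raw reading.
        public static class WeatherParseBolt extends BaseBasicBolt {
            @Override
            public void execute(Tuple input, BasicOutputCollector collector) {
                String[] parts = input.getStringByField("raw").split(",");
                collector.emit(new Values(parts[0], Double.parseDouble(parts[1])));
            }
            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                declarer.declare(new Fields("stationId", "temperature"));
            }
        }

        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("mqtt-weather", new MqttWeatherSpout(), 1);    // placeholder spout
            builder.setBolt("parse", new WeatherParseBolt(), 4)
                   .shuffleGrouping("mqtt-weather");                         // spread raw readings
            builder.setBolt("hbase-writer", new HBaseWriterBolt(), 2)        // placeholder bolt
                   .fieldsGrouping("parse", new Fields("stationId"));
            Config conf = new Config();
            conf.setNumWorkers(2);
            StormSubmitter.submitTopology("speed-layer", conf, builder.createTopology());
        }
    }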
Abstract:
Self-stabilization is a property of a distributed system such that, regardless of the legitimacy of its current state, the system behavior shall eventually reach a legitimate state and shall remain legitimate thereafter. The elegance of self-stabilization stems from the fact that it endows distributed systems with a strong fault tolerance property against arbitrary state perturbations. The difficulty of designing and reasoning about self-stabilization has been witnessed by many researchers; most of the existing techniques for the verification and design of self-stabilization are either brute-force or adopt manual approaches that are not amenable to automation. In this dissertation, we first investigate the possibility of automatically designing self-stabilization through global state space exploration. In particular, we develop a set of heuristics for automating the addition of recovery actions to distributed protocols on various network topologies. Our heuristics equally exploit the computational power of a single workstation and the available parallelism on computer clusters. We obtain existing and new stabilizing solutions for classical protocols like maximal matching, ring coloring, mutual exclusion, leader election and agreement. Second, we consider a foundation for local reasoning about self-stabilization; i.e., we study the global behavior of the distributed system by exploring the state space of just one of its components. It turns out that local reasoning about deadlocks and livelocks is possible for an interesting class of protocols whose proof of stabilization is otherwise complex. In particular, we provide necessary and sufficient conditions – verifiable in the local state space of every process – for global deadlock- and livelock-freedom of protocols on ring topologies. Local reasoning potentially circumvents two fundamental problems that complicate the automated design and verification of distributed protocols: (1) state explosion and (2) partial state information. Moreover, local proofs of convergence are independent of the number of processes in the network, thereby enabling our assertions about deadlocks and livelocks to apply to rings of arbitrary sizes without worrying about state explosion.
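As a concrete, classical illustration of self-stabilization on a ring (not taken from the dissertation), the sketch below simulates Dijkstra's K-state token ring: from an arbitrary initial state, repeated application of the two local rules converges to a legitimate state in which exactly one token circulates.

    import java.util.Arrays;
    import java.util.Random;

    // Dijkstra's K-state self-stabilizing token ring, simulated with a simple
    // sequential sweep: process 0 has the token when x[0] == x[n-1]; any other
    // process i has it when x[i] != x[i-1].
    public class DijkstraTokenRing {
        public static void main(String[] args) {
            int n = 5, k = n + 1;                 // K > N guarantees convergence
            int[] x = new int[n];
            Random rnd = new Random();
            for (int i = 0; i < n; i++) x[i] = rnd.nextInt(k);   // arbitrary (perturbed) start

            for (int step = 0; step < 50; step++) {
                for (int i = 0; i < n; i++) {
                    if (i == 0 && x[0] == x[n - 1]) {
                        x[0] = (x[0] + 1) % k;     // process 0 holds the token: advance
                    } else if (i > 0 && x[i] != x[i - 1]) {
                        x[i] = x[i - 1];           // process i holds the token: copy its left neighbour
                    }
                }
                System.out.println(Arrays.toString(x));   // eventually only one rule fires per sweep
            }
        }
    }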
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of the communications have the possibility of transferring data at high speed. The concept of distributed systems emerged to describe systems whose different parts are executed on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification under the Java Community Process (JSR-302) is being developed. Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, such as predictability and reliability in the timing behavior and in resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include two phases, non-functional parameters, and message size optimizations.
Although serialization is one of the fundamental operations to ensure proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out, with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
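To make the RMI-based invocation model concrete, the sketch below uses only the standard java.rmi API; the remote interface name and the flight-data example are illustrative, and the middleware-specific real-time attributes and two-phase resource allocation described above are not shown.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Plain java.rmi sketch of the remote-invocation model the middleware builds on.
    interface FlightDataService extends Remote {
        double readAltitude() throws RemoteException;
    }

    class FlightDataServer implements FlightDataService {
        public double readAltitude() { return 10500.0; }   // stub sensor reading

        public static void main(String[] args) throws Exception {
            FlightDataService stub = (FlightDataService)
                    UnicastRemoteObject.exportObject(new FlightDataServer(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("FlightDataService", stub);     // clients look this name up
        }
    }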
Abstract:
Complexity has always been one of the most important issues in distributed computing. From the first clusters to grid and now cloud computing, dealing correctly and efficiently with system complexity is the key to taking technology a step further. In this sense, global behavior modeling is an innovative methodology aimed at understanding grid behavior. The main objective of this methodology is to synthesize the grid's vast, heterogeneous nature into a simple but powerful behavior model, represented in the form of a single, abstract entity with a global state. Global behavior modeling has proved to be very useful in effectively managing grid complexity but, in many cases, deeper knowledge is needed. It generates a descriptive model that could be greatly improved if extended not only to explain behavior, but also to predict it. In this paper we present a prediction methodology whose objective is to define the techniques needed to create global behavior prediction models for grid systems. This global behavior prediction can benefit grid management, especially in areas such as fault tolerance or job scheduling. The paper presents experimental results obtained in real scenarios in order to validate this approach.
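The paper's actual prediction models are not described here, so the sketch below is only a generic example of predicting a scalar global-state metric (for instance, aggregate grid load) with exponential smoothing; the class name and the smoothing approach are assumptions made for illustration.

    // Generic illustration only: exponentially smoothed one-step-ahead forecast
    // of a scalar global-state metric observed at fixed intervals.
    class GlobalStatePredictor {
        private final double alpha;     // smoothing factor in (0, 1]
        private double estimate;
        private boolean initialized = false;

        GlobalStatePredictor(double alpha) { this.alpha = alpha; }

        double observeAndPredict(double observedLoad) {
            estimate = initialized
                    ? alpha * observedLoad + (1 - alpha) * estimate
                    : observedLoad;
            initialized = true;
            return estimate;            // forecast for the next interval
        }
    }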
Abstract:
In recent years, applications in domains such as telecommunications, network security or large-scale sensor networks have shown the limits of the traditional store-then-process paradigm. In this context, Stream Processing Engines emerged as a candidate solution for all these applications demanding high processing capacity with low processing latency guarantees. With Stream Processing Engines, data streams are not persisted but rather processed on the fly, producing results continuously. Current Stream Processing Engines, either centralized or distributed, do not scale with the input load due to single-node bottlenecks. Moreover, they are based on static configurations that lead to either under- or over-provisioning. This Ph.D. thesis discusses StreamCloud, an elastic parallel-distributed stream processing engine that enables the processing of large data stream volumes. StreamCloud minimizes the distribution and parallelization overhead by introducing novel techniques that split queries into parallel subqueries and allocate them to independent sets of nodes. Moreover, StreamCloud's elastic and dynamic load balancing protocols enable effective adjustment of resources depending on the incoming load. Together with the parallelization and elasticity techniques, StreamCloud defines a novel fault tolerance protocol that introduces minimal overhead while providing fast recovery. StreamCloud has been fully implemented and evaluated using several real-world applications, such as fraud detection and network analysis applications. The evaluation, conducted on a cluster with more than 300 cores, demonstrates the high scalability of StreamCloud and the effectiveness of its elasticity and fault tolerance mechanisms.
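The sketch below illustrates, in simplified form, the key idea behind splitting a query into parallel subqueries: hash-partitioning tuples on their key so that each independent set of nodes processes a disjoint share of the stream. It is not StreamCloud code, and the Tuple2 and StreamPartitioner names are illustrative.

    import java.util.*;

    // Hash-partitioning of a stream across independent subclusters.
    class Tuple2 {
        final String key; final double value;
        Tuple2(String key, double value) { this.key = key; this.value = value; }
    }

    class StreamPartitioner {
        private final int numSubclusters;
        StreamPartitioner(int numSubclusters) { this.numSubclusters = numSubclusters; }

        int subclusterFor(Tuple2 t) {
            // the same key always goes to the same subcluster, so stateful operators
            // (e.g. per-card fraud counters) see locally all the tuples they need
            return Math.floorMod(t.key.hashCode(), numSubclusters);
        }

        Map<Integer, List<Tuple2>> partition(List<Tuple2> batch) {
            Map<Integer, List<Tuple2>> shares = new HashMap<>();
            for (Tuple2 t : batch) {
                shares.computeIfAbsent(subclusterFor(t), s -> new ArrayList<>()).add(t);
            }
            return shares;
        }
    }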
Abstract:
The popularity of the MapReduce programming model has increased interest in the research community in its improvement. Among the possible directions, fault tolerance, and concretely the failure detection issue, seems to be a crucial one that until now has not reached a satisfactory level. Motivated by this, I decided to devote my main research during this period to a prototype system architecture of the MapReduce framework with a new failure detection service, comprising both an analytical (theoretical) part and an implementation part. I am confident that this work should lead the way for further contributions to failure detection in NoSQL application frameworks and in cloud storage systems in general.
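As a minimal illustration of the failure detection problem (not the detection service proposed in this work), the sketch below shows a timeout-based heartbeat detector: a worker that has not reported within the timeout is suspected of having failed.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Generic heartbeat-based failure detector.
    class HeartbeatFailureDetector {
        private final long timeoutMillis;
        private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();

        HeartbeatFailureDetector(long timeoutMillis) { this.timeoutMillis = timeoutMillis; }

        void onHeartbeat(String workerId) {
            lastHeartbeat.put(workerId, System.currentTimeMillis());
        }

        boolean isSuspected(String workerId) {
            Long last = lastHeartbeat.get(workerId);
            // a worker is suspected if it never reported or has been silent for too long
            return last == null || System.currentTimeMillis() - last > timeoutMillis;
        }
    }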
Abstract:
The optimization of system parameters, such as power dissipation, the amount of hardware resources and the memory footprint, has always been a main concern when dealing with the design of resource-constrained embedded systems. This situation is even more demanding nowadays. Embedded systems can no longer be considered only as specific-purpose computers, designed for a particular functionality that remains unchanged during their lifetime. Instead, embedded systems are now required to deal with more demanding and complex functions, such as multimedia data processing and high-throughput connectivity. In addition, system operation may depend on external data, the user requirements or internal variables of the system, such as the battery lifetime. All these conditions may vary at run-time, leading to adaptive scenarios.
As a consequence of both the growing computational complexity and the existence of dynamic requirements, dynamic resource management techniques for embedded systems are needed. Software is inherently flexible, but it cannot meet the computing power offered by hardware solutions. Therefore, reconfigurable hardware emerges as a suitable technology to deal with the run-time variable requirements of complex embedded systems. Adaptive hardware requires the use of reconfigurable devices, where its functionality can be modified on demand. In this thesis, Field Programmable Gate Arrays (FPGAs) have been selected as the most appropriate commercial technology existing nowadays to implement adaptive hardware systems. There are different ways of exploiting reconfigurability in reconfigurable devices. Among them is dynamic and partial reconfiguration. This is a technique which consists in substituting part of the FPGA logic on demand, while the rest of the device continues working. The strategy followed in this thesis is to exploit the dynamic and partial reconfiguration of commercial FPGAs to deal with the flexibility and complexity demands of state-of-the-art embedded systems. The proposal of this thesis to deal with run-time variable system conditions is the use of spatially scalable processing hardware IP cores, which are able to adapt their functionality or performance at run-time, trading them off with the amount of logic resources they occupy in the device. This is referred to as a scalable footprint in the context of this thesis. The distinguishing characteristic of the proposed cores is that they rely on highly parallel, modular and regular architectures, arranged in one or two dimensions. These architectures can be scaled by means of the addition or removal of the composing blocks. This strategy avoids implementing a full version of the core for each possible size, with the corresponding benefits in terms of scaling and adaptation time, as well as bitstream storage memory requirements. Instead of providing specific-purpose architectures, generic architectural templates, which can be tuned to solve different problems, are proposed in this thesis. Architectures following both systolic and wavefront templates have been selected. Together with the proposed scalable architectural templates, other issues needed to ensure the proper design and operation of the scalable cores, such as the device reconfiguration control, the run-time management of the architecture and the implementation techniques have been also addressed in this thesis. With regard to the implementation of dynamically reconfigurable architectures, device dependent low-level details are addressed. Some of the aspects covered in this thesis are the area constrained routing for reconfigurable modules, or an inter-module communication strategy which does not introduce either extra delay or logic overhead. The system implementation, from the hardware description to the device configuration bitstream, has been fully automated by modifying the netlists corresponding to each of the system modules, which are previously generated using the vendor tools. This modification is therefore envisaged as a post-processing step. Based on these implementation proposals, a design tool called DREAMS (Dynamically Reconfigurable Embedded and Modular Systems) has been created, including a graphic user interface. 
The tool has specific features to cope with modular and regular architectures, including the support for module relocation and the inter-module communications scheme based on the symmetry of the architecture. The core of the tool is a custom router, which has also been exploited in this thesis to obtain symmetrically routed nets, with the aim of enhancing the protection of critical reconfigurable circuits against side-channel attacks. This is achieved by duplicating the logic with an exactly equal routing. In order to control the reconfiguration process of the FPGA, a Reconfiguration Engine suited to the specific requirements set by the proposed architectures was also developed. In addition to controlling the reconfiguration port, the Reconfiguration Engine has been enhanced with an online relocation ability, which allows a single configuration bitstream to be employed for all the positions where the module may be placed in the device. Unlike existing relocation solutions, which are based on bitstream parsers, the proposed approach is based on the online composition of bitstreams. This strategy increases the speed of the process, while the length of partial bitstreams is also reduced. The height of the reconfigurable modules can be lower than the height of a clock region. The Reconfiguration Engine manages the merging of the new configuration frames with those already present within each clock region. The process of scaling the hardware cores up and down also benefits from this technique. A direct link to an external memory where partial bitstreams can be stored has also been implemented. In order to accelerate the reconfiguration process, the ICAP has been overclocked beyond the speed reported by the manufacturer. In the case of Virtex-5, even though the maximum frequency of the ICAP is reported to be 100 MHz, valid operation at 250 MHz has been achieved, including the online relocation process. Portability of the reconfiguration solution to today's and, probably, future FPGAs has also been considered. The reconfiguration engine can also be used to inject faults into real hardware devices, making it possible to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by introducing partial bitstreams intentionally modified to provide erroneous functionality. To prove the validity and the benefits offered by the proposed architectures, two demonstration application lines have been envisaged. First, scalable architectures have been employed to develop an evolvable hardware platform with adaptability, fault tolerance and scalability properties. Second, they have been used to implement a scalable deblocking filter suited to scalable video coding. Evolvable Hardware is the use of evolutionary algorithms to design hardware in an autonomous way, exploiting the flexibility offered by reconfigurable devices. In this case, the processing elements composing the architecture are selected from a presynthesized library of processing elements, according to the decisions taken by the algorithm, instead of being decided at design time. This way, the configuration of the array may change as run-time environmental conditions do, achieving autonomous control of the dynamic reconfiguration process. Thus, the self-optimization property is added to the native self-configurability of the dynamically scalable architectures. In addition, evolvable hardware adaptability inherently offers self-healing features.
The proposal has proved to be fault-tolerant, since it is able to self-recover from both transient and cumulative permanent faults. The proposed evolvable architecture has been used to implement noise-removal image filters. Scalability has also been exploited in this application. Scalable evolvable hardware architectures allow the autonomous adaptation of the processing cores to a fluctuating amount of resources available in the system. Thus, they constitute an example of the dynamic quality scalability tackled in this thesis. Two variants have been proposed. The first one consists of a single dynamically scalable evolvable core, and the second one contains a variable number of processing cores. Scalable video is a flexible approach to video compression, which offers scalability at different levels. Unlike non-scalable codecs, a scalable video bitstream can be decoded with different levels of quality, spatial or temporal resolution, by discarding the undesired information. The interest in this technology has been fostered by the development of the Scalable Video Coding (SVC) standard, as an extension of H.264/AVC. In order to exploit all the flexibility offered by the standard, it is necessary to adapt the characteristics of the decoder to the requirements of each client at run-time. The use of dynamically scalable architectures is proposed in this thesis with this aim. The deblocking filter algorithm is responsible for improving the visual perception of a reconstructed image, by smoothing the blocking artifacts generated in the encoding loop. This is one of the most computationally intensive tasks of the standard, and furthermore, it is highly dependent on the scalability level selected in the decoder. Therefore, the deblocking filter has been selected as a proof of concept of the implementation of dynamically scalable architectures for video compression. The proposed architecture allows the run-time addition or removal of computational units working in parallel to change its level of parallelism, following a wavefront computational pattern. The scalable architecture is offered together with a scalable parallelization strategy at the macroblock level, such that when the size of the architecture changes, the macroblock filtering order is modified accordingly. The proposed pattern is based on the division of the macroblock processing into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock. The main contributions of this thesis are:
- The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores, for data-intensive applications with flexibility requirements.
- The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores, with a scalable footprint. The proposal consists of generic architectural templates, which can be tuned to solve different computational problems.
- A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures.
- An inter-module communication strategy, which does not introduce delay or area overhead, named Virtual Borders.
- A custom and flexible router to solve the routing conflicts, as well as the inter-module communication problems, appearing during the design of DPR systems.
- An algorithm addressing the optimization of systems composed of multiple scalable cores, whose sizes can be decided individually, in order to optimize the overall system parameters. It is based on a model known as the multi-dimensional multiple-choice Knapsack problem.
- A reconfiguration engine tailored to the requirements of highly regular and modular architectures. It combines a high reconfiguration throughput with run-time module relocation capabilities, including support for reconfigurable regions smaller than a clock region and for replication in multiple positions.
- A fault injection mechanism which takes advantage of the system's reconfiguration engine, as well as of the modularity of the proposed reconfigurable architectures, to evaluate the effects of transient and permanent faults in these architectures.
- The demonstration of the possibilities of the architectures proposed in this thesis to implement evolvable hardware systems, while keeping a high processing throughput.
- The implementation of scalable evolvable hardware systems, which are able to adapt autonomously to fluctuations in the amount of resources available in the system.
- A parallelization strategy for the H.264/AVC and SVC deblocking filter, which reduces the number of macroblock cycles needed to process the whole frame.
- A dynamically scalable architecture that permits the implementation of a novel deblocking filter module, fully compliant with the H.264/AVC and SVC standards, which exploits the macroblock-level parallelism of the algorithm.
This document is organized in seven chapters. In the first one, an introduction to the technology framework of this thesis, especially focused on dynamic and partial reconfiguration, is provided. The need for the dynamically scalable processing architectures proposed in this work is also motivated in this chapter. In chapter 2, the dynamically scalable architectures are described; this description includes most of the architectural contributions of this work. The design flow tailored to the scalable architectures, together with the DREAMs tool provided to implement them, are described in chapter 3. The reconfiguration engine is described in chapter 4. The use of the proposed scalable architectures to implement evolvable hardware systems is described in chapter 5, while the scalable deblocking filter is described in chapter 6. Final conclusions of this thesis, together with the description of future work, are addressed in chapter 7.
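As a companion to the wavefront parallelization strategy of the deblocking filter summarised above, the following Python sketch illustrates how an anti-diagonal schedule of macroblocks can be regenerated for whatever number of filtering units is currently instantiated, so that the macroblock filtering order follows the size of the architecture. The dependency rule used here (left and top neighbours only) and the omission of the horizontal/vertical stage split are simplifications for illustration; this is not the exact schedule implemented in the thesis.

# Illustrative wavefront scheduling over the macroblocks of a frame for a
# configurable number of parallel filter units. The dependency rule and the
# frame dimensions are assumptions chosen only to show the idea.

def wavefront_schedule(mb_cols: int, mb_rows: int, units: int):
    """Group macroblocks into processing cycles. A macroblock on anti-diagonal
    d = x + y only depends on macroblocks of earlier anti-diagonals, so up to
    'units' macroblocks of the same anti-diagonal can be filtered in parallel."""
    schedule = []
    for d in range(mb_cols + mb_rows - 1):
        diagonal = [(x, d - x) for x in range(mb_cols) if 0 <= d - x < mb_rows]
        for i in range(0, len(diagonal), units):      # split the diagonal among the units
            schedule.append(diagonal[i:i + units])
    return schedule

if __name__ == "__main__":
    for units in (1, 2, 4):                           # scaling the architecture up
        cycles = wavefront_schedule(mb_cols=11, mb_rows=9, units=units)
        print(f"{units} unit(s): {len(cycles)} macroblock cycles")

Re-running the scheduler after a unit is added or removed gives the new filtering order, which is the kind of behaviour the scalable parallelization strategy provides at run-time.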
Resumo:
Technical systems are becoming more complex: they incorporate more advanced functions, are more integrated with other systems and operate in less controlled environments. All this implies more demanding and more uncertain conditions for control systems, which are additionally required to behave more autonomously and reliably. Autonomous adaptability is currently a challenge for control technologies. The ASys research project proposes to address it by shifting the responsibility for the system's adaptability from the engineers at design time to the system itself during operation. This thesis aims to advance the formulation and technical realisation of the ASys principles of model-based cognition and self-awareness and of run-time self-management of systems for robust autonomy. To that end, the work has focused on the capability of self-awareness, inspired by biological systems, and has explored the possibility of integrating it into the architecture of control systems. Besides self-awareness, other relevant topics have been explored: functional modelling, software modelling, pattern technology, component technology and fault tolerance. The state of the art has been analysed in the fields relevant to the issues of self-awareness and adaptability in technical systems: cognitive architectures, fault-tolerant control, and dynamic software architectures and autonomic computing. The existing ASys theoretical framework for cognitive autonomous systems has been adapted to serve as the basis for this analysis of self-awareness and adaptation and to provide conceptual support for the subsequent development of the solution. The thesis proposes a general design solution for building self-aware autonomous systems. Its central idea is the integration of a metacontroller into the control architecture of the autonomous system, capable of perceiving the functional state of the control system and, if necessary, reconfiguring it at run-time. This metacontrol solution has been formalised into four design patterns: i) the Metacontrol Pattern, which defines the integration of a metacontrol subsystem responsible for controlling the control system itself through the interface provided by its component platform; ii) the Epistemic Control Loop pattern, which defines a model-based cognitive control loop that can be applied to the design of the metacontrol; iii) the Deep Model Reflection pattern, which proposes a solution for building the executable model used by the metacontroller through a model-to-model transformation from the engineering model of the system; and, finally, iv) the Functional Metacontrol Pattern, which structures the metacontroller in two loops, one controlling the configuration of the components of the control system and another one on top of it, controlling the functions realised by that configuration of components; in this way, functional and structural concerns are decoupled. The OM Architecture and the TOMASys metamodel are the central pieces of the architectural framework developed to realise the solution composed of the previous patterns. The TOMASys metamodel has been developed to represent the structure of any autonomous system and its relation to the system's functional requirements.
The OM Architecture is a reference pattern for building a metacontroller that integrates the proposed design patterns. This metacontroller can be integrated into the architecture of any component-based control system. The key element of its operation is a TOMASys model of the control system, which the metacontroller uses to monitor it and to compute the reconfiguration actions needed to adapt it to the circumstances at each moment. An engineering process, complemented with other resources, has been elaborated to guide the application of the OM architectural framework. This OM Engineering Process defines the methodology to follow in order to build the metacontrol subsystem of an autonomous system from its functional model. The OMJava library provides an implementation of the OM metacontroller that can be integrated into the control of any autonomous system, regardless of the application domain or its implementation technology. Finally, the complete solution has been validated through the development of an autonomous mobile robot that incorporates a metacontroller based on the OM Architecture. The self-awareness and adaptation properties provided by the metacontroller have been validated in different operating scenarios of the robot, in which the system was able to overcome failures in the control system through reconfigurations orchestrated by the metacontroller. ABSTRACT Technical systems are becoming more complex, they incorporate more advanced functionalities, they are more integrated with other systems and they are deployed in less controlled environments. All this implies a more demanding and uncertain scenario for control systems, which are also required to be more autonomous and dependable. Autonomous adaptivity is a current challenge for extant control technologies. The ASys research project proposes to address it by moving the responsibility for adaptivity from the engineers at design time to the system at run-time. This thesis has intended to advance the formulation and technical reification of the ASys principles of model-based self-cognition and run-time self-management for robust autonomy. For that it has focused on the biologically inspired capability of self-awareness, and explored the possibility of embedding it into the very architecture of control systems. Besides self-awareness, other themes related to the envisioned solution have been explored: functional modeling, software modeling, pattern technology, component technology and fault tolerance. The state of the art in fields relevant to the issues of self-awareness and adaptivity has been analysed: cognitive architectures, fault-tolerant control, and software architectural reflection and autonomic computing. The extant and evolving ASys Theoretical Framework for cognitive autonomous systems has been adapted to provide a foundation for this selfhood-centred analysis and to conceptually support the subsequent development of our solution. The thesis proposes a general design solution for building self-aware autonomous systems. Its central idea is the integration of a metacontroller in the control architecture of the autonomous system, capable of perceiving the functional state of the control system and reconfiguring it if necessary at run-time.
This metacontrol solution has been formalised into four design patterns: i) the Metacontrol Pattern, which defines the integration of a metacontrol subsystem controlling the domain control system through an interface provided by its implementation component platform; ii) the Epistemic Control Loop pattern, which defines a model-based cognitive control loop that can be applied to the design of such a metacontroller; iii) the Deep Model Reflection pattern, which proposes a solution to produce the online executable model used by the metacontroller by model-to-model transformation from the engineering model; and, finally, iv) the Functional Metacontrol pattern, which proposes to structure the metacontroller in two loops, one for controlling the configuration of components of the controller, and another one on top of the former, controlling the functions being realised by that configuration; this way the functional and structural concerns become decoupled. The OM Architecture and the TOMASys metamodel are the core pieces of the architectural framework developed to reify this patterned solution. The TOMASys metamodel has been developed to represent the structure of any autonomous system and its relation to the system's functional requirements. The OM Architecture is a blueprint for building a metacontroller according to the patterns. This metacontroller can be integrated on top of any component-based control architecture. At the core of its operation lies a TOMASys model of the control system. An engineering process and accompanying assets have been constructed to complete and exploit the architectural framework. The OM Engineering Process defines the process to follow to develop the metacontrol subsystem from the functional model of the controller of the autonomous system. The OMJava library provides a domain- and application-independent implementation of an OM metacontroller that can be used in the implementation phase of OMEP. Finally, the complete solution has been validated in the development of an autonomous mobile robot that incorporates an OM metacontroller. The functional self-awareness and adaptivity properties achieved thanks to the metacontrol system have been validated in different scenarios. In these scenarios the robot was able to overcome failures in the control system thanks to reconfigurations performed by the metacontroller.
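To make the pattern composition more concrete, here is a minimal Python sketch of one iteration of a model-based metacontrol loop of the kind described above. The class names, health flags and reconfiguration rule are hypothetical placeholders chosen for illustration; they are not the TOMASys metamodel nor the OMJava API.

# Illustrative sketch of a single metacontrol loop iteration. All names below
# are assumptions made for the example, not the thesis' actual data model.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Component:
    name: str
    ok: bool = True                      # health state as perceived from the component platform

@dataclass
class Function:
    name: str
    realisations: List[List[str]]        # alternative sets of components that can realise it

@dataclass
class ControlSystemModel:                # stands in for a TOMASys-like runtime model
    components: Dict[str, Component]
    required_functions: List[Function]

def metacontrol_step(model: ControlSystemModel) -> Dict[str, List[str]]:
    """One loop iteration: perceive the functional state of the control system
    and, if a required function is no longer realised, choose an alternative
    configuration of components that restores it (the reconfiguration action)."""
    def healthy(names: List[str]) -> bool:
        return all(model.components[n].ok for n in names)

    actions: Dict[str, List[str]] = {}
    for function in model.required_functions:
        current = function.realisations[0]           # assume the first realisation is in use
        if not healthy(current):
            for alternative in function.realisations[1:]:
                if healthy(alternative):
                    actions[function.name] = alternative
                    break
    return actions

if __name__ == "__main__":
    model = ControlSystemModel(
        components={"laser": Component("laser", ok=False),
                    "camera": Component("camera"),
                    "laser_localiser": Component("laser_localiser"),
                    "visual_localiser": Component("visual_localiser")},
        required_functions=[Function("localisation",
                                     [["laser", "laser_localiser"],
                                      ["camera", "visual_localiser"]])])
    print(metacontrol_step(model))   # -> {'localisation': ['camera', 'visual_localiser']}

The toy example mirrors the kind of scenario used in the validation: when a component fails, the metacontroller selects an alternative realisation of the affected function and issues the corresponding reconfiguration.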
Resumo:
Collaborative hardening and hardware redundancy are nowadays the most interesting solutions in terms of the fault tolerance achieved and the low extra cost imposed on the project budget. Thanks to the powerful and cheap digital devices available on the market, extra processing capabilities can be used for redundant tasks, not only in early data processing (sensed data) but also in routing and interfacing.
Resumo:
Recently there has been a significant increase in electric equipment, as well as in electric power demand, in aircraft applications. This prompts the need for efficient, reliable, and low-weight converters, especially 115 VAC to 270 VDC rectifiers, because these voltages are used in power distribution. In order to obtain high efficiency in aircraft applications, where semiconductor derating is high, several semiconductors are normally used in parallel to decrease conduction losses. However, this conflicts with high reliability. To meet both goals of high efficiency and reliability, this work proposes an interleaved multi-cell rectifier system, employing several converter cells in parallel instead of parallel-connected semiconductors. In this work a 10 kW multi-cell isolated rectifier system has been designed, where each cell is composed of a buck-type rectifier and a full-bridge DC-DC converter. The implemented system exhibits 91% efficiency, high power density (10 kW / 10 kg), low THD (2.5%), and n−1 fault tolerance, which complies with military aircraft standards.
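A short back-of-the-envelope Python sketch of the sizing argument behind the n−1 fault tolerance claim: if one of the parallel cells fails, the remaining cells must still deliver the full output power, so each cell is rated above its share of the nominal load. The cell count used below is a hypothetical example; only the 10 kW rating and the 91% efficiency figure come from the abstract.

# Back-of-the-envelope sizing sketch for an interleaved multi-cell rectifier
# with n-1 fault tolerance. N_CELLS is a hypothetical example value.

P_OUT = 10e3          # W, total output power (from the abstract)
EFFICIENCY = 0.91     # overall efficiency (from the abstract)
N_CELLS = 5           # hypothetical number of parallel cells

# n-1 fault tolerance: after one cell fails, the remaining cells must still
# deliver full power, so each cell is rated for P_OUT / (N_CELLS - 1).
cell_rating = P_OUT / (N_CELLS - 1)

# In normal operation the load is shared by all cells, so every cell runs
# below its rating, which also spreads conduction losses across the cells.
cell_load_normal = P_OUT / N_CELLS
total_losses = P_OUT / EFFICIENCY - P_OUT

print(f"per-cell rating for n-1 tolerance: {cell_rating/1e3:.2f} kW")
print(f"per-cell load in normal operation: {cell_load_normal/1e3:.2f} kW")
print(f"input power and losses at 91% efficiency: "
      f"{P_OUT/EFFICIENCY/1e3:.2f} kW in, {total_losses/1e3:.2f} kW dissipated")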
Resumo:
In this paper, an architecture based on a scalable and flexible set of Evolvable Processing arrays is presented. FPGA-native Dynamic Partial Reconfiguration (DPR) is used for evolution, which is done intrinsically, letting the system adapt autonomously to variable run-time conditions, including the presence of transient and permanent faults. The architecture supports different modes of operation, namely independent, parallel, cascaded and bypass modes. These modes can be used during evolution or during normal operation. The evolvability of the architecture is combined with fault-tolerance techniques to enhance the platform with self-healing features, making it suitable for applications which require both high adaptability and reliability. Experimental results show that such a system may benefit from accelerated evolution times, increased performance and improved dependability, mainly by increasing fault tolerance for transient and permanent faults, as well as providing some fault identification possibilities. The evolvable HW array presented is tailored to window-based image processing applications.
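As a rough illustration of the intrinsic evolution loop such a platform relies on, the following Python sketch runs a simple (1 + lambda) evolutionary strategy over an array of processing elements drawn from a presynthesized library. The fitness function is a software placeholder for evaluating a candidate directly on the reconfigurable array through DPR, and the array size, library size and mutation parameters are assumptions for illustration, not the configuration used in the paper.

# Illustrative (1 + lambda) evolutionary loop of the kind typically used for
# intrinsic evolvable hardware. All parameters are assumed example values.

import random

ARRAY_SIZE = 16           # number of processing elements in the array (assumed)
LIBRARY_SIZE = 8          # presynthesized processing-element variants (assumed)
LAMBDA = 4                # offspring evaluated per generation
GENERATIONS = 200

def fitness(candidate):
    """Placeholder: in an intrinsic setup this would reconfigure the array with
    'candidate' (one library index per position) and measure output quality."""
    target = [i % LIBRARY_SIZE for i in range(ARRAY_SIZE)]     # toy objective
    return sum(1 for a, b in zip(candidate, target) if a == b)

def mutate(candidate, rate=0.1):
    return [random.randrange(LIBRARY_SIZE) if random.random() < rate else gene
            for gene in candidate]

def evolve():
    parent = [random.randrange(LIBRARY_SIZE) for _ in range(ARRAY_SIZE)]
    best = fitness(parent)
    for _ in range(GENERATIONS):
        offspring = [mutate(parent) for _ in range(LAMBDA)]
        candidate = max(offspring, key=fitness)
        if fitness(candidate) >= best:    # ">=" allows drifting over neutral plateaus
            parent, best = candidate, fitness(candidate)
    return parent, best

if __name__ == "__main__":
    random.seed(1)
    individual, score = evolve()
    print(f"best fitness {score}/{ARRAY_SIZE}")
    # Re-running evolve() after a fault changes the measured fitness landscape,
    # which is how evolvability can provide the self-healing behaviour described above.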