981 results for Embedded systems
Abstract:
Embedded systems are becoming ever more common and complex, so finding safe, effective and inexpensive software development processes aimed specifically at this class of systems is more necessary than ever. Unlike the situation until recently, current advances in microprocessor technology allow the development of equipment with more than enough performance to run several software systems on a single machine. In addition, there are embedded systems with safety requirements on whose correct operation many lives and/or large economic investments depend. These software systems are designed and implemented according to very strict and demanding software development standards. In some cases, certification of the software may also be required. For these cases, mixed-criticality systems can be a very valuable alternative. In this class of systems, applications with different criticality levels run on the same computer. However, it is often necessary to certify the whole system to the criticality level of the most critical application, which makes costs soar. Virtualization has emerged as a very interesting technology for containing those costs. This technology allows a set of virtual machines, or partitions, to run applications with very high levels of both temporal and spatial isolation. This, in turn, allows each partition to be certified independently. The development of partitioned mixed-criticality systems requires updating traditional software development models, since these cover neither the new activities nor the new roles required in the development of such systems. For example, the system integrator must define the partitions, and the application developer must take into account the characteristics of the partition where the application will run. Traditionally, the V-model has been especially relevant in embedded systems development. For this reason, it has been adapted to cover scenarios such as the parallel development of applications or the addition of a new partition to an existing system. The goal of this PhD thesis is to improve the current technology for developing partitioned mixed-criticality systems. To that end, a framework aimed specifically at facilitating and improving the development processes of this class of systems has been designed and implemented. In particular, an algorithm that automatically generates the system partitioning has been created. The proposed development framework integrates all the activities required to develop a partitioned system, including the new roles and activities mentioned above. Moreover, the design of the framework is based on Model-Driven Engineering, which promotes the use of models as fundamental elements of the development process. Thus, the necessary tools are provided to model and partition the system, as well as to validate the results and generate the artifacts needed to compile, build and deploy it. Furthermore, extensibility and integration with validation tools have been key factors in the design of the framework.
In particular, the framework can be extended with new non-functional requirements, with the generation of new artifacts such as documentation, or with different programming languages, among others. A key part of the framework is the partitioning algorithm. This algorithm has been designed to be independent of the applications' requirements and to allow the system integrator to implement new system requirements. To achieve this independence, partitioning constraints have been defined. The algorithm guarantees that these constraints will be satisfied in the partitioned system resulting from its execution. The partitioning constraints have been designed with enough expressive power so that, with a small set of them, most of the common non-functional requirements can be expressed. The constraints can be defined manually by the system integrator, or they can be generated automatically by a tool from the functional and non-functional requirements of an application. The partitioning algorithm takes the system models and the partitioning constraints as inputs. As a result of its execution, it generates a deployment model that defines the partitions required to partition the system. In turn, each partition defines which applications must run in it, as well as the resources the partition needs to run correctly. The partitioning problem and the partitioning constraints are modelled mathematically using coloured graphs, in which a proper vertex colouring represents a correct system partitioning. The algorithm has also been designed so that, if necessary, alternative partitionings to the one initially proposed can be obtained. The framework, including the partitioning algorithm, has been successfully tested on two industrial use cases: the UPMSat-2 satellite and a demonstrator of the control system of a wind turbine. Moreover, the algorithm has been validated by running numerous synthetic scenarios, including some very complex ones with more than 500 applications. ABSTRACT The importance of embedded software is growing, as it is required for a large number of systems. Devising cheap, efficient and reliable development processes for embedded systems is thus a notable challenge nowadays. Computer processing power is continuously increasing, and as a result it is currently possible to integrate complex systems in a single processor, which was not feasible a few years ago. Embedded systems may have safety-critical requirements; their failure may result in loss of life or substantial economic loss. The development of these systems requires stringent development processes that are usually defined by suitable standards. In some cases their certification is also necessary. This scenario fosters the use of mixed-criticality systems, in which applications of different criticality levels must coexist in a single system. In these cases, it is usually necessary to certify the whole system, including non-critical applications, which is costly. Virtualization emerges as an enabling technology for dealing with this problem. The system is structured as a set of partitions, or virtual machines, that can be executed with temporal and spatial isolation. In this way, applications can be developed and certified independently.
The development of MCPS (Mixed-Criticality Partitioned Systems) requires additional roles and activities that traditional systems do not require. The system integrator has to define system partitions. Application development has to consider the characteristics of the partition to which each application is allocated. In addition, traditional software process models have to be adapted to this scenario. The V-model is commonly used in embedded systems development. It can be adapted to the development of MCPS by enabling the parallel development of applications or the addition of a new partition to an existing system. The objective of this PhD is to improve the available technology for MCPS development by providing a framework tailored to the development of this type of system and by defining a flexible and efficient algorithm for automatically generating system partitionings. The goal of the framework is to integrate all the activities required for developing MCPS and to support the different roles involved in this process. The framework is based on MDE (Model-Driven Engineering), which emphasizes the use of models in the development process. The framework provides basic means for modeling the system, generating system partitions, validating the system and generating final artifacts. The framework has been designed to facilitate its extension and the integration of external validation tools. In particular, it can be extended by adding support for additional non-functional requirements and for new final artifacts, such as new programming languages or additional documentation. The framework includes a novel partitioning algorithm. It has been designed to be independent of the types of application requirements and also to enable the system integrator to tailor the partitioning to the specific requirements of a system. This independence is achieved by defining partitioning constraints that must be met by the resulting partitioning. They have sufficient expressive capacity to state the most common constraints and can be defined manually by the system integrator or generated automatically based on the functional and non-functional requirements of the applications. The partitioning algorithm uses system models and partitioning constraints as its inputs. It generates a deployment model that is composed of a set of partitions. Each partition is in turn composed of a set of allocated applications and assigned resources. The partitioning problem, including applications and constraints, is modeled as a colored graph. A valid partitioning is a proper vertex coloring. A specially designed algorithm generates this coloring and is able to provide alternative partitionings if required. The framework, including the partitioning algorithm, has been successfully used in the development of two industrial use cases: the UPMSat-2 satellite and the control system of a wind-power turbine. The partitioning algorithm has been successfully validated by using a large number of synthetic loads, including complex scenarios with more than 500 applications.
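The colored-graph formulation above can be illustrated with a minimal sketch (hypothetical names, greatly simplified with respect to the thesis algorithm): applications are vertices, a partitioning constraint that forbids two applications from sharing a partition is an edge, and a greedy proper coloring yields one admissible partitioning.

```python
# Minimal sketch of partitioning as graph coloring (assumed, simplified model):
# vertices = applications, edges = "must be separated" constraints,
# proper vertex coloring = one valid assignment of applications to partitions.

def partition(apps, separation_constraints):
    """Greedy proper coloring; colors play the role of partitions."""
    neighbours = {a: set() for a in apps}
    for a, b in separation_constraints:          # constraint: a and b may not share a partition
        neighbours[a].add(b)
        neighbours[b].add(a)

    color = {}                                   # app -> partition index
    for app in sorted(apps, key=lambda a: -len(neighbours[a])):  # most constrained first
        used = {color[n] for n in neighbours[app] if n in color}
        color[app] = next(c for c in range(len(apps)) if c not in used)

    partitions = {}
    for app, c in color.items():
        partitions.setdefault(c, []).append(app)
    return list(partitions.values())

# Example: a critical application must not share a partition with two non-critical ones.
apps = ["attitude_ctrl", "telemetry", "payload"]
constraints = [("attitude_ctrl", "telemetry"), ("attitude_ctrl", "payload")]
print(partition(apps, constraints))   # e.g. [['attitude_ctrl'], ['telemetry', 'payload']]
```

A real partitioner would also attach resource budgets to each resulting partition and backtrack to enumerate the alternative colorings mentioned in the abstract.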
Abstract:
Embedded systems have traditionally been conceived as specific-purpose processing systems that perform a fixed task throughout their entire lifetime. To meet strict cost, size and weight requirements, the design team must optimise their operation for very specific conditions. However, the demand for greater versatility, more intelligent behaviour and, ultimately, more processing capability began to clash with these limitations, aggravated by the uncertainty associated with the increasingly dynamic operating environments where they were progressively being deployed. This resulted in a growing need for systems able to respond on their own to events unexpected at design time, such as: changes in the characteristics of the input data and in the system environment in general; changes in the computing platform itself, for example due to faults or fabrication defects; and changes in the functional specifications themselves, caused by dynamic and changing system objectives. As a consequence, system complexity increases, but in return an autonomous adaptation capability, without human intervention over the whole lifetime, is progressively enabled, allowing systems to take their own decisions at run time. These systems are known, in general, as self-adaptive systems and feature, among other properties, self-configuration, self-optimisation and self-repair. Typically, the soft part of a system is mostly the only part used to provide some adaptation capabilities. However, the performance/power ratio of software devices such as microprocessors is often not adequate for embedded systems. In this scenario, the resulting increase in application complexity is being partially addressed by increasing device complexity in the form of multi-/many-cores; unfortunately, this also increases power consumption. Moreover, design methodologies have not improved enough to exploit all the available computing capacity provided by the cores. For all these reasons, the computing demands imposed by new applications are not being adequately satisfied. The traditional solution to improve the performance/power ratio has been the switch to hardware specifications, mainly using ASICs. However, ASIC costs are prohibitive except in some mass-production cases, and the static nature of their structure complicates meeting the adaptation needs. Advances in fabrication technologies have turned the FPGA, once slow and small and used as glue logic in larger systems, into a powerful reconfigurable computing device with an enormous amount of computational logic resources and embedded hard cores for signal processing and general-purpose computing. Its reconfiguration capabilities have made it possible to combine the flexibility of software with the performance of hardware processing, which has the potential to cause a paradigm shift in computer architecture, since hardware can no longer be considered static. The reason is that, as in the case of SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible.
This means that a subset of the computational resources can be modified (reconfigured) at run time while the rest remain active. Moreover, this reconfiguration process can be executed internally by the device itself. The technological progress in reconfigurable hardware devices is covered by the field known as Reconfigurable Computing (RC). One of the most exotic and least conventional application fields enabled by reconfigurable computing is Evolvable Hardware (EHW), in which this thesis is framed. The main idea behind the concept is to turn hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural biological species, which guides the direction of change. It is yet another application of Evolutionary Computation (EC), which comprises a family of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers. By analogy with the biological process of evolution, in evolvable hardware the subject of evolution is a population of circuits that tries to adapt to its environment by progressively improving its fitness generation after generation. Individuals are circuit configurations in the form of bitstreams characterised by reconfigurable circuit descriptions. By selecting those that behave best, i.e. those with a better fitness after being evaluated, and using them as parents of the next generation, the evolutionary algorithm creates a new offspring population using genetic operators such as mutation and recombination. As generations succeed one another, the population as a whole is expected to approach the optimal solution to the problem of finding a suitable circuit configuration that satisfies the specifications. The state of reconfiguration technology after the Xilinx XC6200 FPGA family was discontinued and replaced by the Virtex families at the end of the 1990s was a major obstacle to progress in evolvable hardware: closed (not publicly known) bitstream formats, dependence on vendor tools with limited DPR support, slow reconfiguration speed, and the fact that random bitstream modifications could be hazardous to device integrity are some of the reasons. However, a proposal in the early 2000s, the Virtual Reconfigurable Circuit (VRC), allowed research in the field to continue while DPR technology kept maturing. In essence, a VRC on an FPGA is a virtual layer that acts as an application-specific reconfigurable circuit on top of the native FPGA fabric, reducing the complexity of the reconfiguration process and increasing its speed (compared with native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions that define ad hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each of which implements all the required functions, selectable through multiplexers just as in a microprocessor ALU.
A large register acts as configuration memory, so VRC reconfiguration is very fast, since it only involves writing this register, which drives the selection signals of the set of multiplexers. However, this virtual layer causes an area overhead, due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, reducing the maximum operating frequency. The nature of evolvable hardware, capable of optimising its own computational behaviour, makes it a good candidate for advancing research on self-adaptive systems. Combining a self-reconfigurable computing substrate, able to be modified dynamically at run time, with an embedded algorithm that provides a direction of change can help satisfy the autonomous adaptation requirements of FPGA-based embedded systems. The main proposal of this thesis is therefore aimed at contributing to the self-adaptation of the processing hardware of FPGA-based embedded systems by means of evolvable hardware. This has been addressed by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters. From this distinction, two lines of work are derived: on the one hand, parametric self-adaptation and, on the other, structural self-adaptation. The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for the online parametric adaptation of signal processing circuits. The application selected as proof of concept is the optimisation, for very specific types of images, of the filter coefficients of Discrete Wavelet Transforms (DWT), oriented towards image compression. The required goal of evolution is therefore adaptive compression that is more efficient than standard procedures. The main challenge lies in reducing the need for the supercomputing resources used for the optimisation process in previous works, so that it becomes suitable for execution on embedded systems. Regarding structural self-adaptation, the goal of the thesis is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of their native reconfiguration capabilities. In this case, the proof of concept is the evolution of image processing tasks such as the filtering of unknown and changing types of noise and edge detection. In general, the goal is the run-time evolution of image processing tasks unknown at design time (within a certain degree of complexity). In this case, the aim of the proposal is the incorporation of DPR into EHW to evolve the architecture of a systolic array adaptable through reconfiguration, whose evolvability had not been studied before. To achieve the two stated goals, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE).
In the case of parametric adaptation, the proposed platform is characterised by: a CE featuring a DWT hardware processing core adaptable through reconfigurable registers that hold the wavelet filter coefficients; an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems with limited processing resources; and a new simplified mutation operator for the evolutionary algorithm which, together with a fast evaluation mechanism for candidate wavelet filters derived from the current literature, ensures the feasibility of the evolutionary search involved in wavelet adaptation. In the case of structural adaptation, the proposed platform takes the form of: a CE based on a reconfigurable two-dimensional systolic array template composed of reconfigurable processing nodes; an evolutionary algorithm as AE that searches for candidate array configurations using a set of processing functionalities for the nodes, available in a library accessible at run time; a hardware RE that exploits the native reconfiguration capability of FPGAs, making efficient use of the reconfigurable resources of the device to change the behaviour of the CE at run time; and a library of reconfigurable processing elements characterised by position-independent partial bitstreams, used as the set of configurations available to the processing nodes of the array. The main contributions of this thesis can be summarised as follows: an FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems, composed of a Computing Engine (CE), an evolutionary Adaptation Engine (AE) and a Reconfiguration Engine (RE); this platform has been developed and particularised for the parametric and structural self-adaptation cases. Regarding parametric self-adaptation, the main contributions are: a computing engine adaptable through registers that allows the parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core; an adaptation engine based on an evolutionary algorithm developed specifically for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems; a run-time self-adaptive DWT IP core for embedded systems that allows the online optimisation of the transform's performance for image compression in specific deployment environments characterised by different types of input signal; and a software model and a hardware implementation of a tool for the automatic evolutionary construction of custom wavelet transforms. Finally, regarding structural self-adaptation, the main contributions are: a computing engine adaptable through native FPGA reconfiguration, characterised by a two-dimensional systolic array template of reconfigurable processing nodes, onto which different computing tasks can be mapped using a library of simple reconfigurable processing elements; and the definition of a library of processing elements suited for the autonomous run-time synthesis of different image processing tasks.
Further contributions are the efficient incorporation of dynamic partial reconfiguration (DPR) into evolvable hardware systems, overcoming the main drawbacks of previous approaches such as virtual reconfigurable circuits (VRCs), with the implementation details of both approaches also originally compared in this work; a fault-tolerant, self-healing platform that allows online functional recovery in hazardous environments, characterised from a fault-tolerance perspective (fault models at CLB and processing-element level are proposed and, using the reconfiguration engine, a systematic fault analysis is carried out for one fault in every processing element and for two accumulated faults); and a platform with dynamic filtering quality that allows online adaptation to different types of noise and different computational behaviours, taking into account the available processing resources. On the one hand, filters with non-destructive behaviour are evolved, enabling scalable cascaded filtering schemes; on the other, scalable filters are also evolved taking into account dynamically changing computational filtering requirements. This document is organised in four parts and nine chapters. The first part contains Chapter 1, an introduction to and motivation of this thesis work. Next, the frame of reference of this thesis is analysed in the second part: Chapter 2 introduces the concepts of self-adaptation and autonomic computing as a research field more general than the very specific one of this work; Chapter 3 introduces evolutionary computation as the technique to drive adaptation; Chapter 4 analyses reconfigurable computing platforms as the technology to host self-adaptive hardware; and finally Chapter 5 defines, classifies and surveys the field of evolvable hardware. The third part of this work then contains the proposal, development and results obtained: while Chapter 6 contains a statement of the thesis goals and the description of the proposal as a whole, Chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, Chapter 9 in Part 4 concludes the work and describes future research paths. ABSTRACT Embedded systems have traditionally been conceived as specific-purpose computers with one fixed computational task for their whole lifetime. Stringent requirements in terms of cost, size and weight forced designers to highly optimise their operation for very specific conditions. However, demands for versatility, more intelligent behaviour and, in summary, increased computing capability began to clash with these limitations, intensified by the uncertainty associated with the more dynamic operating environments where they were progressively being deployed. This resulted in an increasing need for systems to respond on their own to events unexpected at design time, such as: changes in input data characteristics and in the system environment in general; changes in the computing platform itself, e.g. due to faults and fabrication defects; and changes in functional specifications caused by dynamically changing system objectives.
As a consequence, system complexity is increasing, but in turn, autonomous lifetime adaptation without human intervention is progressively being enabled, allowing systems to take their own decisions at run time. This type of system is known, in general, as self-adaptive, and is capable, among other things, of self-configuration, self-optimisation and self-repair. Traditionally, the soft part of a system has so far mostly been the only place to provide systems with some degree of adaptation capability. However, the performance-to-power ratios of software-driven devices like microprocessors are not adequate for embedded systems in many situations. In this scenario, the resulting rise in application complexity is being partly addressed by raising device complexity in the form of multi- and many-core devices; but sadly, this keeps increasing power consumption. Besides, design methodologies have not improved accordingly, so the computational power available from all these cores cannot be fully leveraged. Altogether, these factors mean that the computing demands posed by new applications are not being wholly satisfied. The traditional solution to improve performance-to-power ratios has been the switch to hardware-driven specifications, mainly using ASICs. However, their costs are highly prohibitive except for some mass-production cases and, besides, the static nature of their structure complicates meeting the adaptation needs. Advancements in fabrication technologies have turned the once slow, small FPGA, used as glue logic in bigger systems, into a very powerful reconfigurable computing device with a vast amount of computational logic resources and embedded, hardened signal-processing and general-purpose cores. Its reconfiguration capabilities have enabled software-like flexibility to be combined with hardware-like computing performance, which has the potential to cause a paradigm shift in computer architecture, since hardware can no longer be considered static. This is so because, as is the case with SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible. This means that subsets of the FPGA computational resources can be changed (reconfigured) at run time while the rest remain active. Besides, this reconfiguration process can be triggered internally by the device itself. This technological boost in reconfigurable hardware devices is covered by the field known as Reconfigurable Computing. One of the most exotic fields of application that Reconfigurable Computing has enabled is the one known as Evolvable Hardware (EHW), in which this dissertation is framed. The main idea behind the concept is turning hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural, biological species, which guides the direction of change. It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers. In analogy with the biological process of evolution, in EHW the subject of evolution is a population of circuits that tries to adapt to its surrounding environment by progressively becoming better fitted to it generation after generation. Individuals are circuit configurations in the form of bitstreams characterised by reconfigurable circuit descriptions.
By selecting those that behave better, i.e. those with a higher fitness value after being evaluated, and using them as parents of the following generation, the EA creates a new offspring population by applying so-called genetic operators like mutation and recombination. As generations succeed one another, the whole population is expected to approach the optimum solution to the problem of finding an adequate circuit configuration that fulfils the system objectives. The state of reconfiguration technology after the Xilinx XC6200 FPGA family was discontinued and replaced by the Virtex families in the late 90s was a major obstacle to advancements in EHW: closed (not publicly known) bitstream formats; dependence on manufacturer tools with very limited support for DPR; slow reconfiguration speed; and random bitstream modifications being potentially hazardous to device integrity are some of these reasons. However, a proposal in the early 2000s, the Virtual Reconfigurable Circuit (VRC), made it possible to keep investigating in this field while DPR technology kept maturing. In essence, a VRC in an FPGA is a virtual layer acting as an application-specific reconfigurable circuit on top of the FPGA fabric that reduces the complexity of the reconfiguration process and increases its speed (compared to native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions that define ad hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each one containing all the required functions, which are selectable through functionality multiplexers as in microprocessor ALUs. A large register acts as configuration memory, so VRC reconfiguration is very fast, given that it only involves writing this register, which drives the selection signals of the set of multiplexers. However, this virtual layer introduces large overheads: an area overhead due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, which also reduces the maximum operating frequency. The very nature of Evolvable Hardware, able to optimise its own computational behaviour, makes it a good candidate to advance research in self-adaptive systems. Combining a self-reconfigurable computing substrate, able to be dynamically changed at run time, with an embedded algorithm that provides a direction for change can help fulfil the requirements for autonomous lifetime adaptation of FPGA-based embedded systems. The main proposal of this thesis is hence directed at contributing to the autonomous self-adaptation of the underlying computational hardware of FPGA-based embedded systems by means of Evolvable Hardware. This is tackled by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters. Two main lines of work derive from this distinction: on one side, parametric self-adaptation and, on the other, structural self-adaptation. The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for the online parameter adaptation of signal processing circuits. The application selected as proof of concept is the optimisation of Discrete Wavelet Transform (DWT) filter coefficients for very specific types of images, oriented to image compression.
Hence, adaptive and improved compression efficiency, as compared to standard techniques, is the required goal of evolution. The main challenge lies in reducing the supercomputing resources reported in previous works for the optimisation process, in order to make it suitable for embedded systems. Regarding structural self-adaptation, the thesis goal is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of native reconfiguration capabilities. In this case, the evolution of image processing tasks such as the filtering of unknown and changing types of noise, and edge detection, are the selected proofs of concept. In general, the required goal is the run-time evolution of image processing behaviours that are unknown at design time (within a certain complexity range). In this case, the mission of the proposal is the incorporation of DPR in EHW to evolve a systolic array architecture adaptable through reconfiguration, whose evolvability had not been previously checked. In order to achieve the two stated goals, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE). In the case of parametric adaptation, the proposed platform is characterised by: a CE featuring a DWT hardware processing core adaptable through reconfigurable registers that hold the wavelet filter coefficients; an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems with scarce computing resources; and a new, simplified mutation operator for the selected EA that, together with a fast evaluation mechanism for candidate wavelet filters derived from the existing literature, assures the feasibility of the evolutionary search involved in wavelet adaptation. In the case of structural adaptation, the platform proposal takes the form of: a CE based on a reconfigurable 2D systolic array template composed of reconfigurable processing nodes; an evolutionary algorithm as AE that searches for candidate configurations of the array using a set of computational functionalities for the nodes available in a run-time-accessible library; a hardware RE that exploits the native DPR capabilities of FPGAs and makes efficient use of the available reconfigurable resources of the device to change the behaviour of the CE at run time; and a library of reconfigurable processing elements featuring position-independent partial bitstreams, used as the set of available configurations for the processing nodes of the array. The main contributions of this thesis can be summarised in the following list. An FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems composed of a Computing Engine, an evolutionary Adaptation Engine and a Reconfiguration Engine. This platform is further developed and tailored for both parametric and structural self-adaptation. Regarding parametric self-adaptation, the main contributions are: A CE adaptable through reconfigurable registers that enables parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core. An AE based on an Evolutionary Algorithm specifically developed for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems.
A run-time self-adaptive DWT IP core for embedded systems that allows online optimisation of transform performance for image compression in specific deployment environments characterised by different types of input signals. A software model and hardware implementation of a tool for the automatic, evolutionary construction of custom wavelet transforms. Lastly, regarding structural self-adaptation, the main contributions are: A CE adaptable through native FPGA fabric reconfiguration, featuring a two-dimensional systolic array template of reconfigurable processing nodes. Different processing behaviours can be automatically mapped onto the array by using a library of simple reconfigurable processing elements. Definition of a library of such processing elements suited for autonomous run-time synthesis of different image processing tasks. Efficient incorporation of DPR in EHW systems, overcoming the main drawbacks of the previous approach based on virtual reconfigurable circuits. Implementation details for both approaches are also originally compared in this work. A fault-tolerant, self-healing platform that enables online functional recovery in hazardous environments. The platform has been characterised from a fault-tolerance perspective: fault models at FPGA CLB level and at processing-element level are proposed and, using the RE, a systematic fault analysis is carried out for one fault in every processing element and for two accumulated faults. A dynamic filtering-quality platform that permits online adaptation to different types of noise and different computing behaviours, considering the available computing resources. On one side, non-destructive filters are evolved, enabling scalable cascaded filtering schemes; and on the other, size-scalable filters are also evolved, considering dynamically changing computational filtering requirements. This dissertation is organised in four parts and nine chapters. The first part contains chapter 1, the introduction to and motivation of this PhD work. Next, the reference framework in which this dissertation is framed is analysed in the second part: chapter 2 features an introduction to the notions of self-adaptation and autonomic computing as a research field more general than the very specific one of this work; chapter 3 introduces evolutionary computation as the technique to drive adaptation; chapter 4 analyses platforms for reconfigurable computing as the technology to host self-adaptive hardware; and finally chapter 5 defines, classifies and surveys the field of Evolvable Hardware. The third part of the work follows, containing the proposal, development and results obtained: while chapter 6 contains a statement of the thesis goals and the description of the proposal as a whole, chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9 in part 4 concludes the work and describes future research paths.
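As a rough illustration of the parametric self-adaptation loop described above, the following self-contained sketch uses hypothetical names and a toy fitness function; the actual AE, mutation operator, evaluation mechanism and register interface in the thesis are considerably more elaborate and operate on real image data.

```python
import random

# Minimal, self-contained sketch of the parametric self-adaptation loop
# (hypothetical names; the toy fitness below stands in for the fast evaluation
# of candidate wavelet filters on the target image type).

TARGET = [0.48, 0.84, 0.22, -0.13]          # toy "ideal" filter used only as a fitness reference

def evaluate(coeffs):
    """Toy stand-in for the fast candidate-filter evaluation: higher is better."""
    return -sum((c - t) ** 2 for c, t in zip(coeffs, TARGET))

def mutate(coeffs, step=0.05):
    """Simplified mutation: perturb one randomly chosen coefficient."""
    child = list(coeffs)
    i = random.randrange(len(child))
    child[i] += random.uniform(-step, step)
    return child

def write_dwt_registers(coeffs):
    """Hypothetical interface to the reconfigurable registers of the DWT core."""
    print("reconfiguring DWT core with", [round(c, 3) for c in coeffs])

def adapt(seed, generations=300, offspring=8):
    best, best_fit = seed, evaluate(seed)
    for _ in range(generations):             # (1 + lambda)-style evolutionary loop
        for _ in range(offspring):
            cand = mutate(best)
            fit = evaluate(cand)
            if fit >= best_fit:
                best, best_fit = cand, fit
    write_dwt_registers(best)
    return best

adapt([0.5, 0.5, 0.5, 0.5])
```

The structural case follows the same loop, except that the AE proposes array configurations and the RE applies them to the fabric through partial bitstreams instead of register writes.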
Abstract:
Hardware/software partitioning (HSP) is a key task in embedded system co-design. The main goal of this task is to decide which components of an application are to be executed on a general-purpose processor (software) and which on specific hardware, taking into account a set of restrictions expressed as metrics. In recent years, several approaches driven by metaheuristic algorithms have been proposed for solving the HSP problem. However, due to the diversity of models and metrics used, the choice of the best-suited algorithm is still an open problem. This article presents the results of applying a fuzzy approach to the HSP problem. This approach is more flexible than many others because it makes it possible to accept reasonably good solutions and to reject those that do not seem good. In this work we compare six metaheuristic algorithms: Random Search, Tabu Search, Simulated Annealing, Hill Climbing, Genetic Algorithm and Evolutionary Strategy. The presented model aims to simultaneously minimize the hardware area and the execution time. The results obtained show that Restart Hill Climbing is the best-performing algorithm in most cases.
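A minimal sketch of restart hill climbing applied to HW/SW partitioning is shown below, assuming a simple weighted-sum cost over per-component area and time estimates; the paper's actual model, metrics and fuzzy acceptance criterion are richer than this.

```python
import random

# Sketch of hill climbing with restarts for HW/SW partitioning (illustrative only).
# A solution is a bit vector: 1 = component mapped to hardware, 0 = software.

def cost(solution, hw_area, sw_time, hw_time, w_area=0.5, w_time=0.5):
    """Weighted sum of hardware area and execution time for a given mapping."""
    area = sum(a for a, bit in zip(hw_area, solution) if bit)
    time = sum(h if bit else s for s, h, bit in zip(sw_time, hw_time, solution))
    return w_area * area + w_time * time

def restart_hill_climbing(n, hw_area, sw_time, hw_time, restarts=20, steps=500):
    best, best_cost = None, float("inf")
    for _ in range(restarts):
        sol = [random.randint(0, 1) for _ in range(n)]      # random restart point
        for _ in range(steps):
            i = random.randrange(n)
            neigh = sol.copy()
            neigh[i] ^= 1                                    # flip one mapping decision
            if cost(neigh, hw_area, sw_time, hw_time) < cost(sol, hw_area, sw_time, hw_time):
                sol = neigh
        c = cost(sol, hw_area, sw_time, hw_time)
        if c < best_cost:
            best, best_cost = sol, c
    return best, best_cost

# Example with made-up per-component estimates:
print(restart_hill_climbing(3, hw_area=[5.0, 3.0, 8.0],
                            sw_time=[9.0, 4.0, 12.0], hw_time=[2.0, 1.5, 3.0]))
```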
Abstract:
Commercial off-the-shelf microprocessors are the core of low-cost embedded systems due to their programmability and cost-effectiveness. Recent advances in electronic technologies have allowed remarkable improvements in their performance. However, they have also made microprocessors more susceptible to transient faults induced by radiation. These non-destructive events (soft errors) may cause a microprocessor to produce a wrong computation result or to lose control of a system, with catastrophic consequences. Therefore, soft error mitigation has become a compulsory requirement for an increasing number of applications, which operate from space down to ground level. In this context, this paper uses the concept of selective hardening, which aims at designing reduced-overhead and flexible mitigation techniques. Following this concept, a novel flexible version of the software-based fault recovery technique known as SWIFT-R is proposed. Our approach makes it possible to select different register subsets from the microprocessor register file to be protected in software. Thus, the design space is enriched with a wide spectrum of new partially protected versions, which offer more flexibility to designers. This makes it possible to find the best trade-offs between performance, code size, and fault coverage. Three case studies have been developed to show the applicability and flexibility of the proposal.
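The selective-protection idea can be sketched conceptually as follows (illustrative only: SWIFT-R itself is applied by a compiler at the instruction level to machine registers, not to high-level objects). A value in the protected subset is stored in three copies and majority-voted on every read, while unprotected values stay unhardened, trading fault coverage for lower overhead.

```python
# Conceptual sketch of register protection by triplication and majority voting.

class ProtectedValue:
    def __init__(self, value):
        self.copies = [value, value, value]      # triplicated storage

    def write(self, value):
        self.copies = [value, value, value]

    def read(self):
        a, b, c = self.copies
        # Majority vote: recovers the original value after a single corrupted copy.
        return a if a == b or a == c else b

# Only values selected by the designer are protected; the rest remain plain variables.
acc = ProtectedValue(0)
acc.copies[1] = 999                              # simulate a radiation-induced upset in one copy
print(acc.read())                                # still reads 0
```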
Abstract:
The design of fault-tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on the selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process of assessing their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform an automatic design space exploration in search of the best trade-offs between reliability, cost, and performance. The first tool is driven by a genetic algorithm which can simultaneously fulfill many design goals thanks to the use of the NSGA-II multi-objective algorithm. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
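The multi-objective selection at the heart of such an exploration can be sketched as follows; this shows only the Pareto-dominance step with made-up objective values, whereas NSGA-II additionally uses non-dominated sorting ranks and crowding distance to drive the genetic algorithm.

```python
# Minimal sketch of Pareto selection over candidate hardened versions (illustrative).
# Each candidate is scored on objectives to be minimised:
# (execution-time overhead, code-size overhead, 1 - fault coverage).

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """candidates: dict name -> objective tuple; returns the non-dominated set."""
    return {n: obj for n, obj in candidates.items()
            if not any(dominates(o, obj) for m, o in candidates.items() if m != n)}

versions = {                                     # hypothetical hardened variants
    "no_protection":    (0.00, 0.00, 0.60),
    "protect_r1_r2":    (0.18, 0.22, 0.25),
    "protect_all_regs": (0.45, 0.60, 0.05),
    "protect_r1_only":  (0.20, 0.30, 0.40),      # dominated by protect_r1_r2
}
print(sorted(pareto_front(versions)))            # ['no_protection', 'protect_all_regs', 'protect_r1_r2']
```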
Abstract:
The development of applications and services for mobile systems faces a varied range of devices with very heterogeneous capabilities whose response times are difficult to predict. The research described in this work aims to address this issue by developing a computational model that formalizes the problem and defines adaptive computing methods. The proposal combines imprecise computation strategies with cloud computing paradigms in order to provide flexible implementation frameworks for embedded or mobile devices. As a result, scheduling the imprecise computation workload of the embedded system becomes the means to move computation to the cloud according to the priority and response time of the tasks to be executed, and thereby to meet the desired productivity and quality of service. A technique to estimate network delays and to schedule tasks more accurately is illustrated in this paper. An application example in which this technique is tested in execution contexts with heterogeneous workloads, in order to check the validity of the proposed model, is also described.
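A minimal sketch of the local-versus-cloud decision suggested above is given below, assuming a simple delay estimate (round-trip latency plus transfer time) and illustrative task parameters; the paper's actual model and estimation technique are more detailed.

```python
# Sketch of an offloading decision driven by estimated network delay and deadlines
# (assumed model; names and parameters are illustrative, not the paper's).

def estimate_network_delay(payload_bytes, rtt_s, bandwidth_bps):
    """Simple delay estimate: round-trip latency plus transfer time."""
    return rtt_s + (8 * payload_bytes) / bandwidth_bps

def choose_placement(task, rtt_s, bandwidth_bps):
    """Offload the optional (imprecise) part to the cloud only if the estimated
    cloud response time still meets the task deadline; otherwise degrade quality."""
    cloud_time = task["cloud_exec_s"] + estimate_network_delay(task["payload_bytes"], rtt_s, bandwidth_bps)
    local_time = task["local_exec_s"]
    if cloud_time <= task["deadline_s"] and cloud_time < local_time:
        return "cloud", cloud_time
    if local_time <= task["deadline_s"]:
        return "local", local_time
    return "mandatory_only", task["mandatory_exec_s"]   # run only the mandatory part, keep the deadline

task = {"local_exec_s": 0.9, "cloud_exec_s": 0.2, "mandatory_exec_s": 0.3,
        "payload_bytes": 200_000, "deadline_s": 0.5}
print(choose_placement(task, rtt_s=0.05, bandwidth_bps=10e6))   # ('cloud', 0.41)
```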
Abstract:
Paper presented at the V Jornadas de Computación Empotrada (5th Workshop on Embedded Computing), Valladolid, 17-19 September 2014.
Abstract:
This paper presents a vision that allows the combined use of model-driven engineering, run-time monitoring, and animation for the development and analysis of components in real-time embedded systems. A key building block in the tool environment supporting this vision is a highly customizable code generation process. Customization is performed via a configuration specification which describes the ways in which input is provided to the component, the ways in which run-time execution information can be observed, and how these observations drive animation tools. The environment is envisioned to be suitable for different activities ranging from quality assurance to supporting certification, teaching, and outreach, and will be built exclusively with open source tools to increase impact. A preliminary prototype implementation is described.
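The configuration-driven generation step might look roughly like the following sketch, with a hypothetical configuration format and a trivial generator; the paper does not prescribe this particular syntax or target language.

```python
# Illustrative sketch of configuration-driven generation of a monitoring wrapper
# (the configuration keys and the generated stub are hypothetical).

config = {
    "component": "speed_controller",
    "inputs": ["setpoint", "measured_speed"],
    "observations": ["control_output", "loop_latency_us"],
}

def generate_monitor_stub(cfg):
    """Emit a trivial monitoring wrapper from the configuration; the generated code
    is only printed here, to show how the configuration drives generation."""
    lines = [f"def monitor_{cfg['component']}({', '.join(cfg['inputs'])}, observe):"]
    lines += [f"    observe('{name}', value=None)  # hook filled in by the real generator"
              for name in cfg["observations"]]
    lines.append("    # ... invoke the component logic here ...")
    return "\n".join(lines)

print(generate_monitor_stub(config))
```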
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or by a restricted power supply. In every embedded system, there are one or more processor cores that run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial for satisfying the power consumption constraints and for developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. Having a fast and accurate method for processor power estimation at design time helps the designer explore a large space of design possibilities and make optimal choices for developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific software application is the key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast and high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code that repeat during execution, and then building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power-predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented to estimate the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than a 100-fold speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during the execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders of magnitude speedup over the simulation-based method.
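The second, average-case approach can be illustrated with a small sketch: assuming insertion sort performs about n(n-1)/4 comparisons on average over random inputs, the energy estimate is that count times a per-comparison energy plus a fixed term. The calibration constants below are made up; the thesis derives the comparison count with MOQA and measures the constants on the LEON3 core.

```python
# Sketch of an average-case energy model for insertion sort (illustrative constants).

def avg_comparisons_insertion_sort(n):
    """Insertion sort performs roughly n*(n-1)/4 comparisons on average over random inputs."""
    return n * (n - 1) / 4

def estimated_energy(n, energy_per_comparison_nj, fixed_energy_nj=0.0):
    """Energy estimate: average comparison count times per-comparison energy, plus a fixed term."""
    return fixed_energy_nj + avg_comparisons_insertion_sort(n) * energy_per_comparison_nj

# Example with hypothetical calibration constants (nJ):
for n in (16, 64, 256):
    print(n, round(estimated_energy(n, energy_per_comparison_nj=2.1, fixed_energy_nj=150.0), 1))
```

The ACSL-based method in the first part works analogously at the instruction level: the estimate is the count of ALU-related instructions in the program times the (data-independent) per-operation energy of the ACSL ALU.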
Abstract:
Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) are becoming essential in many application contexts, e.g. civil, industrial and aerospace, to reduce structure maintenance costs and improve safety. Conventional inspection methods typically exploit bulky and expensive instruments and rely on highly demanding signal processing techniques. The pressing need to overcome these limitations is the common thread that guided the work presented in this thesis. In the first part, a scalable, low-cost, multi-sensor smart sensor network is introduced. The capability of this technology to carry out accurate modal analysis on structures undergoing flexural vibrations has been validated by means of two experimental campaigns. Then, the suitability of low-cost piezoelectric disks for modal analysis has been demonstrated. To enable the use of this kind of sensing technology in such non-conventional applications, ad hoc data merging algorithms have been developed. In the second part, imaging algorithms for Lamb wave inspection (namely DMAS and DS-DMAS) have been implemented and validated. Results show that DMAS outperforms the canonical Delay and Sum (DAS) approach in terms of image resolution and contrast. Similarly, DS-DMAS can achieve better results than both DMAS and DAS by suppressing artefacts and noise. To exploit the full potential of these procedures, accurate group velocity estimations are required. Thus, novel wavefield analysis tools that can address the estimation of dispersion curves from SLDV acquisitions have been investigated. An image segmentation technique (called DRLSE) was exploited in the k-space to extract the wavenumber profile. The DRLSE method was compared with compressive sensing methods to extract the group and phase velocity information. The validation, performed on three different carbon fibre plates, showed that the proposed solutions can accurately determine the wavenumbers and velocities in polar coordinates at multiple excitation frequencies.
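For orientation, a minimal delay-and-sum (DAS) pixel computation is sketched below with simplified one-way delays and hypothetical inputs; DMAS extends this scheme by combining pairwise products of the delayed signals, which is what yields the reported gains in resolution and contrast.

```python
import math

# Minimal sketch of delay-and-sum imaging for a single pixel (illustrative;
# real Lamb-wave imaging uses actuator-to-pixel-to-sensor propagation paths,
# dispersion-aware velocities and envelope processing).

def das_pixel(signals, sensor_xy, pixel_xy, group_velocity, fs):
    """signals: list of per-sensor sampled waveforms; fs: sampling frequency (Hz)."""
    value = 0.0
    for sig, (sx, sy) in zip(signals, sensor_xy):
        dist = math.hypot(pixel_xy[0] - sx, pixel_xy[1] - sy)
        k = int(round(dist / group_velocity * fs))      # propagation delay, in samples
        if k < len(sig):
            value += sig[k]                             # coherently sum the delayed contributions
    return value

# Tiny synthetic example: two sensors, a pixel 0.1 m and 0.2 m away.
fs, v = 1_000_000, 5_000.0                               # 1 MHz sampling, 5000 m/s group velocity
sig_a = [0.0] * 100; sig_a[20] = 1.0                     # echo arriving after 20 samples
sig_b = [0.0] * 100; sig_b[40] = 1.0                     # echo arriving after 40 samples
print(das_pixel([sig_a, sig_b], [(0.0, 0.0), (0.0, 0.1)], (0.1, 0.0), v, fs))
```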
Abstract:
This thesis highlights the importance of ad hoc designed and developed embedded systems in the implementation of intelligent sensor networks. As evidence, four areas of application are presented: Precision Agriculture, Bioengineering, Automotive and Structural Health Monitoring. For each field, the design and development of one or more smart devices is reported, in addition to on-board processing, experimental validation and in-field tests. In particular, the design and development of a fruit meter is presented. In the bioengineering field, three different projects are reported, detailing the architectures implemented and the validation tests conducted. Two prototype realizations of an inner-temperature measurement system for electric motors in an automotive application are then discussed. Lastly, the HW/SW design of a Smart Sensor Network is analyzed: the network features on-board data management and processing, integration in an IoT toolchain, Wireless Sensor Network developments and an AI framework for vibration-based structural assessment.
Abstract:
Embedded systems are increasingly integral to daily life, improving and facilitating the efficiency of modern Cyber-Physical Systems, which provide access to sensor data and actuators. As modern architectures become increasingly complex and heterogeneous, their optimization becomes a challenging task. Additionally, ensuring platform security is important to avoid harm to individuals and assets. This study primarily addresses challenges in contemporary embedded systems, focusing on platform optimization and security enforcement. The initial section of this study delves into the application of machine learning methods to efficiently determine the optimal number of cores for a parallel RISC-V cluster in order to minimize energy consumption, using static source code analysis. Results demonstrate that automated platform configuration is not only viable, but also that there is a moderate performance trade-off when relying solely on static features. The second part focuses on the problem of heterogeneous device mapping, which involves assigning tasks to the most suitable computational device in a heterogeneous platform for optimal runtime. The contribution of this section lies in the introduction of novel pre-processing techniques, along with a training framework based on Siamese Networks, which enhances the classification performance of DeepLLVM, an advanced approach for task mapping. Importantly, the proposed approaches are independent of the specific deep-learning model used. Finally, this research work addresses issues concerning the binary exploitation of software running on modern embedded systems. It proposes an architecture to implement Control-Flow Integrity in embedded platforms with a Root-of-Trust, aiming to enhance security guarantees with limited hardware modifications. The approach involves enhancing the architecture of a modern RISC-V platform for autonomous vehicles by implementing a side-channel communication mechanism that relays control-flow changes executed by the process running on the host core to the Root-of-Trust. This approach has limited impact on performance and is effective in enhancing the security of embedded platforms.
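The control-flow integrity check performed by the Root-of-Trust can be pictured with the conceptual sketch below (a software shadow-stack monitor with hypothetical addresses); the thesis realises this in hardware on a RISC-V platform, with control-flow events relayed over a side channel rather than delivered as Python calls.

```python
# Conceptual sketch of a shadow-stack control-flow integrity monitor (illustrative only).

class CFIMonitor:
    def __init__(self):
        self.shadow_stack = []

    def on_call(self, return_address):
        """Call event relayed from the host core: remember the expected return target."""
        self.shadow_stack.append(return_address)

    def on_return(self, return_address):
        """Return event: the observed target must match the shadow-stack entry."""
        expected = self.shadow_stack.pop() if self.shadow_stack else None
        if return_address != expected:
            raise RuntimeError(f"CFI violation: return to {hex(return_address)}, "
                               f"expected {hex(expected) if expected is not None else 'nothing'}")

mon = CFIMonitor()
mon.on_call(0x8000_1234)
mon.on_return(0x8000_1234)      # legitimate return
mon.on_call(0x8000_5678)
try:
    mon.on_return(0xDEAD_BEEF)  # corrupted return address is detected
except RuntimeError as err:
    print(err)
```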
Abstract:
Database query languages over relations (for example, SQL) make it possible to join two relations. This operation is very common in desktop/server database systems, but query processing systems for networked embedded computer systems, such as TAG, TinyDB and Cougar, currently do not support it. We show how a prioritized medium access control (MAC) protocol can be used to efficiently execute the join operation for networked embedded computer systems where all computer nodes are in a single broadcast domain.
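The idea can be pictured with the following simplified simulation, in which the contention phase of a prioritized MAC is modelled as picking the globally smallest pending join key in each round; the actual protocol, its message pattern and its analysis in the paper differ in the details.

```python
# Simplified simulation of a join driven by a prioritized (dominance-based) MAC
# in a single broadcast domain (illustrative only). In each round every node
# contends with a priority equal to its smallest unsent join key; the MAC lets
# the minimum key "win", and all nodes holding tuples with that key in either
# relation contribute the matching pairs.

def prioritized_mac_join(nodes_r, nodes_s):
    """nodes_r / nodes_s: lists of per-node tuple lists; tuples are (key, payload)."""
    results = []
    pending_r = [sorted(n) for n in nodes_r]
    pending_s = [sorted(n) for n in nodes_s]
    while any(pending_r) or any(pending_s):
        # Contention phase: resolves to the globally smallest remaining key.
        winner_key = min(n[0][0] for n in pending_r + pending_s if n)
        r_match = [t for n in pending_r for t in n if t[0] == winner_key]
        s_match = [t for n in pending_s for t in n if t[0] == winner_key]
        results += [(winner_key, pr, ps) for _, pr in r_match for _, ps in s_match]
        # All tuples with the winning key are consumed in this round.
        pending_r = [[t for t in n if t[0] != winner_key] for n in pending_r]
        pending_s = [[t for t in n if t[0] != winner_key] for n in pending_s]
    return results

R = [[(3, "r1"), (7, "r2")], [(5, "r3")]]          # relation R spread over two nodes
S = [[(5, "s1"), (3, "s2")]]                       # relation S on one node
print(prioritized_mac_join(R, S))                  # [(3, 'r1', 's2'), (5, 'r3', 's1')]
```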