918 results for self-adaptive grid


Relevance:

80.00%

Publisher:

Abstract:

This doctoral thesis focuses mainly on attack techniques and countermeasures related to side-channel attacks (SCA), which have been studied in academic research for some 17 years. Related research has grown remarkably in recent decades, while designs that provide solid and effective protection against such attacks remain an open research topic, in which more reliable initiatives are needed for the protection of personal, enterprise and national data. The earliest documented use of secret coding dates back to around 1700 B.C., when ancient Egyptian hieroglyphs were carved in inscriptions. Information security has always been a key factor in the transmission of diplomatic or military intelligence. With the rapid evolution of modern communication techniques, encryption solutions were first incorporated to guarantee the security, integrity and confidentiality of content transmitted over insecure cables or wireless media. Given the limited computing power before the computer era, simple encryption techniques were more than sufficient to conceal information. However, some algorithmic vulnerabilities could be exploited to recover the encoding rule without much effort. This motivated new research in cryptography, aimed at protecting information systems with sophisticated algorithms. The invention of computers greatly accelerated the implementation of secure cryptography, which offers efficient resistance by relying on greatly strengthened computing capabilities. Likewise, sophisticated cryptanalysis has in turn driven computing technologies. Nowadays, the information world is pervaded by cryptography, which protects every field through diverse encryption solutions. These approaches have been strengthened by the optimized unification of modern mathematical theories and effective hardware practice, making their implementation possible on various platforms (microprocessors, ASICs, FPGAs, etc.). Security needs and requirements in industry are the main driving metrics in electronic design, with the goal of producing high-performance products without sacrificing the security of customers. However, a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that a digital circuit is inherently vulnerable to an unconventional attack, later named the side-channel attack after its source of analysis. Criticism of theoretically secure cryptographic algorithms arose almost immediately after this discovery. Digital circuits typically consist of a large number of fundamental logic cells (such as MOS, Metal Oxide Semiconductor, transistors), built on a silicon substrate during fabrication. The circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably causes characteristic physical emanations that can be measured and correlated with the internal behaviour of the circuit.
SCA can be used to reveal confidential data (for example, cryptographic keys), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in embedded systems such as FPGAs, ASICs, or smart cards. By correlating the estimated leakage with the actually measured leakage, confidential information can be reconstructed with far less time and computation. To be precise, SCA covers a wide range of attack types, such as analyses of power consumption and electromagnetic (EM) radiation. Both rely on statistical analysis and therefore require numerous samples. Encryption algorithms are not intrinsically SCA-resistant, which is why measures that camouflage the leakage through "side channels" must be integrated during circuit implementation. Countermeasures against SCA evolve together with the development of new attack techniques and the continuous improvement of electronic devices. The physical nature of the leakage calls for countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic solutions. Extrinsic countermeasures confuse the attack source by integrating noise or misaligning the internal activity. Intrinsic countermeasures, by contrast, are built into the algorithm itself, modifying the implementation to minimize the measurable leakage or even to make it unmeasurable. Hiding and masking are two typical techniques in this category. Specifically, masking is applied at the algorithmic level to alter sensitive intermediate data with a mask in a reversible way. Unlike linear masking, the non-linear operations that are widespread in modern ciphers are difficult to mask. The hiding method, which has been verified as an effective solution, mainly comprises dual-rail coding, devised specifically to flatten or remove the data-dependent leakage in power or EM. In this doctoral thesis, besides describing the attack methodologies, great effort has been devoted to the structure of the proposed logic prototype, in order to carry out security-focused research on architectural countermeasures at the logic level. One characteristic of SCA lies in the format of the leakage sources. A typical side-channel attack is power-based analysis, where the fundamental capacitance of the MOS transistor and other parasitic capacitances are the essential leakage sources. A robust SCA-resistant logic must therefore eliminate or mitigate the leakage of these micro units, such as basic logic gates, I/O ports and routing. The EDA tools provided by vendors manipulate the logic from a higher level rather than from the gate level, where side-channel leakage manifests itself. Classic implementations therefore barely satisfy these needs and inevitably cripple the prototype. For all these reasons, a customized and flexible design scheme must be taken into account.
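The hiding principle behind dual-rail precharge coding can be illustrated with a toy model. The sketch below assumes a simple 4-bit bus and a Hamming-distance power model rather than any real cell library or the thesis' own logic style; it only shows why complementary rails plus a precharge phase make the switching activity, and hence the modeled leakage, independent of the processed data.

```python
# Toy illustration (assumed 4-bit bus, Hamming-distance power model; not the
# thesis' cells): in dual-rail precharge logic every bit b travels on a
# complementary wire pair (b, not b) and each evaluation is preceded by a
# precharge of both rails to 0, so exactly one rail per bit rises in every
# cycle regardless of the data value.

def transitions_single_rail(prev: int, new: int, bits: int = 4) -> int:
    """Bit flips on a plain single-rail bus: depends on the data."""
    return bin((prev ^ new) & ((1 << bits) - 1)).count("1")

def transitions_dual_rail(new: int, bits: int = 4) -> int:
    """Precharge (0,0) then evaluate (b, not b): one rising edge per bit,
    whatever the value, so the activity is constant."""
    return bits

if __name__ == "__main__":
    for prev, new in [(0b0000, 0b1111), (0b1010, 0b1011), (0b0110, 0b0110)]:
        print(f"{prev:04b}->{new:04b}: single-rail {transitions_single_rail(prev, new)} flips, "
              f"dual-rail {transitions_dual_rail(new)} rising edges")
```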
This thesis presents the design and implementation of an innovative logic to counter SCA, addressing three fundamental aspects: I. It is based on a hiding strategy over a gate-level dual-rail circuit to dynamically balance the leakage of the lower layers; II. The logic exploits the architectural features of FPGAs to minimize the resource cost of the implementation; III. It is supported by a set of custom assistant tools, incorporated into the generic FPGA design flow, in order to manipulate the circuits automatically. The automatic design toolkit supports the proposed dual-rail logic and facilitates its practical application on Xilinx FPGA families. The methodology and tools are flexible enough to be extended to a wide range of applications that require much more rigid and sophisticated gate-level or routing constraints. A great effort is made in this thesis to streamline the implementation and repair of generic dual-rail logic. The feasibility of the proposed solutions is validated by selecting widely used cryptographic algorithms and evaluating them exhaustively against previous solutions. All the proposals are effectively backed by experimental attacks that validate the security advantages of the system. This research work aims to close the gap between the implementation barriers and the effective application of dual-rail logic. In essence, this thesis describes a set of FPGA implementation tools developed to work together with the generic FPGA design flow in order to create the dual-rail logic in an innovative way. A new approach in the field of encryption security is proposed to obtain customization, automation and flexibility in low-level circuit prototyping with fine granularity. The main contributions of this research work are briefly summarized below: Precharge Absorbed-DPL (PA-DPL) logic: netlist conversion is used to reserve free LUTs to execute the Precharge and Ex signals in a DPL logic. Row-crossed interleaved placement with identical routing pairs in dual-rail networks, which helps to increase the resistance against selective EM measurement and to mitigate the impact of process variations. Custom execution and automatic conversion tools for generating identical networks for the proposed dual-rail logic: (a) to detect and repair conflicting connections; (b) to detect and repair asymmetric routes; (c) to be used in other logic styles where strict control of the interconnections is required in Xilinx-based applications. A custom CPA test platform for EM and power analysis, including the construction of the platform, the measurement method and the attack analysis. Timing analysis to quantify security levels. Security partitioning in the partial conversion of a complex cryptosystem to reduce the cost of protection.
Proof of concept of a self-adaptive heating system to dynamically mitigate the electrical impact of silicon process variation. This doctoral thesis is organized as follows: Chapter 1 covers the fundamentals of side-channel attacks, ranging from basic concepts and analysis models to the implementation of the platform and the execution of the attacks. Chapter 2 includes SCA-resistance strategies against differential power and EM attacks. In addition, this chapter proposes, as a major contribution, a compact and secure dual-rail logic, and presents the logic transformation based on a gate-level design. Chapter 3 addresses the challenges of implementing generic dual-rail logic. A custom design flow is described to solve the implementation problems, together with a proposed automatic development tool that mitigates the design barriers and facilitates the process. Chapter 4 describes in detail the development and implementation of the proposed tools. The security verification and validation of the proposed logic, as well as a sophisticated experiment verifying the security of the routing, are described in Chapter 5. Finally, a summary of the conclusions of the thesis and the perspectives for future work are included in Chapter 6. To go deeper into the content of the thesis, each chapter is described in more detail below: Chapter 1 introduces the hardware implementation platform and the basic theory of side-channel attacks, and mainly contains: (a) the generic architecture and features of the FPGA used, in particular the Xilinx Virtex-5; (b) the selected encryption algorithm (a commercial Advanced Encryption Standard (AES) module); (c) the essentials of side-channel methods, which reveal the dissipation leakage correlated with the internal behaviour, and the method for recovering the relationship between the physical fluctuations in the side-channel traces and the internally processed data; (d) the configurations of the power/EM test platforms covered within this thesis. The content of this thesis is broadened and deepened from Chapter 2 onwards, which addresses several key aspects. First, the protection principle of dynamic compensation in generic dual-rail precharge logic (DPL) is explained by describing the compensated gate-level elements. Second, the PA-DPL logic is proposed as an original contribution, detailing the logic protocol and an application case. Third, two custom design flows are shown for performing the dual-rail conversion. Along with this, the technical definitions related to manipulation above the LUT-level netlist are clarified. Finally, a brief discussion of the overall process closes the chapter. Chapter 3 studies the main challenges of implementing DPLs in FPGAs.
The security level of the SCA-resistance solutions found in the state of the art has degraded because of the implementation barriers imposed by conventional EDA tools. In the FPGA architecture studied, the problems of dual-rail formats, parasitic impact, technological bias and implementation feasibility are discussed. From these elaborations, two problems arise: how to implement the proposed logic without penalizing the security level, and how to manipulate a large number of cells and automate the process. The PA-DPL proposed in Chapter 2 is validated through a series of initiatives, from structural features such as interleaved dual rails or cloned routing networks to implementation methods such as the custom EDA automation tools. In addition, a self-adaptive heating system is presented and applied to a dual-core logic in order to alternately adjust the local temperature to balance the negative impact of process variation during real-time operation. Chapter 4 focuses on the implementation details of the toolkit. Developed on top of a third-party API, the custom toolkit can manipulate the logic elements of the post-P&R circuit, converted from ncd (an unreadable binary counterpart of xdl) to the Xilinx XDL format. The mechanism and rationale of the proposed toolkit are carefully described, covering routing detection and the repair approaches. The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, both for separate and for interleaved placement. This chapter also specifies the technical basis for supporting implementations on Xilinx devices and the flexibility to be used in other applications. Chapter 5 focuses on the case studies used to validate the security levels of the proposed logic. The detailed technical issues encountered during execution and some new implementation techniques are discussed. (a) The impact of placement when using the proposed toolkit is discussed; different implementation schemes, taking into account the global optimization of security and cost, are verified experimentally in order to find optimized placement and repair plans; (b) security validations are performed with correlation and timing-analysis methods; (c) an asymptotic tactic is applied to a BCDL-structured AES core to validate in a sophisticated way the impact of routing on security metrics; (d) preliminary results of the self-adaptive heating system against process variation are shown; (e) a practical application of the tools to a complete encryption design is introduced. Chapter 6 includes a general summary of the work presented in this doctoral thesis. Finally, a brief perspective on future work is given, which may extend the potential use of the contributions of this thesis beyond the domain of cryptography on FPGAs.
ABSTRACT This PhD thesis mainly concentrates on countermeasure techniques related to side-channel attacks (SCA), which have been the subject of academic research for some 17 years. The related research has seen remarkable growth in the past decades, while the design of solid and efficient protection still remains an open research topic in which more reliable initiatives are required for personal information privacy and enterprise and national data protection. The earliest documented usage of secret code can be traced back to around 1700 B.C., when hieroglyphs were inscribed in ancient Egypt. Information security has always been a serious concern in the transmission of diplomatic and military intelligence. With the rapid evolution of modern communication techniques, cryptographic solutions were first incorporated into electronic signals to ensure the confidentiality, integrity, availability, authenticity and non-repudiation of the contents transmitted over insecure cable or wireless channels. Given the limited computation power before the computer era, simple encryption tricks were practically sufficient to conceal information. However, algorithmic vulnerabilities could be exploited to recover the encoding rules with affordable effort. This fact motivated the development of modern cryptography, aimed at guarding information systems with complex and advanced algorithms. The appearance of computers greatly accelerated the invention of robust cryptography, which offers efficient resistance by relying on highly strengthened computing capabilities. Likewise, advanced cryptanalysis has in turn driven computing technologies. Nowadays, the information world has become a crypto world, with pervasive cryptographic solutions protecting every field. These approaches are strong because of the optimized merging of modern mathematical theory and effective hardware practice, allowing cryptographic schemes to be implemented on various platforms (microprocessors, ASICs, FPGAs, etc.). Security needs from industry are actually the major driving metrics in electronic design, aimed at promoting the construction of high-performance systems without sacrificing security. Yet a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that modern digital circuits are inherently vulnerable to an unconventional attack approach, later named the side-channel attack after its source of analysis. Critical suspicion of theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. Specifically, digital circuits typically consist of a great number of essential logic cells (such as MOS, Metal Oxide Semiconductor, transistors) built upon a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably results in characteristic physical emanations that can be measured and correlated with internal circuit behavior. SCAs can be used to reveal confidential data (e.g. the crypto key), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs or smart cards. By comparing the predicted leakage with the measured leakage, secrets can be reconstructed at much lower cost in time and computation.
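As an illustration of this comparison between predicted and measured leakage, the following hedged sketch runs a correlation power analysis (CPA) on a single key byte. The traces are synthetic (Hamming-weight leakage plus Gaussian noise) and a random byte permutation stands in for the public AES S-box; only the structure of the attack, correlating a keyed leakage hypothesis against the measurements for every guess, is the point.

```python
# Hedged CPA sketch on one key byte with synthetic traces; the substitution
# table, noise level and trace count are illustrative, not from the thesis.
import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)              # stands in for the public AES S-box

def hw(v):
    """Hamming weight of a byte."""
    return bin(int(v)).count("1")

secret_key = 0x3C
plaintexts = rng.integers(0, 256, size=2000)
# "measured" traces: Hamming weight of the S-box output plus Gaussian noise
traces = np.array([hw(SBOX[p ^ secret_key]) for p in plaintexts], dtype=float)
traces += rng.normal(0.0, 2.0, size=traces.size)

# CPA: the key guess whose predicted leakage correlates best with the traces wins
scores = np.empty(256)
for guess in range(256):
    predicted = np.array([hw(SBOX[p ^ guess]) for p in plaintexts], dtype=float)
    scores[guess] = abs(np.corrcoef(predicted, traces)[0, 1])

best = int(np.argmax(scores))
print(f"recovered key byte: {best:#04x} (true key byte: {secret_key:#04x})")
```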
To be precise, SCA encompasses a wide range of attack types, typically analyses of power consumption or electromagnetic (EM) radiation. Both rely on statistical analysis and hence require a large number of samples. Crypto algorithms are not intrinsically fortified with SCA resistance, so much attention has to be paid during implementation to integrating countermeasures that camouflage the leakage via "side channels". Countermeasures against SCA are evolving along with the development of attack techniques. The physical nature of the leakage requires countermeasures at the physical layer, which can be broadly classified into intrinsic and extrinsic approaches. Extrinsic countermeasures confuse the attacker by adding noise or misaligning the internal activity. Intrinsic countermeasures, by contrast, are built into the implementation itself, to minimize the measurable leakage or to make it insensitive to the processed data. Hiding and masking are two typical techniques in this category. Concretely, masking is applied at the algorithmic level to alter sensitive intermediate values with a mask in a reversible way. Unlike linear operations, the non-linear operations that are widespread in modern ciphers are difficult to mask. Hiding, which has proven to be an effective counter-solution, mainly refers to dual-rail logic, specially devised to flatten or remove the data-dependent leakage in power or EM signatures. In this thesis, apart from describing attack methodologies, effort has also been dedicated to the logic prototype, in order to mount extensive security investigations into logic-level countermeasures. A characteristic of SCA resides in the format of the leakage sources. The typical side-channel attack is power-based analysis, where the fundamental capacitance of MOS transistors and other parasitic capacitances are the essential leakage sources. Hence, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro units, such as basic logic gates, I/O ports and routing. The vendor-provided EDA tools manipulate the logic at a higher behavioral level, rather than at the lower gate level where side-channel leakage is generated. Classical implementations therefore barely satisfy these needs and inevitably cripple the prototype. In this case, a customized and flexible design scheme needs to be devised. This thesis profiles an innovative logic style to counter SCA, which addresses three major aspects: I. The proposed logic is based on a hiding strategy over a gate-level dual-rail style, to dynamically balance the side-channel leakage of the lower circuit layers; II. The logic exploits architectural features of modern FPGAs to minimize the implementation expense; III. It is supported by a set of custom assistant tools, incorporated into the generic FPGA design flow, to manipulate the circuits automatically. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical implementation on Xilinx FPGA families. The methodologies and tools are flexible enough to be extended to a wide range of applications where rigid and sophisticated gate- or routing-level constraints are desired. In this thesis a great effort is made to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated on selected and widely used crypto algorithms, with thorough and fair evaluation with respect to prior solutions.
All the proposals are effectively verified by security experiments. The presented research work attempts to remove the implementation obstacles. In essence, this thesis develops a customized execution toolkit for modern FPGAs that works together with the generic FPGA design flow to create the innovative dual-rail logic. A method in the crypto-security area is constructed to obtain customization, automation and flexibility in low-level circuit prototyping, with fine granularity in otherwise intractable routing. The main contributions of the presented work are summarized next: Precharge Absorbed-DPL (PA-DPL) logic: netlist conversion is used to reserve free LUT inputs to carry the Precharge and Ex signals in a dual-rail logic style. A row-crossed interleaved placement method with identical routing pairs in dual-rail networks, which helps to increase the resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair conflicting nets; (b) to detect and repair asymmetric nets; (c) to be used in other logic styles where strict network control is required in Xilinx scenarios. A customized correlation analysis testbed for EM and power attacks, including the platform construction, measurement method and attack analysis. A timing-analysis-based method for quantifying the security grades. A methodology of security partitioning for complex crypto systems to reduce the protection cost. A proof-of-concept self-adaptive heating system that mitigates the electrical impact of process variations in a dynamic dual-rail compensation manner. The thesis chapters are organized as follows: Chapter 1 discusses the side-channel attack fundamentals, covering theoretical basics, analysis models, platform setup and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks. In this chapter, a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates the implementation challenges of generic dual-rail styles. A customized design flow that solves the implementation problems is described, along with a self-developed automatic implementation toolkit, to mitigate the design barriers and facilitate the processes. Chapter 4 elaborates the tool specifics and construction details. The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and perspectives for future work is included in Chapter 6. To better exhibit the thesis contents, each chapter is further described next: Chapter 1 introduces the hardware implementation testbed and side-channel attack fundamentals, and mainly contains: (a) the generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) the selected crypto algorithm, a commercially and extensively used Advanced Encryption Standard (AES) module; (c) the essentials of side-channel methods.
These reveal the dissipation leakage correlated with the internal behavior, and the method to recover the relationship between the physical fluctuations in side-channel traces and the internally processed data; (d) the setups of the power/EM testing platforms used in the thesis work are given. The content of this thesis is expanded and deepened from Chapter 2 onwards, which covers several aspects. First, the protection principle of dynamic compensation in the generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed, detailing the logic protocol and an implementation case study. Third, two custom workflows for realizing the dual-rail conversion are shown. Meanwhile, the technical definitions involved in manipulating the LUT-level netlist are clarified. A brief discussion of the batched process is given in the final part. Chapter 3 studies the implementation challenges of DPLs in FPGAs. The security level of state-of-the-art SCA-resistant solutions is degraded by the implementation barriers of conventional EDA tools. In the studied FPGA scenario, the problems of dual-rail format, parasitic impact, technological bias and implementation feasibility are discussed. From these elaborations, two problems arise: how to implement the proposed logic without crippling the security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is realized with a series of initiatives, from structural features to implementation methods. Furthermore, a self-adaptive heating system is described and applied to a dual-core logic, intended to alternately adjust the local temperature to balance the negative impact of silicon technological bias in real time. Chapter 4 centers on the toolkit itself. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of the post-P&R circuit, converted from ncd (an unreadable binary counterpart of xdl) to the Xilinx XDL format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, for both separate and interleaved placement. This chapter particularly specifies the technical essentials to support the implementations on Xilinx devices and the flexibility to be extended to other applications. Chapter 5 focuses on the case studies used to validate the security grades of the logic style produced with the proposed toolkit. Comprehensive implementation techniques are discussed. (a) The placement impact when using the proposed toolkit is discussed; different execution schemes, considering the global optimization of security and cost, are verified with experiments in order to find the optimized placement and repair schemes; (b) security validations are carried out with correlation and timing methods; (c) a systematic method is applied to a BCDL-structured module to validate the routing impact on the security metric; (d) preliminary results using the self-adaptive heating system against process variation are given; (e) a practical application of the proposed toolkit to a large design is introduced. Chapter 6 includes the general summary of the complete work presented in this thesis.
Finally, a brief perspective on future work is drawn, which might expand the potential utilization of the thesis contributions to a wider range of implementation domains beyond cryptography on FPGAs.

Relevance:

80.00%

Publisher:

Abstract:

The aim of the novel experimental measures presented in this paper is to show the improvement achieved in computation time for a 2D self-adaptive hp finite element method (FEM) software accelerated through the Adaptive Cross Approximation (ACA) method. This algebraic method (ACA) was presented in a previous paper in the hp context for the analysis of open-region problems, where the robust behaviour, good accuracy and high compression levels of ACA were demonstrated. The truncation of the infinite domain is handled through an iterative computation of the Integral Equation (IE) over a fictitious boundary, which, despite its accuracy and efficiency, turns out to be the bottleneck of the code. It will be shown that in this context ACA drastically reduces the computational effort of the problem.
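The core idea of ACA, building a low-rank factorization of a kernel matrix from a few adaptively chosen rows and columns instead of assembling the full matrix, can be sketched as follows. The kernel, tolerance and pivoting rule below are illustrative and are not the implementation used in the paper.

```python
# Minimal Adaptive Cross Approximation sketch: A ~ U @ V from a few rows/columns.
import numpy as np

def aca(get_row, get_col, n_rows, n_cols, tol=1e-6, max_rank=60):
    """get_row(i) returns A[i, :], get_col(j) returns A[:, j]."""
    U, V = [], []            # U holds columns (length n_rows), V holds rows (length n_cols)
    i = 0                    # current pivot row
    for _ in range(max_rank):
        r = get_row(i) - sum(u[i] * v for u, v in zip(U, V))   # residual of row i
        j = int(np.argmax(np.abs(r)))                          # pivot column
        if abs(r[j]) < 1e-14:
            break
        c = (get_col(j) - sum(v[j] * u for u, v in zip(U, V))) / r[j]
        U.append(c)
        V.append(r)
        if np.linalg.norm(c) * np.linalg.norm(r) < tol:        # heuristic stopping test
            break
        i = int(np.argmax(np.abs(c)))                          # next pivot row
    return np.array(U).T, np.array(V)

# Smooth kernel between two well-separated point sets -> numerically low rank
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(2.0, 3.0, 200)
A = 1.0 / np.abs(x[:, None] - y[None, :])

U, V = aca(lambda i: A[i, :], lambda j: A[:, j], *A.shape)
err = np.linalg.norm(A - U @ V) / np.linalg.norm(A)
print(f"ACA rank: {U.shape[1]}, stored entries: {U.size + V.size} of {A.size}, rel. error: {err:.2e}")
```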

Relevance:

80.00%

Publisher:

Abstract:

The rise of the "Internet of Things" (IoT) and its associated technologies has enabled its application in many domains, including the monitoring of forest ecosystems, disaster and emergency management, home automation, industrial automation, smart-city services, energy efficiency in buildings, intrusion detection, and the monitoring of body signals, among many others. The drawback of an IoT network is that, once deployed, it remains unattended, i.e. it is subject, among other things, to changing weather conditions and is exposed to natural disasters, software or hardware failures, and malicious third-party attacks, so such networks can be considered failure-prone. The main requirement on the nodes of an IoT network is that they must be able to keep operating despite errors in the system itself. The ability of the network to recover from unexpected internal and external failures is what is currently known as the "resilience" of the network. Therefore, when designing and deploying IoT applications or services, the network is expected to be fault-tolerant, self-configuring, self-adaptive and self-optimizing with respect to new conditions that may appear during its operation. This leads to the analysis of a fundamental problem in the study of IoT networks: the "connectivity" problem. A network is said to be connected if every pair of nodes in the network can find at least one communication path between them. However, the network may become disconnected for several reasons, such as battery depletion or the destruction of a node. It is therefore necessary to manage the resilience of the network in order to maintain connectivity between its nodes, so that each IoT node can provide continuous services to other nodes, other networks, or other services and applications. In this context, the main objective of this doctoral thesis is the study of the IoT connectivity problem, more specifically the development of models for the analysis and management of resilience, put into practice through WSNs, in order to improve the fault tolerance of the nodes that make up the network. This challenge is addressed from two distinct angles: on the one hand, unlike other types of conventional device networks, nodes in an IoT network are prone to losing connectivity because they are deployed in isolated environments or in environments with extreme conditions; on the other hand, nodes are usually resource-constrained in terms of processing, storage and battery, among others, so the design of their resilience management must be lightweight, distributed and energy-efficient. Accordingly, this thesis develops self-adaptive techniques that allow an IoT network, from the perspective of topology control, to be resilient to failures in its nodes. To this end, techniques based on fuzzy logic and proportional-integral-derivative (PID) control are used to improve the connectivity of the network, bearing in mind that energy consumption must be preserved as much as possible.
Likewise, it has been taken into account that the control algorithm must be distributed, because centralized approaches are generally not feasible for large-scale deployments. This thesis involves several challenges concerning network connectivity, including: the creation and analysis of mathematical models describing the network, a proposal for a self-adaptive control system responding to node failures, the optimization of the control-system parameters, validation through an implementation following a software-engineering approach, and finally evaluation in a real application. Addressing these challenges, this work justifies, through mathematical analysis, the relationship between the "node degree" (defined as the number of nodes in the neighbourhood of a given node) and the connectivity of the network, and proves the effectiveness of several types of controllers that adjust the transmission power of the network nodes in response to eventual failures, taking energy consumption into account as part of the control objectives. This work also performs an evaluation and a comparison with other representative algorithms, showing that the developed approach tolerates random node failures better and is more energy-efficient. Additionally, the use of bio-inspired algorithms has allowed the optimization of the control parameters for large dynamic networks. With respect to the implementation in a real system, the proposals of this thesis have been integrated into an OSGi ("Open Services Gateway Initiative") programming model in order to create a self-adaptive middleware that improves resilience management, especially the runtime reconfiguration of software components when a failure has occurred. In conclusion, the results of this doctoral thesis contribute to the theoretical research on and the practical application of resilient topology control in large distributed networks. The designs and algorithms presented can be seen as novel trials of some techniques for the coming IoT era. The main contributions of this thesis are summarized below: (1) Properties related to network connectivity have been analysed mathematically. It is studied, for example, how the probability of network connectivity varies when the communication range of the nodes is modified, and what the minimum number of nodes is that must be added to a disconnected system to reconnect it. (2) Control systems based on fuzzy logic have been proposed to reach the desired node degree while maintaining full network connectivity. Different types of fuzzy-logic controllers have been evaluated by simulation, and the results have been compared with other representative algorithms. (3) The two-loop control system, a simpler and more applicable approach, has been investigated further, and its control parameters have been optimized using heuristic algorithms such as the Cross Entropy (CE) method, Particle Swarm Optimization (PSO), and Differential Evolution (DE).
(4) Most of the designs presented here have been evaluated by simulation; in addition, part of the work has been implemented and validated in a real application combining self-adaptive software techniques, such as those of a service-oriented architecture (SOA). ABSTRACT The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation, smart cities, etc. However, various kinds of unexpected disturbances may cause node failures in the IoT, for example battery depletion, software/hardware malfunctions and malicious attacks, so the IoT can be considered prone to failure. The ability of the network to recover from unexpected internal and external failures is known as the "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing the IoT, and can be further broken down into "self-*" properties, such as self-adaptation, self-healing, self-configuration, self-optimization, etc. One of the consequences that node failure brings to the IoT is that some nodes may become disconnected from others, so that they are no longer capable of providing continuous services to other nodes, networks, and applications. In this sense, the main objective of this dissertation is the IoT connectivity problem. A network is regarded as connected if any pair of different nodes can communicate with each other either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for the analysis and management of resilience, implemented through Wireless Sensor Networks (WSNs), which is a challenging task. On the one hand, unlike other conventional network devices, nodes in the IoT are more likely to be disconnected from each other due to their deployment in hostile or isolated environments. On the other hand, nodes are resource-constrained in terms of limited processing capability, storage and battery capacity, which requires the resilience management for the IoT to be lightweight, distributed and energy-efficient. In this context, the thesis presents self-adaptive techniques for the IoT, with the aim of making the IoT resilient against node failures from the network topology control point of view. Fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, while taking into consideration that energy consumption must be preserved as much as possible. The control algorithm itself is designed to be distributed, because centralized approaches are usually not feasible in large-scale IoT deployments. The thesis involves various aspects concerning network connectivity, including: creation and analysis of mathematical models describing the network, proposing self-adaptive control systems in response to node failures, control-system parameter optimization, implementation using a software-engineering approach, and evaluation in a real application. This thesis also justifies the relation between the "node degree" (the number of neighbours of a node) and network connectivity through mathematical analysis, and proves the effectiveness of various types of controllers that can adjust the power transmission of the IoT nodes in response to node failures.
The controllers also take into consideration the energy consumption as part of the control goals. The evaluation is performed and a comparison is made with other representative algorithms. The simulation results show that the proposals in this thesis can tolerate more random node failures and save more energy when compared with those representative algorithms. Additionally, the simulations demonstrate that the use of bio-inspired algorithms allows optimizing the parameters of the controller. With respect to the implementation in a real system, the programming model called OSGi (Open Service Gateway Initiative) is integrated with the proposals in order to create a self-adaptive middleware, especially for reconfiguring the software components at runtime when failures occur. The outcomes of this thesis contribute to theoretical research on and practical applications of resilient topology control for large and distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of control and optimization techniques for the coming era of the IoT. The contributions of this thesis can be summarized as follows: (1) Mathematically, the fault-tolerant probability of a large-scale stochastic network is analyzed. It is studied how the probability of network connectivity depends on the communication range of the nodes, and what the minimum number of neighbors to be added for network re-connection is. (2) A fuzzy-logic control system is proposed, which obtains the desired node degree and in turn maintains the network connectivity when the network is subject to node failures. Different types of fuzzy-logic controllers are evaluated by simulations, and the results demonstrate the improvement in fault-tolerant capability compared to some other representative algorithms. (3) A simpler but more applicable approach, the two-loop control system, is further investigated, and its control parameters are optimized by using heuristic algorithms such as Cross Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the proposals are implemented and tested in a real-world application by combining the self-adaptive software techniques and the control algorithms presented in this thesis.
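The degree-based topology control idea can be sketched with a minimal simulation: each node runs a small discrete PID loop that adjusts its own transmission range so that its node degree tracks a target value, and the surviving nodes compensate when others fail. The network model, controller gains and target degree below are illustrative placeholders, not the thesis' fuzzy or two-loop controllers.

```python
# Hedged sketch: per-node PID control of transmission range to reach a target degree.
import math
import random

random.seed(1)
N, AREA, TARGET_DEGREE = 60, 100.0, 6
KP, KI, KD = 0.6, 0.05, 0.1

nodes = [{"x": random.uniform(0, AREA), "y": random.uniform(0, AREA),
          "range": 10.0, "i": 0.0, "prev_e": 0.0} for _ in range(N)]

def degree(k):
    """Neighbours within node k's own transmission range (link symmetry ignored)."""
    nk = nodes[k]
    return sum(1 for j, nj in enumerate(nodes)
               if j != k and math.hypot(nk["x"] - nj["x"], nk["y"] - nj["y"]) <= nk["range"])

for step in range(40):
    if step and step % 10 == 0 and len(nodes) > 10:
        nodes.pop(random.randrange(len(nodes)))        # random node failure
    for k, nk in enumerate(nodes):
        e = TARGET_DEGREE - degree(k)                  # degree error
        nk["i"] += e                                   # integral term
        d = e - nk["prev_e"]                           # derivative term
        nk["prev_e"] = e
        u = KP * e + KI * nk["i"] + KD * d             # PID output
        nk["range"] = min(40.0, max(2.0, nk["range"] + u))

alive = len(nodes)
avg = sum(degree(k) for k in range(alive)) / alive
print(f"{alive} nodes alive, average degree {avg:.1f} (target {TARGET_DEGREE})")
```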

Relevance:

80.00%

Publisher:

Abstract:

Emotion is generally argued to be an influence on the behavior of living systems, largely concerning flexibility and adaptivity. The way in which living systems act in response to particular situations in the environment has revealed the decisive and crucial importance of this feature for the success of behavior, and this source of inspiration has influenced the way artificial systems are conceived. During the last decades, artificial systems have undergone such an evolution that more of them are integrated into our daily life every day. They have grown in complexity, with a consequent increase in the demand for systems that ensure resilience, robustness, availability, security or safety, among others; all of these are questions that raise quite fundamental challenges in control design. This thesis has been developed within the framework of the Autonomous Systems project, a.k.a. the ASys-Project. Its short-term objectives of immediate application focus on designing improved systems and bringing intelligence into control strategies. Beyond this, the long-term objectives underlying the ASys-Project concentrate on higher-order capabilities such as cognition, awareness and autonomy. This thesis is placed within the general fields of engineering and emotion science, and provides a theoretical foundation for engineering and designing computational emotion for artificial systems. The starting question that grounds this thesis addresses the problem of emotion-based autonomy, and how to feed systems back with valuable meaning has formed the general objective. Both the starting question and the general objective have underlain the study of emotion, its influence on system behavior, the key foundations that justify this feature in living systems, how emotion is integrated within normal operation, and how this entire problem of emotion can be explained in artificial systems. Assuming essential differences in structure, purpose and operation between living and artificial systems, the essential motivation has been to explore what emotion solves in nature and then to analyze analogies for man-made systems. This work provides a reference model in which a collection of entities, relationships, models, functions and informational artifacts interact to provide the system with non-explicit knowledge in the form of emotion-like relevances. This solution aims to provide a reference model under which to design solutions for emotional operation, related to the real needs of artificial systems. The proposal consists of a multi-purpose architecture that implements two broad modules in order to attend to: (a) the range of processes related to how the environment affects the system, and (b) the range of processes related to emotion-like perception and the higher levels of reasoning. This has required an intense and critical analysis of the state of the art around the most relevant theories of emotion and technical systems, in order to obtain the support required for the foundations that sustain each model. The problem has been interpreted and is described on the basis of AGSys, an agent assumed to have the minimum rationality needed to perform emotional assessment. AGSys is a conceptualization of a model-based cognitive agent that embodies an inner agent, ESys, responsible for performing the emotional operation inside AGSys.
The solution consists of multiple computational modules working in a federated way, aimed at forming a mutual feedback loop between AGSys and ESys. Throughout this solution, the environment and the effects that might influence the system are described as different problems. While AGSys operates as a common system within the external environment, ESys is designed to operate within a conceptualized inner environment, built on the basis of the relevances that might occur inside AGSys in its interaction with the external environment. This allows for high-quality, separate reasoning concerning the mission goals defined in AGSys and the emotional goals defined in ESys. In this way, a possible path is provided for high-level reasoning under the influence of goal congruence. The high-level reasoning model uses knowledge about the stability of the emotional goals, opening new directions in which mission goals might be assessed under the situational state of this stability. This high-level reasoning is grounded in the work of MEP, a model of emotion perception conceived as an analogy of a well-known theory in emotion science. The operation of this model is described in terms of a recursive process labeled the R-Loop, together with a system of emotional goals that are treated as individual agents. In this way, AGSys integrates knowledge concerning the relation between a perceived object and the effect that this perception induces on the situational state of the emotional goals. This knowledge enables a higher-order system of information that sustains high-level reasoning. The extent to which this reasoning might be taken further is only delineated and is left as future work. This thesis draws on a wide range of fields of knowledge, which can be structured around two main objectives: (a) the fields of psychology, cognitive science, neurology and the biological sciences, in order to gain understanding of the emotional phenomena, and (b) a large number of computer-science branches, such as Autonomic Computing (AC), self-adaptive software, self-X systems, Model Integrated Computing (MIC) or the models@runtime paradigm, among others, in order to obtain knowledge about tools for designing each part of the solution. The final approach has been built mainly on the basis of this acquired knowledge, and is described in terms of Artificial Intelligence, Model-Based Systems (MBS), and additional mathematical formalizations that provide precise understanding where required. This approach describes a reference model for feeding systems back with valuable meaning, allowing reasoning about (a) the relationship between the environment and the relevance of its effects on the system, and (b) dynamic evaluations of the inner situational state of the system as a result of those effects. This reasoning provides a framework of distinguishable states of AGSys, derived from its own circumstances, that can be regarded as artificial emotion.

Relevance:

80.00%

Publisher:

Abstract:

As an alternative to traditional evolutionary algorithms (EAs), population-based incremental learning (PBIL) maintains a probabilistic model of the best individual(s). Originally, PBIL was applied in binary search spaces. Recently, some work has been done to extend it to continuous spaces. In this paper, we review two such extensions of PBIL. An improved version of PBIL based on a Gaussian model is proposed that combines two main features: a new updating rule that takes into account all the individuals and their fitness values, and a self-adaptive learning-rate parameter. Furthermore, a new continuous PBIL employing a histogram probabilistic model is proposed. Some experimental results are presented that highlight the features of the new algorithms.
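A hedged sketch of the kind of continuous PBIL described above is given below: a Gaussian model whose mean is updated from all individuals weighted by fitness, with a learning rate that adapts itself depending on whether the best fitness improves. The test function, constants and adaptation rule are illustrative, not the paper's exact formulation.

```python
# Illustrative continuous PBIL with a Gaussian model and a self-adaptive learning rate.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # toy minimization problem
    return float(np.sum(x ** 2))

dim, pop_size, iters = 5, 30, 200
mean, sigma, lr = rng.uniform(-5, 5, dim), 2.0, 0.2
best_f = float("inf")

for _ in range(iters):
    pop = rng.normal(mean, sigma, size=(pop_size, dim))    # sample the model
    fit = np.array([sphere(x) for x in pop])
    w = (fit.max() - fit) + 1e-12                          # all individuals contribute, better ones more
    w /= w.sum()
    target = w @ pop                                       # fitness-weighted mean of the population
    mean = (1 - lr) * mean + lr * target                   # PBIL-style model update
    # self-adaptive learning rate: grow on improvement, shrink otherwise
    if fit.min() < best_f:
        best_f, lr = float(fit.min()), min(0.9, lr * 1.1)
    else:
        lr = max(0.05, lr * 0.9)
    sigma = max(0.05, 0.95 * sigma)                        # slowly narrow the model

print(f"best fitness: {best_f:.4f}, final mean: {np.round(mean, 3)}")
```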

Relevance:

80.00%

Publisher:

Abstract:

A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task selection in which cities produce and store batches of different mail types. Agents must collect and process the mail batches, without a priori knowledge of the mail available at the cities and without inter-agent communication. In order to process a different mail type than the previous one, an agent must undergo a change-over, during which it remains inactive. We propose a threshold-based algorithm in order to maximise the overall efficiency (the average amount of mail collected). We show that memory, i.e. the possibility for agents to develop preferences for certain cities, not only leads to emergent cooperation between agents, but also to a significant increase in efficiency (above the theoretical upper limit for any memoryless algorithm), and we systematically investigate the influence of the various model parameters. Finally, we demonstrate the flexibility of the algorithm to changes in circumstances, and its excellent scalability.
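The threshold-based selection rule with memory can be sketched roughly as follows: an agent keeps a response threshold per mail type and a preference weight per city, and it switches type, paying the change-over, only when the stimulus of another type overcomes its threshold. All values and update rules below are illustrative; the paper's model is not reproduced here.

```python
# Illustrative threshold-based task selection with per-city memory (not the paper's model).
import random

random.seed(0)
MAIL_TYPES = ["A", "B", "C"]

class Agent:
    def __init__(self, n_cities):
        self.thresholds = {t: 5.0 for t in MAIL_TYPES}   # response threshold per mail type
        self.pref = [1.0] * n_cities                     # memory: preference per city
        self.current = "A"

    def choose(self, city, waiting):
        """waiting: dict mapping mail type -> amount waiting at this city."""
        best, best_s = self.current, 0.0
        for t, amount in waiting.items():
            stimulus = amount * self.pref[city]
            if t != self.current:
                stimulus -= self.thresholds[t]           # switching must overcome the threshold
            if stimulus > best_s:
                best, best_s = t, stimulus
        if best != self.current:                         # change-over: reinforce the new type
            self.thresholds[best] = max(1.0, self.thresholds[best] - 1.0)
            self.current = best
        if best_s > 0:
            self.pref[city] = min(3.0, self.pref[city] + 0.1)   # remember productive cities
        return self.current

agent = Agent(n_cities=4)
for step in range(5):
    city = random.randrange(4)
    waiting = {t: random.uniform(0, 10) for t in MAIL_TYPES}
    print(step, city, {t: round(a, 1) for t, a in waiting.items()}, "->", agent.choose(city, waiting))
```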

Relevance:

80.00%

Publisher:

Abstract:

In earlier work we proposed the idea of requirements-aware systems that could introspect about the extent to which their goals were being satisfied at runtime. When combined with requirements monitoring and self-adaptive capabilities, requirements awareness should help optimize goal satisfaction even in the presence of changing runtime context. In this paper we describe initial progress towards the realization of requirements-aware systems with REAssuRE. REAssuRE focuses on the explicit representation of assumptions made at design time. When such assumptions are shown not to hold, REAssuRE can trigger system adaptations to alternative goal-realization strategies.

Relevance:

80.00%

Publisher:

Abstract:

Computational reflection is a well-established technique that gives a program the ability to dynamically observe and possibly modify its behaviour. To date, however, reflection is mainly applied either to the software architecture or to its implementation. We know of no approach that fully supports requirements reflection, that is, making requirements available as runtime objects. Although there is a body of literature on requirements monitoring, such work typically generates runtime artefacts from requirements, and so the requirements themselves are not directly accessible at runtime. In this paper, we define requirements reflection and a set of research challenges. Requirements reflection is important because software systems of the future will be self-managing and will need to adapt continuously to changing environmental conditions. We argue that requirements reflection can support such self-adaptive systems by making requirements first-class runtime entities, thus endowing software systems with the ability to reason about, understand, explain and modify requirements at runtime. © 2010 ACM.
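A small illustrative sketch (not taken from the paper) of what requirements as first-class runtime entities can look like: a requirement object that the running system can evaluate, query and explain at runtime, instead of burying the requirement inside hard-coded monitoring logic.

```python
# Hypothetical example of a requirement represented as a runtime object.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Requirement:
    name: str
    predicate: Callable[[Dict[str, float]], bool]   # how satisfaction is checked
    rationale: str                                   # why the requirement exists
    history: List[bool] = field(default_factory=list)

    def evaluate(self, runtime_state: Dict[str, float]) -> bool:
        ok = self.predicate(runtime_state)
        self.history.append(ok)
        return ok

    def explain(self) -> str:
        rate = sum(self.history) / max(1, len(self.history))
        return f"{self.name}: '{self.rationale}' satisfied in {rate:.0%} of checks"

latency_req = Requirement(
    name="R1-low-latency",
    predicate=lambda s: s["p95_latency_ms"] < 200,
    rationale="95th percentile response time stays below 200 ms",
)

for state in ({"p95_latency_ms": 150}, {"p95_latency_ms": 420}):
    if not latency_req.evaluate(state):
        print("requirement violated, consider adapting ->", latency_req.explain())
```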

Relevance:

80.00%

Publisher:

Abstract:

Service-based systems are applications built by composing pre-existing services. During design time, and according to the specifications, a set of services is selected. Both service providers and consumers exist in a service market that is constantly changing. Service providers continuously change their quality of service (QoS), and service consumers can update their specifications according to what the market is offering. Therefore, during runtime, the services are periodically and manually checked to verify whether they still satisfy the specifications. Unfortunately, humans are overwhelmed by the degree of change exhibited by the service market. Consequently, verifying compliance with the specification and executing the corresponding adaptations when deviations are detected cannot be carried out manually. In this work, we propose a framework to enable online awareness of changes in the service market for both consumers and providers, by representing them as active software agents. At runtime, consumer agents concretize QoS specifications according to the available market knowledge. Service agents are collectively aware of themselves and of the consumers' requests. Moreover, they can create and maintain virtual organizations to react actively to demands that come from the market. In this paper we show preliminary results that allow us to conclude that the creation and adaptation of service-based systems can be carried out by a self-organized service-market system. © 2012 IEEE.

Relevance:

80.00%

Publisher:

Abstract:

The Models@run.time (MRT) workshop series offers a discussion forum for the rising need to leverage modeling techniques for the software of the future. The main goals are to explore the benefits of models@run.time and to foster collaboration and cross-fertilization between different research communities, for example the model-driven engineering community (e.g. MODELS), the self-adaptive/autonomous systems communities (e.g., SEAMS and ICAC), the control theory community and the artificial intelligence community. © 2012 Authors.

Relevance:

80.00%

Publisher:

Abstract:

In this paper we study the self-organising behaviour of smart camera networks which use market-based handover of object tracking responsibilities to achieve an efficient allocation of objects to cameras. Specifically, we compare previously known homogeneous configurations, when all cameras use the same marketing strategy, with heterogeneous configurations, when each camera makes use of its own, possibly different marketing strategy. Our first contribution is to establish that such heterogeneity of marketing strategies can lead to system wide outcomes which are Pareto superior when compared to those possible in homogeneous configurations. However, since the particular configuration required to lead to Pareto efficiency in a given scenario will not be known in advance, our second contribution is to show how online learning of marketing strategies at the individual camera level can lead to high performing heterogeneous configurations from the system point of view, extending the Pareto front when compared to the homogeneous case. Our third contribution is to show that in many cases, the dynamic behaviour resulting from online learning leads to global outcomes which extend the Pareto front even when compared to static heterogeneous configurations. Our evaluation considers results obtained from an open source simulation package as well as data from a network of real cameras. © 2013 IEEE.
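A rough sketch of the market-based handover idea follows: the camera currently tracking an object auctions it, other cameras bid their expected tracking utility minus a strategy-dependent handover cost, and the object is transferred when a bid beats the owner's own utility. The utilities and strategies below are invented for the example and are not the paper's mechanism.

```python
# Illustrative market-based handover of tracking responsibility between cameras.
class Camera:
    def __init__(self, name, position, handover_cost):
        self.name = name
        self.position = position
        self.handover_cost = handover_cost   # part of this camera's "marketing strategy"

    def utility(self, obj_pos):
        # stand-in for tracking confidence: highest when the object is close
        return max(0.0, 1.0 - abs(obj_pos - self.position) / 10.0)

    def bid(self, obj_pos):
        return self.utility(obj_pos) - self.handover_cost

def auction(owner, others, obj_pos):
    """The owner sells the tracking task if some bid beats its own utility."""
    challenger = max(others, key=lambda c: c.bid(obj_pos))
    return challenger if challenger.bid(obj_pos) > owner.utility(obj_pos) else owner

cams = [Camera("cam_a", 1.0, 0.05), Camera("cam_b", 5.0, 0.20), Camera("cam_c", 9.0, 0.10)]
owner = cams[0]
for obj_pos in (1.0, 4.5, 8.0):                      # object moving across the scene
    owner = auction(owner, [c for c in cams if c is not owner], obj_pos)
    print(f"object at {obj_pos}: tracked by {owner.name}")
```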

Relevance:

80.00%

Publisher:

Abstract:

When designing a practical swarm robotics system, self-organized task allocation is key to making the best use of resources. Current research in this area focuses on task allocation which is either distributed (tasks must be performed at different locations) or sequential (tasks are complex and must be split into simpler sub-tasks and processed in order). In practice, however, swarms will need to deal with tasks which are both distributed and sequential. In this paper, a classic foraging problem is extended to incorporate both distributed and sequential tasks. The problem is analysed theoretically, absolute limits on performance are derived, and a set of conditions for a successful algorithm is established. It is shown empirically that an algorithm which meets these conditions, by causing emergent cooperation between robots, can achieve consistently high performance under a wide range of settings without the need for communication. © 2013 IEEE.

Relevance:

80.00%

Publisher:

Abstract:

In the specific area of software engineering (SE) for self-adaptive systems (SASs) there is growing research awareness of the synergy between SE and artificial intelligence (AI). However, just a few significant results have been published so far. In this paper, we propose a novel and formal Bayesian definition of surprise as the basis for quantitative analysis to measure degrees of uncertainty and deviations of self-adaptive systems from normal behavior. A surprise measures how observed data affect the models or assumptions of the world during runtime. The key idea is that a "surprising" event can be defined as one that causes a large divergence between the belief distributions prior to and posterior to the event occurring. In such a case the system may decide either to adapt accordingly or to flag that an abnormal situation is happening. In this paper, we discuss possible applications of the Bayesian theory of surprise to the case of self-adaptive systems using Bayesian dynamic decision networks. Copyright © 2014 ACM.
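The notion of surprise as a divergence between prior and posterior beliefs can be made concrete with a small numerical sketch. Below, the belief is a discrete distribution over the failure rate of a monitored component, updated by Bayes' rule after each observation; the KL divergence between posterior and prior is the surprise, and the adaptation threshold is purely illustrative (the paper's dynamic decision networks are not modeled here).

```python
# Illustrative Bayesian surprise: KL(posterior || prior) after each observation.
import numpy as np

rates = np.linspace(0.01, 0.99, 99)                  # candidate failure probabilities
belief = np.full(rates.size, 1.0 / rates.size)       # uniform prior

def update(belief, failed: bool):
    like = rates if failed else (1.0 - rates)        # Bernoulli likelihood
    post = belief * like
    return post / post.sum()

def surprise(prior, post):
    return float(np.sum(post * np.log(post / prior)))   # KL divergence, in nats

SURPRISE_THRESHOLD = 0.3
for obs in [False, False, False, False, True, True]:
    post = update(belief, obs)
    s = surprise(belief, post)
    action = "adapt/flag abnormal situation" if s > SURPRISE_THRESHOLD else "keep going"
    print(f"observed failure={obs}: surprise {s:.3f} nats -> {action}")
    belief = post
```

Runs of expected observations barely move the belief and yield low surprise; the first failure after a run of successes produces a large prior-to-posterior divergence and crosses the threshold.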

Relevance:

80.00%

Publisher:

Abstract:

Traditionally, research on model-driven engineering (MDE) has mainly focused on the use of models at the design, implementation, and verification stages of development. This work has produced relatively mature techniques and tools that are currently being used in industry and academia. However, software models also have the potential to be used at runtime, to monitor and verify particular aspects of runtime behavior, and to implement self-* capabilities (e.g., adaptation technologies used in self-healing, self-managing, self-optimizing systems). A key benefit of using models at runtime is that they can provide a richer semantic base for runtime decision-making related to runtime system concerns associated with autonomic and adaptive systems. This book is one of the outcomes of the Dagstuhl Seminar 11481 on models@run.time held in November/December 2011, discussing foundations, techniques, mechanisms, state of the art, research challenges, and applications for the use of runtime models. The book comprises four research roadmaps, written by the original participants of the Dagstuhl Seminar over the course of two years following the seminar, and seven research papers from experts in the area. The roadmap papers provide insights to key features of the use of runtime models and identify the following research challenges: the need for a reference architecture, uncertainty tackled by runtime models, mechanisms for leveraging runtime models for self-adaptive software, and the use of models at runtime to address assurance for self-adaptive systems.

Relevance:

80.00%

Publisher:

Abstract:

We identify two different forms of diversity present in engineered collective systems, namely heterogeneity (genotypic/phenotypic diversity) and dynamics (temporal diversity). Three qualitatively different case studies are analysed, and it is shown that both forms of diversity can be beneficial in very different problem and application domains. Behavioural diversity is shown to be motivated by input diversity and this observation is used to present recommendations for designers of collective systems.