912 results for design space exploration
Abstract:
SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on Static RAM (SRAM) configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their connection). Also, as opposed to other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which allows the FPGA to be partially reconfigured without disrupting the application. Finally, they feature a high logic density, high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in harsh radiation environments with high dependability requirements. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can imply a system failure, which may be more or less tolerated depending on the dependability requirements. In the general case, such faults must be managed in some way. This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications. The main differences between them are the radiation level and the possibility for maintenance.
The different techniques for A-Layer and C-Layer fault management are classified and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed. It manages both layers concurrently in a coordinated way, and allows balancing redundancy level and dependability. For the purpose of validating dynamic fault management techniques, two different solutions are developed. The first one is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology allow exploring the Fault Manager design space, decoupling its design from the target FPGA development. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at configuration frame level and at JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault injection capabilities. Simulation results for some scenarios are presented and discussed. The second one is a validation platform for Xilinx Virtex FPGA Fault Managers. The platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. Similarly to the simulation framework, fault injection capabilities are implemented. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process from describing the problem of radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault management techniques and proposing Fault Manager architectures, to finally validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
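As a concrete illustration of the C-Layer fault model described above, here is a minimal Python sketch (not the thesis's SystemC framework; the memory geometry and names are hypothetical): it models the configuration memory as frames, injects single-event upsets, and runs a golden-copy readback-and-repair (scrubbing) Fault Manager, one classic C-Layer technique such a classification covers.

```python
import random

FRAMES, WORDS = 64, 41          # toy configuration memory geometry (hypothetical sizes)

class CLayer:
    """Toy model of an SRAM FPGA configuration layer."""
    def __init__(self):
        self.golden = [[random.getrandbits(32) for _ in range(WORDS)] for _ in range(FRAMES)]
        self.mem = [row[:] for row in self.golden]

    def inject_seu(self):
        """Flip one random configuration bit (a single-event upset)."""
        f, w, b = random.randrange(FRAMES), random.randrange(WORDS), random.randrange(32)
        self.mem[f][w] ^= 1 << b

class ScrubbingFaultManager:
    """Periodically reads back each frame and rewrites it if corrupted."""
    def __init__(self, clayer):
        self.c = clayer

    def scrub_pass(self):
        repaired = 0
        for f in range(FRAMES):
            if self.c.mem[f] != self.c.golden[f]:     # readback + compare
                self.c.mem[f] = self.c.golden[f][:]   # frame rewrite
                repaired += 1
        return repaired

c = CLayer()
fm = ScrubbingFaultManager(c)
for cycle in range(5):
    if random.random() < 0.8:
        c.inject_seu()
    print(f"cycle {cycle}: repaired {fm.scrub_pass()} frame(s)")
```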
Abstract:
This PhD Thesis is framed within the field of dynamically reconfigurable embedded systems, advanced sensor networks and distributed computing. The document is centred on the study of processing solutions for high-performance autonomous distributed systems (HPADS) as well as their evolution towards High-Performance Computing (HPC) systems. The approach of the study is focused on both platform and processor levels to optimise critical aspects such as computing performance, energy efficiency and fault tolerance. HPADS are considered feedback systems, normally networked and/or distributed, with real-time adaptive and predictive functionality. These systems, as part of more complex systems known as Cyber-Physical Systems (CPSs), can be applied in a wide range of fields such as military, health care, manufacturing, aerospace, etc. For the design of HPADS, high levels of dependability, the definition of suitable models of computation, and the use of methodologies and tools to support scalability and complexity management are required.
The first part of the document studies the different possibilities at platform design level in the state of the art, together with the description, development and validation tests of the platform proposed in this work to cope with the previously mentioned requirements. The main objectives targeted by this platform design are the following: • Study the feasibility of using SRAM-based FPGAs as the main processor of the platform in terms of energy consumption and performance for highly demanding applications. • Analyse and propose energy management techniques to reduce energy consumption in every stage of the working profile of the platform. • Provide a solution with dynamic partial and wireless remote HW reconfiguration (DPR) to be able to change certain parts of the FPGA design at run time and on demand without interrupting the rest of the system. • Demonstrate the applicability of the platform in different test-bench applications. In order to select the best approach for the platform design in terms of processing alternatives, a study of the evolution of state-of-the-art platforms is required to analyse how different architectures cope with new, more demanding applications and scenarios: security, mixed-critical systems for aerospace, multimedia applications, or military environments, among others. In all these scenarios, important changes in the required processing bandwidth or in the complexity of the algorithms used are driving the migration of platforms from single-microprocessor architectures to multiprocessing and heterogeneous solutions with higher instantaneous power consumption but higher energy efficiency. Among these solutions, FPGAs and Systems on Chip including FPGA fabric and dedicated hard processors offer a good trade-off among flexibility, processing performance, energy consumption and price when they are used in demanding applications where working conditions are very likely to vary over time and highly complex algorithms are required. The platform architecture proposed in this PhD Thesis is called HiReCookie. It includes an SRAM-based FPGA as the main and only processing unit. The FPGA selected, the Xilinx Spartan-6 LX150, was at the beginning of this work the best choice in terms of amount of resources and power. Although its power levels are the lowest of this kind of device, they can still be very high for distributed systems that normally work powered by batteries. For that reason, it is necessary to include different energy saving possibilities to increase the usability of the platform. In order to reduce energy consumption, the platform architecture is divided into different power islands so that only those parts of the system that are strictly needed are powered on, while the rest of the islands can be completely switched off. This allows a combination of different low-power modes to decrease energy consumption. In addition, one of the most important handicaps of SRAM-based FPGAs is that they are not alive at power-up. Therefore, recovering the system from a switched-off state requires reloading the FPGA configuration from a non-volatile memory device. For that reason, this PhD Thesis also proposes a methodology to compress the FPGA configuration file in order to reduce time and energy during the initial configuration process.
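The abstract does not give the compression method; as a hedged illustration of why compressing a configuration file shortens configuration time, here is a naive byte-level run-length encoding sketch in Python (illustrative only, not the thesis's technique), which exploits the long zero runs typical of sparse bitstreams.

```python
def rle_compress(bitstream: bytes) -> bytes:
    """Naive byte-level run-length encoding: (count, value) pairs.
    Sparse FPGA bitstreams contain long 0x00 runs, which compress well."""
    out = bytearray()
    i = 0
    while i < len(bitstream):
        run, val = 1, bitstream[i]
        while i + run < len(bitstream) and run < 255 and bitstream[i + run] == val:
            run += 1
        out += bytes((run, val))
        i += run
    return bytes(out)

raw = bytes(200) + b"\x55\xaa" + bytes(300)   # toy bitstream: mostly zeros
packed = rle_compress(raw)
print(len(raw), "->", len(packed), "bytes")   # 502 -> 10 bytes
```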
Although some of the requirements for the design of HPADS are already covered by the design of the HiReCookie platform, it is necessary to continue improving energy efficiency, computing performance and fault tolerance. This is only possible by exploiting all the opportunities provided by the processing architectures configured inside the FPGA. Therefore, the second part of the thesis details the design of the so-called ARTICo3 FPGA architecture to enhance the already intrinsic capabilities of the FPGA. ARTICo3 is a DPR-capable bus-based virtual architecture for multiple HW acceleration in SRAM-based FPGAs. The architecture provides support for dynamic resource management in real time. In this way, by using DPR, it is possible to change the levels of computing performance, energy consumption and fault tolerance on demand by increasing or decreasing the amount of resources used by the different tasks. Apart from the detailed design of the architecture and its implementation in different FPGA devices, different validation tests and comparisons are also shown. The main objectives targeted by this FPGA architecture are listed as follows: • Provide a method based on a multithread approach, such as the kernel executions offered by CUDA (Compute Unified Device Architecture) or OpenCL, where kernels are executed in a variable number of HW accelerators without requiring application code changes. • Provide an architecture to dynamically adapt working points according to either self-measured or external parameters in terms of energy consumption, fault tolerance and computing performance. Taking advantage of DPR capabilities, the architecture must provide support for a dynamic use of resources in real time. • Exploit concurrent processing capabilities in a standard bus-based system by optimizing data transactions to and from HW accelerators. • Measure the advantage of HW acceleration as a technique to boost performance, improve processing times and save energy by reducing active times for distributed embedded systems. • Dynamically change the levels of HW redundancy to adapt fault tolerance in real time. • Provide HW abstraction from SW application design. FPGAs give the possibility of designing specific HW blocks for every required task to optimise performance, while some of them also include DPR. Apart from the possibilities provided by manufacturers, the way these HW modules are organised, addressed and multiplexed in area and time can improve computing performance and energy consumption. At the same time, fault tolerance and security techniques can also be dynamically included using DPR. However, the inherent complexity of designing new HW modules for every application is not negligible. It consists not only of the HW description, but also of the design of drivers and interfaces with the rest of the system, while the design space becomes wider and more complex to define and program. Even though the tools provided by the majority of manufacturers already include predefined bus interfaces, commercial IPs, and templates to ease application prototyping, it is necessary to improve these capabilities. By adding new architectures on top of them, it is possible to take advantage of parallelization and HW redundancy while providing a framework to ease the use of dynamic resource management. ARTICo3 works within a solution space where working points change at run time in a 3D space defined by three different axes: Computation, Consumption, and Fault Tolerance. Therefore, every working point is found as a trade-off solution among these three axes. By means of DPR, different accelerators can be multiplexed so that the amount of available resources for any application is virtually unlimited.
Taking advantage of DPR capabilities and a novel way of transmitting data to the reconfigurable HW accelerators, it is possible to dedicate a dynamically changing number of resources to a given task in order to either boost computing speed or add HW redundancy and a voting process to increase fault-tolerance levels. At the same time, using an optimised amount of resources for a given task reduces energy consumption by reducing instantaneous power or computing time. In order to keep complexity levels under certain limits, it is important that HW changes are transparent to the application code. Therefore, different levels of transparency are targeted by the system: • Scalability transparency: a task must be able to expand its resources without changing the system structure or application algorithms. • Performance transparency: the system must reconfigure itself as load changes. • Replication transparency: multiple instances of the same task are loaded to increase reliability and performance. • Location transparency: resources are accessed with no knowledge of their location by the application code. • Failure transparency: a task must be completed despite a failure in some components. • Concurrency transparency: different tasks will work concurrently in a way transparent to the application code. Therefore, as can be seen, the Thesis contributes in two different ways: first, with the design of the HiReCookie platform and, second, with the design of the ARTICo3 architecture. The main contributions of this PhD Thesis are listed below: • Architecture of the HiReCookie platform including: o Compatibility of the processing layer for high-performance applications with the Cookies Wireless Sensor Network platform for fast prototyping and implementation. o A division of the architecture into power islands. o All the different low-power modes. o The creation of the partial-initial bitstream together with the wake-up policies of the node. • The design of the reconfigurable architecture for SRAM FPGAs, ARTICo3: o A model of computation and execution modes inspired by CUDA but based on reconfigurable HW with a dynamic number of thread blocks per kernel. o A structure to optimise burst data transactions providing coalesced or parallel data to HW accelerators, a parallel voting process and a reduction operation. o The abstraction provided to the host processor with respect to the operation of the kernels in terms of the number of replicas, modes of operation, location in the reconfigurable area and addressing. o The architecture of the modules representing the thread blocks, making the system scalable by adding functional units with only an access to a BRAM port. o The online characterization of the kernels to provide information to a scheduler or resource manager in terms of energy consumption and processing time when changing among different fault-tolerance levels, as well as whether a kernel is expected to work in the memory-bounded or computing-bounded regions. The document of the Thesis is divided into two main parts with a total of five chapters. First, after motivating the need for new platforms to cover new, more demanding applications, the design of the HiReCookie platform, its parts and several partial tests are detailed. The design of the platform alone does not cover all the needs of these applications. Therefore, the second part describes the architecture inside the FPGA, called ARTICo3, proposed in this PhD Thesis.
The architecture and its implementation are tested in terms of energy consumption and computing performance, showing different possibilities to improve fault tolerance and how these impact energy and processing time. Chapter 1 shows the main goals of this PhD Thesis and the technology background required to follow the rest of the document. Chapter 2 shows all the details about the design of the FPGA-based platform HiReCookie. Chapter 3 describes the ARTICo3 architecture. Chapter 4 is focused on the validation tests of the ARTICo3 architecture. An application for proof of concept is explained where typical kernels related to image processing and encryption algorithms are used. Further experimental analyses are performed using these kernels. Chapter 5 concludes the document, analysing the conclusions obtained, the contributions of the work, and possible future lines of work.
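To make the kernel/replica idea concrete, here is a hedged Python sketch of an ARTICo3-style runtime (all names hypothetical, not the actual API): the application invokes a kernel once, while the runtime decides at run time how many accelerator replicas execute it and majority-votes their results, so the same call can trade parallelism for redundancy.

```python
from collections import Counter

class Artico3LikeRuntime:
    """Sketch of a runtime that multiplexes kernel invocations onto a
    run-time-chosen number of 'accelerator' replicas with majority voting."""
    def __init__(self, replicas=1):
        self.replicas = replicas          # tunable: parallelism vs redundancy

    def execute(self, kernel, data, faulty=()):
        results = []
        for slot in range(self.replicas):
            r = kernel(data)
            if slot in faulty:            # emulate a radiation-induced wrong result
                r ^= 0x1
            results.append(r)
        value, _ = Counter(results).most_common(1)[0]   # majority vote
        return value

def checksum_kernel(data):
    return sum(data) & 0xFFFFFFFF

rt = Artico3LikeRuntime(replicas=3)       # TMR mode; application code unchanged
print(rt.execute(checksum_kernel, [1, 2, 3], faulty={1}))   # faulty replica outvoted
```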
Abstract:
The incorporation of elements of the student's primary culture into the teaching-learning process was advocated by the French pedagogue Georges Snyders (1988) in his work "A Alegria na Escola" ("Joy in School"). This research contributed to that interface by identifying, in the discourse of rock 'n' roll songs, textual elements that enable conceptual, epistemological and sociopolitical reflections on space exploration. The objects of study in this work are songs from the 1960s and 1970s that carry representations of astronomy and the space missions. The use of rock is justified by the fact that themes of space exploration appear in the work of several artists of this musical genre, allowing reflections at the conceptual, epistemological and sociopolitical levels on science, technology and their relations with society and the environment. In addition, we identified that both rock and the space missions were cultural phenomena that depended, in their genesis, on advances in technology and science, and that both reverberated in society through media processes. These songs were selected from among the various genres of rock and analysed using semiodiscursive frameworks. The activities were applied in formal teaching situations (secondary and higher education), in continuing teacher education, and in non-formal teaching projects at school. In the teaching-learning process, activities were developed involving a commented reading of each song, identifying in the lyrics, melody and harmony aspects that evidenced a critical discourse on science and its relation to society and the environment. These activities involved three stages: Elaboration, Application and Analysis. As the guiding framework for these stages, we drew on the sociocultural theories of Vygotsky (2001), Snyders (1988) and Freire (2013).
Abstract:
Commercial off-the-shelf microprocessors are the core of low-cost embedded systems due to their programmability and cost-effectiveness. Recent advances in electronic technologies have allowed remarkable improvements in their performance. However, they have also made microprocessors more susceptible to transient faults induced by radiation. These non-destructive events (soft errors) may cause a microprocessor to produce a wrong computation result or lose control of a system, with catastrophic consequences. Therefore, soft error mitigation has become a compulsory requirement for an increasing number of applications, which operate anywhere from space down to ground level. In this context, this paper uses the concept of selective hardening, which aims at designing reduced-overhead, flexible mitigation techniques. Following this concept, a novel flexible version of the software-based fault recovery technique known as SWIFT-R is proposed. Our approach makes it possible to select different register subsets from the microprocessor register file to be protected in software. Thus, the design space is enriched with a wide spectrum of new partially protected versions, which offer more flexibility to designers. This permits finding the best trade-offs between performance, code size, and fault coverage. Three case studies have been developed to show the applicability and flexibility of the proposal.
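The core SWIFT-R idea, triplicating selected registers in software and majority-voting before use, can be sketched as follows (a hedged toy model in Python, not the paper's actual compiler transformation; register names and helper functions are hypothetical).

```python
def vote(a, b, c):
    """Bitwise majority of three copies: corrects any single corrupted copy."""
    return (a & b) | (a & c) | (b & c)

PROTECTED = {"r1"}        # selective hardening: only this register subset is triplicated

def protected_add(regs, dst, src1, src2):
    """Toy SWIFT-R-style transformation of 'dst = src1 + src2' (illustrative only)."""
    def read(r):
        if r in PROTECTED:
            a, b, c = regs[r]           # three software copies
            return vote(a, b, c)        # recover value before each use
        return regs[r]
    result = read(src1) + read(src2)
    regs[dst] = (result,) * 3 if dst in PROTECTED else result

regs = {"r1": (10, 10, 10), "r2": 32}
regs["r1"] = (10, 10, 74)               # emulate a bit flip in one copy of r1
protected_add(regs, "r2", "r1", "r2")
print(regs["r2"])                        # 42: the flipped copy was outvoted
```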
Abstract:
Moral guardians, industry protectionists and concerned citizens (each with their own political agendas) had, for many years, lobbied various Ministers of Customs to regulate the offensive material which flooded the local market; the war effectively opened a protectionist climate in which the local publishing industry flourished (Johnson-Woods, Pulp Friction). Though westerns dominated at first, science fiction (i.e., futuristic stories, alternative histories, space exploration, time-travel past and future) soon followed. Within the decaying and yellowed pages, contemporary readers have a glimpse into the culture of the 1950s and, amusing though some of the predictions are, the cultural and social critic gains some understanding of the anxieties and desires of post-World War II Australia.
Abstract:
Rotating fluidised beds offer the potential for high-intensity combustion, large turndown and an extended range of fluidising velocity due to the imposition of an artificial gravitational field. Low thermal capacity should also allow rapid response to load changes. This thesis describes investigations of the validity of these potential virtues. Experiments, at atmospheric pressure, were conducted in flow visualisation rigs and a combustor designed to accommodate a distributor 200mm in diameter and 80mm in axial length. Ancillary experiments were conducted in a 6" diameter conventional fluidised bed. The investigations encompassed assessment of fluidisation and elutriation, coal feed requirements, start-up and steady-state combustion using premixed propane and air, transition from propane to coal combustion, and mechanical design. Assessments were made of an elutriation model and some effects of particle size on the combustion of premixed fuel gas and air. The findings were: a) More reliable start-up and control methods must be developed. Combustion of premixed propane and air led to severe mechanical and operating problems. Manual control of coal combustion was inadequate. b) Design criteria must encompass pressure loss, mechanical strength and high temperature resistance. The flow characteristics of the ancillaries and the distributor must be matched. c) Fluidisation of a range of particle sizes was investigated. New correlations for minimum fluidisation and fully supported velocities are proposed. Some effects of particle size and of the distance between the bed surface and the exhaust port on elutriation have been identified. A conic distributor did not aid initial bed distribution. Furthermore, airflow instability was encountered with this distributor shape. Future use of conic distributors is not recommended. Axial solids mixing was found to be poor. A coal feeder was developed which produced uniform fuel distribution throughout the bed. The report concludes that small scale inhibits development of mechanical design and exploration of performance. Future research requires larger combustors and automatic control.
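For reference, correlations for minimum fluidisation velocity of the kind the thesis proposes typically generalize the classical Wen-Yu form (shown here with the standard constants; the thesis's new fitted correlations are not reproduced in the abstract):

\[ \mathrm{Re}_{mf} = \frac{\rho_g\, u_{mf}\, d_p}{\mu} = \sqrt{C_1^{2} + C_2\,\mathrm{Ar}} - C_1, \qquad \mathrm{Ar} = \frac{\rho_g(\rho_p-\rho_g)\,a\,d_p^{3}}{\mu^{2}}, \]

with C1 = 33.7 and C2 = 0.0408. In a rotating bed the body-force acceleration a is the centripetal field rω² rather than gravity g, which is the "artificial gravitational field" the abstract refers to.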
Abstract:
Adaptability for distributed object-oriented enterprise frameworks is a critical mission for system evolution. Today, building adaptive services is a complex task due to the lack of adequate framework support in the distributed computing environment. In this thesis, we propose a Meta Level Component-Based Framework (MELC) which uses distributed computing design patterns as components to develop an adaptable pattern-oriented framework for distributed computing applications. We describe our novel approach of combining a meta architecture with a pattern-oriented framework, resulting in an adaptable framework which provides a mechanism to facilitate system evolution. The critical nature of distributed technologies requires frameworks to be adaptable. Our framework employs a meta architecture. It supports dynamic adaptation of feasible design decisions in the framework design space by specifying and coordinating meta-objects that represent various aspects within the distributed environment. The meta architecture in the MELC framework can provide the adaptability for system evolution. This approach resolves the problem of dynamic adaptation in the framework, which is encountered in most distributed applications. The concept of using a meta architecture to produce an adaptable pattern-oriented framework for distributed computing applications is new and has not previously been explored in research. As the framework is adaptable, the proposed architecture of the pattern-oriented framework has the ability to dynamically adapt new design patterns to address technical system issues in the domain of distributed computing, and these patterns can be woven together to shape the framework in future. We show how MELC can be used effectively to enable dynamic component integration and to separate system functionality from business functionality. We demonstrate how MELC provides an adaptable and dynamic run-time environment using our system configuration and management utility. We also highlight how MELC imposes significant adaptability in system evolution through a prototype E-Bookshop application that assembles its business functions with distributed computing components at the meta level in the MELC architecture. Our performance tests show that MELC does not entail prohibitive performance tradeoffs. The work to develop the MELC framework for distributed computing applications has emerged as a promising way to meet current and future challenges in the distributed environment.
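The meta-architecture idea, base-level application code calling through coordinated meta-objects that can be rebound at run time, can be sketched as follows (a hedged Python sketch with hypothetical names, not MELC's actual API):

```python
class MetaLevel:
    """Sketch of a meta level: pattern components are registered by role and
    can be swapped at run time without touching base-level application code."""
    def __init__(self):
        self._components = {}

    def bind(self, role, component):
        self._components[role] = component      # dynamic (re)configuration

    def call(self, role, *args):
        return self._components[role](*args)    # base level stays decoupled

# two interchangeable 'distributed computing pattern' components
def direct_invoker(msg):
    return f"direct: {msg}"

def queued_invoker(msg):
    return f"queued: {msg}"

meta = MetaLevel()
meta.bind("invoker", direct_invoker)
print(meta.call("invoker", "order #1"))
meta.bind("invoker", queued_invoker)            # adapt without changing callers
print(meta.call("invoker", "order #2"))
```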
Abstract:
Non-linear programming algorithms for the minimum weight design of structural frames are presented in this thesis. The first, which is applied to rigidly jointed and pin-jointed plane frames subject to deflexion constraints, consists of a search in a feasible design space. Successive trial designs are developed so that the feasibility and the optimality of the designs are improved simultaneously. It is found that this method is restricted to the design of structures with few unknown variables. The second non-linear programming algorithm is presented in a general form. This consists of two types of search, one improving feasibility and the other optimality. The method speeds up the 'feasible direction' approach by obtaining a constant-weight direction vector that is influenced by the dominating constraints. For pin-jointed plane and space frames this method is used to obtain a 'minimum weight' design which satisfies restrictions on stresses and deflexions. The matrix force method enables the design requirements to be expressed in a general form, and the design problem is automatically formulated within the computer. Examples are given to explain the method, and the design criteria are extended to include member buckling. Fundamental theorems are proposed and proved to confirm that structures are inter-related. These theorems are applicable to linear elastic structures and facilitate the prediction of the behaviour of one structure from the results of analysing another, more general, related structure. It becomes possible to evaluate the significance of each member in the behaviour of a structure, and the problem of minimum weight design is extended to include shape. A method is proposed to design structures of optimum shape with stress and deflexion limitations. Finally a detailed investigation is carried out into the design of structures to study the factors that influence their shape.
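The underlying problem both algorithms search can be stated compactly (the notation below is generic, not the thesis's):

\[ \min_{A_1,\dots,A_n} W = \sum_{i=1}^{n} \rho_i\, A_i\, L_i \quad \text{subject to} \quad |\sigma_i| \le \sigma_{\mathrm{allow}}, \qquad |\delta_j| \le \delta_{\mathrm{allow}}, \]

where the A_i are member cross-sectional areas and the L_i member lengths; the second algorithm alternates steps that restore feasibility of the stress and deflexion constraints with constant-weight steps that reduce W.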
Abstract:
Microfluidics has recently emerged as a new method of manufacturing liposomes, which allows for reproducible mixing in milliseconds on the nanoliter scale. Here we investigate microfluidics-based manufacturing of liposomes. The aim of these studies was to assess the parameters of a microfluidic process by varying the total flow rate (TFR) and the flow rate ratio (FRR) of the solvent and aqueous phases. Design of experiments and multivariate data analysis were used for increased process understanding and for the development of predictive and correlative models. A high FRR led to the bottom-up synthesis of liposomes, with a strong correlation with vesicle size, demonstrating the ability to control liposome size in-process; the resulting liposome size correlated with the FRR in the microfluidics process, with liposomes of 50 nm being reproducibly manufactured. Furthermore, we demonstrate the potential of high-throughput manufacturing of liposomes using microfluidics with a four-fold increase in the volumetric flow rate while maintaining liposome characteristics. The efficacy of these liposomes was demonstrated in transfection studies and was captured using predictive modelling. Mathematical modelling identified FRR as the key variable in the microfluidic process, with the highest impact on liposome size, polydispersity and transfection efficiency. This study demonstrates microfluidics as a robust and high-throughput method for the scalable and highly reproducible manufacture of size-controlled liposomes. Furthermore, the application of statistically based process control increases understanding and allows for the generation of a design space for controlled particle characteristics.
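The correlative-modelling step can be sketched as a simple least-squares fit of vesicle size against FRR (the numbers below are synthetic and illustrative only, not the study's data):

```python
import numpy as np

# illustrative data only (not the study's measurements):
frr = np.array([1, 2, 3, 4, 5])                     # aqueous:solvent flow rate ratio
size = np.array([110.0, 85.0, 68.0, 57.0, 50.0])    # vesicle diameter, nm

# least-squares fit of size against 1/FRR (size falls as FRR rises)
X = np.column_stack([np.ones_like(frr), 1.0 / frr])
coef, *_ = np.linalg.lstsq(X, size, rcond=None)
b0, b1 = coef
print(f"size = {b0:.1f} + {b1:.1f}/FRR nm")
print("predicted size at FRR=3:", b0 + b1 / 3)
```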
Abstract:
Four-bar mechanisms are basic components of many important mechanical devices. The kinematic synthesis of four-bar mechanisms is a difficult design problem. A novel method that combines genetic programming and decision tree learning is presented. We give a structural description for the class of mechanisms that produce desired coupler curves. Constructive induction is used to find and characterize feasible regions of the design space. Decision trees constitute the learning engine, and the new features are created by genetic programming.
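For context, the coupler curve such a synthesis method targets is generated by the four-bar position equations; a hedged Python sketch (link lengths hypothetical) samples one branch:

```python
import math

def coupler_point(r1, r2, r3, r4, rp, delta, th2):
    """Position of a coupler point on a four-bar linkage (one branch).
    Ground pivots at O2=(0,0) and O4=(r1,0); crank angle th2 in radians."""
    ax, ay = r2 * math.cos(th2), r2 * math.sin(th2)
    dx, dy = r1 - ax, -ay                      # vector from crank pin A to O4
    d = math.hypot(dx, dy)
    if not abs(r3 - r4) < d < r3 + r4:
        return None                            # linkage cannot close here
    a = (r3**2 - r4**2 + d**2) / (2 * d)       # circle-circle intersection
    h = math.sqrt(r3**2 - a**2)
    mx, my = ax + a * dx / d, ay + a * dy / d
    bx, by = mx - h * dy / d, my + h * dx / d  # rocker pin B (one branch)
    th3 = math.atan2(by - ay, bx - ax)         # coupler angle
    return (ax + rp * math.cos(th3 + delta), ay + rp * math.sin(th3 + delta))

curve = [coupler_point(4.0, 1.0, 3.0, 3.0, 1.5, 0.5, math.radians(t))
         for t in range(0, 360, 30)]
print([p for p in curve if p])                 # sampled coupler curve
```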
Abstract:
This thesis explores aesthetization in general and fashion in particular in digital technology design and how we can design digital technology to account for the extended influences of fashion. The thesis applies a combination of methods to explore the new design space at the intersection of fashion and technology. First, it contributes to theoretical understandings of aesthetization and fashion institutionalization that influence digital technology design. We show that there is an unstable aesthetization in mobile design and the increased aesthetization is closely related to the fashion industry. Fashion emerged through shared institutional activities, which are usually in the form of action nets in the design of digital devices. “Tech Fashion” is proposed to interpret such dynamic action nets of institutional arrangements that make digital technology fashionable and desirable. Second, through associative design research, we have designed and developed two prototypes that account for institutionalized fashion values, such as the concept “outfit-centric accessory.” We call for a more extensive collaboration between fashion design and interaction design.
Abstract:
Current space exploration has transpired through the use of chemical rockets, and they have served us well, but they have their limitations. Exploration of the outer solar system, Jupiter and beyond, will most likely require a new generation of propulsion system. One potential technology class to provide spacecraft propulsion and power systems involves thermonuclear fusion plasma systems. In this class it is well accepted that d-3He fusion is the most promising of the fuel candidates for spacecraft applications, as the 14.7 MeV protons carry up to 80% of the total fusion power while the α particles have energies less than 4 MeV. The other minor fusion products from secondary d-d reactions, consisting of 3He, n, p, and 3H, also have energies less than 4 MeV. Furthermore there are two main fusion subsets, namely Magnetic Confinement Fusion devices and Inertial Electrostatic Confinement (IEC) Fusion devices. Magnetic Confinement Fusion devices are characterized by complex geometries and prohibitive structural mass, compromising spacecraft use at this stage of exploration. While generating energy from a lightweight and reliable fusion source is important, another critical issue is harnessing this energy into usable power and/or propulsion. IEC fusion is a method of fusion plasma confinement that uses a series of biased electrodes that accelerate a uniform spherical beam of ions into a hollow cathode, typically comprised of a gridded structure with high transparency. The inertia of the imploding ion beam compresses the ions at the center of the cathode, increasing the density to the point where fusion occurs. Since the velocity distributions of fusion particles in an IEC are essentially isotropic and carry no net momentum, a means of redirecting the velocity of the particles is necessary to efficiently extract energy and provide power or create thrust. There are classes of advanced-fuel fusion reactions where direct energy conversion based on electrostatically biased collector plates is impossible due to potential limits, material structure limitations, and IEC geometry. Thermal conversion systems are also inefficient for this application. A method of converting the isotropic IEC output into a collimated flow of fusion products solves these issues and allows direct energy conversion. An efficient traveling-wave direct energy converter has been proposed and studied by Momota and Shu, and further evaluated with numerical simulations by Ishikawa and others. One of the conventional methods of collimating charged particles is to surround the particle source with an applied magnetic channel. Charged particles are trapped and move along the lines of flux. By introducing gradually expanding lines of force along the magnetic channel, the velocity component perpendicular to the lines of force is transferred to the parallel one. However, efficient operation of the IEC requires a null magnetic field at the core of the device. In order to achieve this, Momota and Miley have proposed a pair of magnetic coils anti-parallel to the magnetic channel, creating a null hexapole magnetic field region necessary for the IEC fusion core. Numerically, collimation of 300 eV electrons without a stabilization coil was demonstrated to approach 95% at a profile corresponding to Vsolenoid = 20.0V, Ifloating = 2.78A, Isolenoid = 4.05A, while collimation of electrons with a stabilization coil present was demonstrated to reach 69% at a profile corresponding to Vsolenoid = 7.0V, Istab = 1.1A, Ifloating = 1.1A, Isolenoid = 1.45A.
Experimentally, collimation of electrons with a stabilization coil present was demonstrated to be 35% at 100 eV, reaching a peak of 39.6% at 50 eV with a profile corresponding to Vsolenoid = 7.0V, Istab = 1.1A, Ifloating = 1.1A, Isolenoid = 1.45A, and collimation of 300 eV electrons without a stabilization coil was demonstrated to approach 49% at a profile corresponding to Vsolenoid = 20.0V, Ifloating = 2.78A, Isolenoid = 4.05A. 6.4% of the 300 eV electrons' initial velocity is directed to the collector plates. The remaining electrons are trapped by the collimator's magnetic field. These particles oscillate around the null field region several hundred times and eventually escape to the collector plates. At a solenoid voltage profile of 7 Volts, 100 eV electrons are collimated with wall and perpendicular-component losses of 31%. Increasing the electron energy beyond 100 eV increases the wall losses by 25% at 300 eV. Ultimately it was determined that a field strength deriving from 9.5 MAT/m would be required to collimate the 14.7 MeV fusion protons from a d-3He fueled IEC fusion core. The concept of the proton collimator has been proven effective in transforming an isotropic source into a collimated flow of particles ripe for direct energy conversion.
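The perpendicular-to-parallel velocity transfer described above follows from the adiabatic invariance of the magnetic moment in an expanding channel (standard guiding-centre physics, not the authors' derivation):

\[ \mu = \frac{m v_\perp^{2}}{2B} = \mathrm{const} \;\Rightarrow\; v_\perp^{2}(z) = v_\perp^{2}(0)\,\frac{B(z)}{B(0)}, \qquad v_\parallel^{2}(z) = v^{2} - v_\perp^{2}(z), \]

so as the field B decreases along the channel, perpendicular energy is converted into parallel energy and the particle flow collimates.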
Abstract:
Developing Cyber-Physical Systems requires methods and tools to support simulation and verification of hybrid (both continuous and discrete) models. The Acumen modeling and simulation language is an open-source testbed for exploring the design space of what rigorous-but-practical next-generation tools can deliver to developers of Cyber-Physical Systems. Like verification tools, a design goal for Acumen is to provide rigorous results. Like simulation tools, it aims to be intuitive, practical, and scalable. However, it is far from evident whether these two goals can be achieved simultaneously. This paper explains the primary design goals for Acumen, the core challenges that must be addressed in order to achieve these goals, the “agile research method” taken by the project, the steps taken to realize these goals, the key lessons learned, and the emerging language design.
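For context, the class of hybrid models Acumen targets combines continuous dynamics with discrete events; the following Python sketch (generic, deliberately not Acumen syntax) simulates a bouncing ball, with continuous integration interrupted by a discrete impact reset:

```python
def simulate_bouncing_ball(x0=1.0, v0=0.0, g=9.81, e=0.8, dt=1e-4, t_end=3.0):
    """Continuous dynamics x' = v, v' = -g, plus a discrete event:
    on impact (x <= 0, v < 0) the velocity is reset to -e*v."""
    x, v, t, events = x0, v0, 0.0, []
    while t < t_end:
        x += v * dt                 # explicit Euler integration step
        v += -g * dt
        if x <= 0.0 and v < 0.0:    # guard: impact detected
            x, v = 0.0, -e * v      # reset map (discrete transition)
            events.append(round(t, 3))
        t += dt
    return events

print("impact times:", simulate_bouncing_ball())
```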
Abstract:
Torpor is a successful survival strategy displayed by several mammalian species to cope with harsh environmental conditions. A complex interplay of ambient, genetic and circadian stimuli acts centrally to induce a severe suppression of metabolic rate, usually followed by an apparently undefended reduction of body temperature. Some animals, such as marmots, are able to maintain this physiological state for months (hibernation), during which torpor bouts are periodically interrupted by short interbouts of normothermia (arousals). Interestingly, torpor adaptations have been shown to be associated with a large resistance towards stressors, such as radiation: indeed, if irradiated during torpor, hibernators can tolerate higher doses of radiation, showing an increased survival rate. New insights for radiotherapy and long-term space exploration could arise from the induction of torpor in non-hibernators, like humans. The present research project is centered on synthetic torpor (ST), a hypometabolic/hypothermic condition induced in a non-hibernator, the rat, through the pharmacological inhibition of the Raphe Pallidus, a key brainstem area controlling thermogenic effectors. By exploiting this procedure, this thesis aimed at: i) providing a multiorgan description of the functional cellular adaptations to ST; ii) exploring the possibility, and the underpinning molecular mechanisms, of enhanced radioresistance induced by ST. To achieve these aims, transcriptional and histological analyses were performed on multiple organs of synthetic-torpid rats and normothermic rats, either exposed or not exposed to a 3 Gy total-body dose of X-rays. The results showed that: i) similarly to natural torpor, ST induction leads to the activation of survival and stress resistance responses, which allow the organs to successfully adapt to the new homeostasis; ii) ST provides tissue protection against radiation damage, probably mainly through the cellular adaptations constitutively induced by ST, even though the triggering of specific responses when the animal is irradiated during hypothermia might play a role.
Abstract:
The pervasive availability of connected devices in any industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By totally or partially executing closer to the network edge, applications can have quicker reactions to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP, etc.). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. That risks undermining the principle of generality that underlies the cloud computing economy of scale by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of the possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable to enable the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
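The decoupling at the heart of this architecture can be sketched as follows (a hedged Python sketch with hypothetical names, not the thesis's middleware): applications program against an agnostic asynchronous send/recv API, while the middleware binds whichever backend is available, a commodity path here, or an accelerated RDMA/DPDK path in a real deployment.

```python
import asyncio
from abc import ABC, abstractmethod

class NaaasChannel(ABC):
    """Agnostic I/O API (hypothetical): applications see only send/recv,
    never the transport (sockets, RDMA, DPDK, XDP, ...) behind it."""
    @abstractmethod
    async def send(self, payload: bytes) -> None: ...
    @abstractmethod
    async def recv(self) -> bytes: ...

class LoopbackChannel(NaaasChannel):
    """Commodity fallback backend; an accelerated backend would register
    pinned memory for zero-copy transfers behind the same two calls."""
    def __init__(self):
        self._q = asyncio.Queue()
    async def send(self, payload):
        await self._q.put(payload)
    async def recv(self):
        return await self._q.get()

async def app(channel: NaaasChannel):
    await channel.send(b"sensor-reading")   # application code is backend-agnostic
    print(await channel.recv())

asyncio.run(app(LoopbackChannel()))
```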