906 results for Variable structure systems
Resumo:
A low-cost circuit was developed for stable and efficient maximum power point (MPP) tracking in autonomous photovoltaic-motor systems with variable-frequency drives (VFDs). The circuit is made of two resistors, two capacitors, and two Zener diodes. Its input is the photovoltaic (PV) array voltage, and its output feeds the proportional-integral-derivative (PID) controller usually integrated into the drive. The steady-state frequency-voltage oscillations induced by the circuit were treated in a simplified mathematical model, which was validated by extensively characterizing a PV-powered centrifugal pump. General procedures for circuit and controller tuning were recommended based on the model equations. The tracking circuit presented here is widely applicable to PV-motor systems with VFDs, offering an efficient open-access technology of unique simplicity. Copyright (C) 2010 John Wiley & Sons, Ltd.
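The abstract does not give the controller equations; as a generic point of reference, a minimal discrete PID loop of the kind typically embedded in a VFD can be sketched as follows. The first-order plant and all gain values are illustrative assumptions, not taken from the article:

```python
def pid_step(e, state, kp, ki, kd, dt):
    """One update of a discrete PID controller (positional form)."""
    integ, e_prev = state
    integ += e * dt                    # accumulate the integral of the error
    deriv = (e - e_prev) / dt          # finite-difference derivative
    u = kp * e + ki * integ + kd * deriv
    return u, (integ, e)

def simulate(setpoint=1.0, kp=2.0, ki=5.0, kd=0.0, dt=0.01, t_end=10.0):
    """Hypothetical first-order plant dy/dt = -y + u standing in for the drive loop."""
    y, state = 0.0, (0.0, 0.0)
    for _ in range(int(t_end / dt)):
        u, state = pid_step(setpoint - y, state, kp, ki, kd, dt)
        y += dt * (-y + u)             # Euler step of the plant
    return y
```

With an integral term present, the loop settles at the setpoint with zero steady-state error.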
Resumo:
Patterns of species interactions affect the dynamics of food webs. An important component of species interactions that is rarely considered with respect to food webs is the strengths of interactions, which may affect both structure and dynamics. In natural systems, these strengths are variable, and can be quantified as probability distributions. We examined how variation in strengths of interactions can be described hierarchically, and how this variation impacts the structure of species interactions in predator-prey networks, both of which are important components of ecological food webs. The stable isotope ratios of predator and prey species may be particularly useful for quantifying this variability, and we show how these data can be used to build probabilistic predator-prey networks. Moreover, the distribution of variation in strengths among interactions can be estimated from a limited number of observations. This distribution informs network structure, especially the key role of dietary specialization, which may be useful for predicting structural properties in systems that are difficult to observe. Finally, using three mammalian predator-prey networks (two African and one Canadian) quantified from stable isotope data, we show that exclusion of link-strength variability results in biased estimates of nestedness and modularity within food webs, whereas the inclusion of body size constraints only marginally increases the predictive accuracy of the isotope-based network. We find that modularity is the consequence of strong link-strengths in both African systems, while nestedness is not significantly present in any of the three predator-prey networks.
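As an illustration of treating link strengths as probability distributions, the following sketch draws one realization of a small predator-prey network from Beta-distributed link strengths and normalizes each predator's row into diet proportions. The species names and distribution parameters are invented for the example, not taken from the study:

```python
import random

# Hypothetical link-strength distributions: (alpha, beta) of a Beta law
# for each predator-prey pair (illustrative numbers only).
LINKS = {
    ("lion",  "zebra"):      (5, 2),
    ("lion",  "wildebeest"): (2, 5),
    ("hyena", "zebra"):      (1, 4),
    ("hyena", "wildebeest"): (4, 1),
}

def sample_network(links, rng=random):
    """Draw one network realization and normalize each predator's diet."""
    raw = {}
    for (pred, prey), (a, b) in links.items():
        raw.setdefault(pred, {})[prey] = rng.betavariate(a, b)
    diets = {}
    for pred, weights in raw.items():
        total = sum(weights.values())
        diets[pred] = {prey: w / total for prey, w in weights.items()}
    return diets
```

Repeating the draw many times yields an ensemble of networks over which structural metrics such as nestedness or modularity can be averaged, which is the spirit of the probabilistic approach described above.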
Resumo:
Universidad de Las Palmas de Gran Canaria. Faculty of Marine Sciences. Doctoral programme in Oceanography. Diploma of Advanced Studies.
Resumo:
My PhD project focused on the study of the pulsating variable stars in two ultra-faint dwarf spheroidal satellites of the Milky Way, namely Leo IV and Hercules, and in two fields of the Large Magellanic Cloud (namely, the Gaia South Ecliptic Pole calibration field and the 30 Doradus region) that were repeatedly observed in the Ks band by the VISTA Magellanic Cloud (VMC, PI M.R. Cioni) survey of the Magellanic System.
Resumo:
This PhD work aimed to design, develop, and characterize gelatin-based scaffolds for the repair of defects in the musculoskeletal system. Gelatin is a biopolymer widely used for pharmaceutical and medical applications thanks to its biodegradability and biocompatibility. It is obtained from collagen via thermal denaturation or chemical-physical degradation. Despite its high potential as a biomaterial, gelatin exhibits poor mechanical properties and low resistance in aqueous environments. Crosslinking treatments and enrichment with reinforcement materials are thus required for biomedical applications. In this work, gelatin-based scaffolds were prepared following three different strategies: films were prepared through the solvent casting method, the electrospinning technique was applied for the preparation of porous mats, and 3D porous scaffolds were prepared through freeze-drying. The results obtained on films highlighted the influence of pH, crosslinking, and reinforcement with montmorillonite (MMT) on the structure, stability, and mechanical properties of gelatin and MMT/gelatin composites. The information acquired on the effect of crosslinking under different conditions was used to optimize the preparation procedure of the electrospun and freeze-dried scaffolds. A successful method was developed to prepare gelatin nanofibrous scaffolds electrospun from acetic acid/water solution and stabilized with a non-toxic crosslinking agent, genipin, able to preserve their original morphology after exposure to water. Moreover, the co-electrospinning technique was used to prepare nanofibrous scaffolds with variable content of gelatin and polylactic acid. Preliminary in vitro tests indicated that the scaffolds are suitable for cartilage tissue engineering, and that their potential applications can be extended to cartilage-bone interface tissue engineering. Finally, 3D porous gelatin scaffolds, enriched with calcium phosphate, were prepared by the freeze-drying method.
The results indicated that the crystallinity of the inorganic phase influences porosity, interconnectivity, and mechanical properties. Preliminary in vitro tests showed a good osteoblast response in terms of proliferation and adhesion on all the scaffolds.
Resumo:
A thorough investigation was made of the structure-property relation of well-defined statistical, gradient, and block copolymers of various compositions. Among the copolymers studied were those synthesized from isobornyl acrylate (IBA) and n-butyl acrylate (nBA) monomer units. The copolymers exhibited several unique properties that make them suitable materials for a range of applications. The thermomechanical properties of these new materials were compared to those of acrylate homopolymers. By the proper choice of the IBA/nBA monomer ratio, it was possible to tune the glass transition temperature of the statistical P(IBA-co-nBA) copolymers. The measured Tg’s of the copolymers with different IBA/nBA monomer ratios followed a trend that fitted well with the Fox equation prediction. While statistical copolymers showed a single glass transition by DSC (Tg between -50 and 90 °C depending on composition), block copolymers showed two Tg’s, and the gradient copolymer showed a single, but very broad, glass transition. PMBL-PBA-PMBL triblock copolymers of different composition ratios were also studied and revealed a microphase-separated morphology of mostly cylindrical PMBL domains hexagonally arranged in the PBA matrix. DMA studies confirmed the phase-separated morphology of the copolymers. Tensile studies showed that the linear PMBL-PBA-PMBL triblock copolymers had a relatively low elongation at break, which was increased by replacing the PMBL hard blocks with the less brittle random PMBL-r-PMMA blocks. The 10- and 20-arm PBA-PMBL copolymers studied revealed even more unique properties. SAXS results showed a mixture of cylindrical PMBL domains hexagonally arranged in the PBA matrix, as well as lamellar domains. Despite PMBL’s brittleness, the triblock and multi-arm PBA-PMBL copolymers could become suitable materials for high-temperature applications owing to PMBL’s high glass transition temperature and high thermal stability.
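The Fox equation mentioned above predicts the copolymer Tg from the component weight fractions and the homopolymer Tg's: 1/Tg = w1/Tg1 + w2/Tg2, with temperatures in kelvin. A minimal sketch, using approximate literature Tg values for the two homopolymers (assumptions for illustration, not values reported in the thesis):

```python
def fox_tg(w1, tg1_k, tg2_k):
    """Fox equation: 1/Tg = w1/Tg1 + w2/Tg2 (temperatures in kelvin)."""
    w2 = 1.0 - w1
    return 1.0 / (w1 / tg1_k + w2 / tg2_k)

# Approximate literature homopolymer values (assumed, not from the thesis):
TG_PIBA = 367.0  # ~94 C for poly(isobornyl acrylate)
TG_PNBA = 219.0  # ~-54 C for poly(n-butyl acrylate)

# Predicted Tg of a 50/50 (by weight) statistical copolymer, in Celsius:
tg_5050_c = fox_tg(0.5, TG_PIBA, TG_PNBA) - 273.15
```

With these inputs the 50/50 prediction lands near room temperature, consistent with the tunable range of roughly -50 to 90 °C quoted above for the statistical copolymers.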
The structure-property relation of multi-arm star PBA-PMMA block copolymers was also investigated. Small-angle X-ray scattering revealed a phase-separated morphology of cylindrical PMMA domains hexagonally arranged in the PBA matrix. DMA studies found that these materials possess typical elastomeric behavior over a broad range of service temperatures up to at least 250 °C. The ultimate tensile strength and the elastic modulus of the 10- and 20-arm star PBA-PMMA block copolymers are significantly higher than those of their 3-arm or linear ABA-type counterparts with similar composition, indicating a strong effect of the number of arms on the tensile properties. Siloxane-based copolymers were also studied, and one of the main objectives here was to examine the possibility of synthesizing trifluoropropyl-containing siloxane copolymers with a gradient distribution of trifluoropropyl groups along the chain. DMA results for the PDMS-PMTFPS siloxane copolymers synthesized via simultaneous copolymerization showed that, due to the large difference in reactivity between 2,4,6-tris(3,3,3-trifluoropropyl)-2,4,6-trimethylcyclotrisiloxane (F) and hexamethylcyclotrisiloxane (D), a copolymer of almost block structure containing only a narrow intermediate fragment with a gradient distribution of the component units was obtained. A more dispersed distribution of the trifluoropropyl groups was obtained by the semi-batch copolymerization process, as the DMA results revealed more “pure gradient type” features for the siloxane copolymers synthesized by adding F at a controlled rate to the polymerization of the less reactive D. As with the trifluoropropyl-containing siloxane copolymers, vinyl-containing polysiloxanes may be converted to a variety of useful polysiloxane materials by chemical modification.
However, much like the trifluoropropyl-containing siloxane copolymers, as a result of the large difference in reactivity between the component units 2,4,6-trivinyl-2,4,6-trimethylcyclotrisiloxane (V) and hexamethylcyclotrisiloxane (D), the thermal and mechanical properties of the PDMS-PMVS copolymers obtained by simultaneous copolymerization were similar to those of block copolymers. Only the copolymers obtained by the semi-batch method showed properties typical of gradient copolymers.
Resumo:
The ability of block copolymers to spontaneously self-assemble into a variety of ordered nano-structures not only makes them a scientifically interesting system for the investigation of order-disorder phase transitions, but also offers a wide range of nano-technological applications. The architecture of a diblock is the simplest among block copolymer systems, hence it is often used as a model system in both experiment and theory. We introduce a new soft-tetramer model for efficient computer simulations of diblock copolymer melts. The instantaneous non-spherical shape of polymer chains in the molten state is incorporated by modeling each of the two blocks as two soft spheres. The interactions between the spheres are modeled in such a way that the diblock melt tends to microphase-separate with decreasing temperature. Using Monte Carlo simulations, we determine the equilibrium structures at variable values of the two relevant control parameters, the diblock composition and the incompatibility of unlike components. The simplicity of the model allows us to scan the control parameter space with a completeness that has not been reached in previous molecular simulations. The resulting phase diagram shows clear similarities with the phase diagram found in experiments. Moreover, we show that structural details of block copolymer chains can be reproduced by our simple model. We develop a novel method for the identification of the observed diblock copolymer mesophases that formalizes the usual approach of direct visual observation, using the characteristic geometry of the structures.
A cluster analysis algorithm is used to determine clusters of each component of the diblock, and the number and shape of the clusters can be used to determine the mesophase. We also employ methods from integral geometry for the identification of mesophases and compare their usefulness to the cluster analysis approach. To probe the properties of our model in confinement, we perform molecular dynamics simulations of atomistic polyethylene melts confined between graphite surfaces. The results from these simulations are used as input for an iterative coarse-graining procedure that yields a surface interaction potential for the soft-tetramer model. Using the interaction potential derived in this way, we perform an initial study of the behavior of the soft-tetramer model in confinement. Comparing with experimental studies, we find that our model can reflect basic features of confined diblock copolymer melts.
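A minimal sketch of the cluster-counting idea: group particles of one component by single linkage (any pair closer than a cutoff shares a cluster) and use the cluster count as a fingerprint of the mesophase, e.g. many compact clusters for spheres or cylinders versus a single percolating cluster for a lamella. The coordinates and cutoff below are illustrative, not from the simulations:

```python
from itertools import combinations

def count_clusters(points, cutoff):
    """Single-linkage clustering via union-find: points closer than
    `cutoff` belong to the same cluster; returns the number of clusters."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in combinations(range(n), 2):
        if sum((a - b) ** 2 for a, b in zip(points[i], points[j])) < cutoff ** 2:
            parent[find(i)] = find(j)      # merge the two clusters
    return len({find(i) for i in range(n)})
```

For example, two well-separated blobs of minority-block spheres yield a cluster count of two, a signature compatible with a micellar or cylindrical arrangement rather than a lamellar one.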
Resumo:
The presented thesis revolves around the study of thermally responsive PNIPAAm-based hydrogels in water-based environments, as studied by Fluorescence Correlation Spectroscopy (FCS). The goal of the project was the engineering of PNIPAAm gels into biosensors. Specifically, a range of such gels was investigated with respect to both their dynamics and structure at the nanometer scale, and their performance in retaining bound bodies upon thermal collapse (which PNIPAAm undergoes upon heating above 32 °C). FCS's requirements as a technique match the limitations imposed by the system, namely the need to intimately probe a system in a solvent that is also fragile and easy to alter. FCS both requires a fluid environment to work and is based on the observation of the diffusion of fluorescent species at nanomolar concentrations. FCS was therefore applied to probe the hydrogels on the nanometer scale with minimal invasiveness. Variables addressed in the project included the crosslinking degree; structural changes during thermal collapse; behavior in different buffers; the possibility of decreasing the degree of inhomogeneity; the behavior of differently sized probes; and the effectiveness of antibody functionalization upon thermal collapse. The main results included the heightening of structural inhomogeneities during thermal collapse and under different buffer conditions; the use of annealing to decrease the degree of inhomogeneity; the use of differently sized probes to address different length scales of the gel; and successful functionalization before and after collapse. The thesis also addresses two side projects, also carried out via FCS. One, on diffusion in inverse opals, produced a predictive simulation model for the diffusion of bodies in confined systems as a function of the bodies' size relative to the characteristic sizes of the system.
The other was the observation of the interaction of oppositely charged bodies in aqueous solution, resulting in a phenomenological theory and an evaluation method for both the average residence time of the bodies together and their attachment likelihood.
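The standard FCS autocorrelation model for free 3D diffusion through a Gaussian observation volume, which underlies measurements like these, can be written down compactly. The parameter values used here (molecule count, diffusion time, beam waist, aspect ratio) are illustrative assumptions:

```python
def fcs_autocorrelation(tau, n_mol, tau_d, s=5.0):
    """Standard 3D-diffusion FCS model:
    G(tau) = (1/N) * (1 + tau/tau_d)^-1 * (1 + tau/(s^2 tau_d))^-1/2,
    where s is the axial/lateral aspect ratio of the focal volume."""
    return (1.0 / n_mol) / ((1.0 + tau / tau_d) *
                            (1.0 + tau / (s * s * tau_d)) ** 0.5)

def diffusion_coefficient(tau_d, w_xy):
    """D from the fitted diffusion time and lateral beam waist: tau_d = w_xy^2 / 4D."""
    return w_xy ** 2 / (4.0 * tau_d)
```

Fitting measured correlation curves with this model yields the diffusion time of a probe, from which its diffusion coefficient follows; slowed or multi-component diffusion of differently sized probes is what reports on the gel's mesh size and inhomogeneity.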
Resumo:
Due to its practical importance and inherent complexity, the optimisation of networks for distributing drinking water has been the subject of extensive study for the past 30 years. The optimisation typically involves sizing the pipes of the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of the WDN. In this thesis, the author analysed two different WDNs (the Anytown and Cabrera city networks), solving a multi-objective optimisation problem (MOOP). In both cases, the two main objectives were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, GANetXL, a decision support system generator for multi-objective optimisation developed by the Centre for Water Systems at the University of Exeter, was used. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation multi-objective optimisation algorithm, which provided the Pareto front of each configuration. The first experiment carried out was on the Anytown network, a large network whose pumping station comprises four fixed-speed parallel pumps. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs) by installing inverters capable of varying their speed during the day. In this way, substantial energy and cost savings were achieved, along with a minimisation of the number of pump switches. The results of the research are thoroughly illustrated in chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the Cabrera network, a smaller WDN with a single fixed-speed (FS) pump.
The optimisation problem was the same: the minimisation of energy consumption and, in parallel, the minimisation of TNps. The same optimisation tool, GANetXL, was used. The main scope was to carry out several different experiments over a wide variety of configurations, using different pumps (this time keeping the FS mode), different tank levels, different pipe diameters, and different emitter coefficients. All these different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision support system generators. The researcher has to be ready to “roam” among these choices until a satisfactory result shows that a good optimisation point has been reached.
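Two of the quantities optimised above are easy to make concrete: the total number of pump switches of a binary daily schedule, and the Pareto front over (energy, switches) pairs that NSGA-II approximates. A minimal sketch follows; the schedules and cost values are invented examples, and the brute-force dominance check shown is not the NSGA-II algorithm itself:

```python
def pump_switches(schedule):
    """Count off->on transitions (TNps) in a binary daily pump schedule."""
    return sum(1 for a, b in zip(schedule, schedule[1:]) if a == 0 and b == 1)

def pareto_front(solutions):
    """Return non-dominated (energy_kwh, switches) pairs, minimising both.
    A solution is dominated if another is no worse in both objectives."""
    return [s for s in solutions
            if not any(o != s and o[0] <= s[0] and o[1] <= s[1]
                       for o in solutions)]
```

For example, a schedule with two separate pumping windows has TNps = 2, and a candidate that costs more energy *and* switches more than some other candidate never appears on the front.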
Resumo:
Understanding the origins of the mechanical properties of gel systems and their correlation with the microstructure is of great scientific and industrial interest. In general, colloidal gels can be classified into chemical and physical gels, according to the lifetime of the network bonds. The characteristic differences in gelation dynamics can be observed with rheological measurements. As a model system, a mixture of sodium silicate and low-concentration sulfuric acid was used. Nano-sized silica particles grow and aggregate into a system-spanning gel network. The influence of the finite solubility of silica at high pH on the gelation was studied with classical and piezo rheometers. The storage modulus of the gel grew logarithmically with time, with two distinct growth laws. A relaxation at low frequency was observed in the frequency-dependent measurements. I attribute these two behaviors to structural rearrangements due to the finite solubility of silica at high pH. The reaction equilibrium between formation and dissolution of bonds leads to a finite lifetime of the bonds and to behavior similar to a physical gel. The frequency dependence was more pronounced for lower water concentrations, higher temperatures, and shorter reaction times. With two relaxation models, I deduced characteristic relaxation times from the experimental data. Besides rheology, the evolution of silica gels at high pH on different length scales was studied by NMR and dynamic light scattering. The results revealed that the primary particles already existed in the sodium silicate and aggregated after the mixing of the reactants due to a chemical reaction. Throughout the aggregation process, the system was in its chemical reaction equilibrium. Applying large oscillatory shear strain to the gel allowed the gel modulus to be modified. The effect of shear and shear history on the rheological properties of the gel was investigated.
The storage modulus of the final gel increased with increasing strain. This behavior can be explained by (i) shear-induced aggregate compaction and (ii) a combination of breakage and formation of new bonds. In contrast with the physical gel-like behavior of the silica gel at high pH, typical chemical gel features were exhibited by other gels formed from various chemical reactions. The influence of chemical structure modification on the gelation was investigated with the piezo-rheometer. External stimuli can be applied to tune the mechanical properties of the gel systems.
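The two relaxation models used in the thesis are not specified in the abstract; as a generic illustration of how a characteristic relaxation time is extracted from rheological spectra, a single-mode Maxwell model already suffices: the loss modulus peaks at omega = 1/tau. The parameter values below are arbitrary:

```python
def maxwell_moduli(omega, g_modulus, tau):
    """Single-mode Maxwell model: storage (G') and loss (G'') moduli
    at angular frequency omega, for plateau modulus g_modulus and time tau."""
    wt = omega * tau
    g_storage = g_modulus * wt * wt / (1.0 + wt * wt)
    g_loss = g_modulus * wt / (1.0 + wt * wt)
    return g_storage, g_loss

def tau_from_loss_peak(omegas, g_loss_values):
    """Read the relaxation time off the loss-modulus peak (omega_peak = 1/tau)."""
    peak_omega = max(zip(g_loss_values, omegas))[1]
    return 1.0 / peak_omega
```

Scanning a log-spaced frequency grid and locating the G'' maximum recovers tau; a finite bond lifetime, as in the silica gel at high pH, shows up as exactly such a low-frequency relaxation.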
Resumo:
Antibodies which bind bioactive ligands can serve as a template for the generation of a second antibody which may react with the physiological receptor. This phenomenon of molecular mimicry by antibodies has been described in a variety of systems. In order to understand the chemical and molecular mechanisms involved in these interactions, monoclonal antibodies directed against two pharmacologically active alkaloids, morphine and nicotine, were carefully studied using experimental and theoretical molecular modeling techniques. The molecular characterization of these antibodies involved binding studies with ligand analogs and determination of the variable region amino acid sequence. A three-dimensional model of the anti-morphine binding site was constructed using computational and graphics display techniques. The antibody response in BALB/c mice to morphine appears relatively restricted, in that all of the antibodies examined in this study contained a $\lambda$ light chain, which is normally found in only 5% of mouse immunoglobulins. This study represents the first use of theoretical and experimental modeling techniques to describe the antigen binding site of a mouse Fv region containing a $\lambda$ light chain. The binding site model indicates that a charged glutamic acid residue and aromatic side chains are key features in ionic and hydrophobic interactions with the ligand morphine. A glutamic acid residue is found in the identical position in the anti-nicotine antibody and may play a role in binding nicotine.
Resumo:
We report numerical evidence of the effects of a periodic modulation in the delay time of a delayed dynamical system. By referring to a Mackey-Glass equation and by adding a modulation in the delay time, we describe how the solution of the system passes from chaotic to shadowed periodic states. We analyze this transition for both sinusoidal and sawtooth wave modulations, and we give, in the latter case, the relationship between the period of the shadowed orbit and the amplitude of the modulation. Future goals and open questions are highlighted.
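A minimal numerical sketch of the setup described above: Euler integration of the Mackey-Glass equation with a sinusoidally modulated delay. The parameter values are the textbook chaotic ones (beta = 0.2, gamma = 0.1, n = 10, tau0 = 17) plus an arbitrary modulation, not necessarily those used in the paper:

```python
import math

def mackey_glass_modulated(t_end=500.0, dt=0.05, beta=0.2, gamma=0.1, n=10,
                           tau0=17.0, amp=2.0, period=50.0, x0=1.2):
    """Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x
    with a sinusoidally modulated delay tau(t) = tau0 + amp*sin(2*pi*t/period)."""
    steps = int(t_end / dt)
    x = [x0]  # x[k] approximates x(k*dt); constant history x0 for t <= 0
    for k in range(steps):
        t = k * dt
        tau = tau0 + amp * math.sin(2.0 * math.pi * t / period)
        k_del = k - int(round(tau / dt))          # index of the delayed sample
        x_tau = x[k_del] if k_del >= 0 else x0
        x.append(x[k] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[k]))
    return x
```

With amp = 0 this reproduces the usual chaotic Mackey-Glass trajectory; switching the sinusoid for a sawtooth in `tau` gives the other modulation case studied in the paper.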
Resumo:
We propose a novel control scheme for bilateral teleoperation of n-degree-of-freedom (DOF) nonlinear robotic systems with time-varying communication delay. A major contribution of this work lies in the demonstration that the structure of a state convergence algorithm can also be applied to nth-order nonlinear teleoperation systems. By choosing a Lyapunov-Krasovskii functional, we show that the local-remote teleoperation system is asymptotically stable. The time delay of the communication channel is assumed to be unknown and randomly time-varying, but the upper bounds of the delay interval and of the delay derivative are assumed to be known.
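The paper's controller addresses n-DOF nonlinear dynamics with time-varying delay; as a drastically simplified illustration of the state-convergence idea only, the following sketch couples two scalar first-order "local" and "remote" states through a constant delay and lets them converge to a common value. This is not the proposed control law, and all parameters are invented:

```python
def teleop_consensus(k=1.0, delay=0.2, dt=0.01, t_end=30.0, xl0=1.0, xr0=-1.0):
    """Toy 1-DOF local/remote pair coupled through a constant delay d:
    x_l' = k*(x_r(t-d) - x_l),  x_r' = k*(x_l(t-d) - x_r)."""
    nd = int(delay / dt)                   # delay in integration steps
    xl, xr = [xl0], [xr0]
    for step in range(int(t_end / dt)):
        i = max(0, step - nd)              # index of the delayed sample
        xl.append(xl[-1] + dt * k * (xr[i] - xl[-1]))
        xr.append(xr[-1] + dt * k * (xl[i] - xr[-1]))
    return xl[-1], xr[-1]
```

For this symmetric pair, the difference of the two states decays despite the delay, so the local and remote states converge; the Lyapunov-Krasovskii machinery in the paper is what extends such a stability claim to nonlinear dynamics and unknown time-varying delays.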
Resumo:
This doctoral thesis falls within the field of reconfigurable embedded systems, wireless sensor networks for high-performance applications, and distributed computing. The document focuses on the study of processing alternatives for High-Performance Autonomous Distributed Systems (HPADS), as well as on their evolution towards high-resolution processing. The study was carried out both at the platform level and at the level of the processing architectures within the platform, with the goal of optimizing aspects as relevant as the energy efficiency, computing capacity, and fault tolerance of the system. HPADS are closed-loop systems, normally formed by distributed elements that may or may not be networked, with some capacity for adaptation, and with enough intelligence to carry out prognosis and/or self-assessment tasks. This class of systems usually forms part of more complex systems known as Cyber-Physical Systems (CPSs). CPSs cover an enormous spectrum of applications, ranging from medical applications and manufacturing to aerospace applications, among many others. For the design of this type of system, aspects such as dependability, the definition of models of computation, or the use of methodologies and/or tools that ease scalability and the management of complexity are fundamental. The first part of this doctoral thesis focuses on the study of those state-of-the-art platforms whose characteristics make them applicable in the field of CPSs, as well as on the proposal of a new high-performance platform design that better fits the new, more demanding requirements of emerging applications.
This first part includes the description, implementation, and validation of the proposed platform, together with conclusions about its usability and limitations. The main goals for the design of the proposed platform are the following: • To study the feasibility of using a RAM-based FPGA as the main processor of the platform, in terms of energy consumption and computing capacity. • To propose power-management techniques for each stage of the platform's duty profile. • To propose the inclusion of Dynamic Partial Reconfiguration (DPR) of the FPGA, so that certain parts of the system can be changed at run time without interrupting the remaining parts, and to evaluate its applicability in the case of HPADS. The new applications and scenarios faced by CPSs impose new requirements on the bandwidth needed for data processing, acquisition, and communication, along with a clear increase in the complexity of the algorithms employed. To meet these new requirements, platforms are migrating from traditional 8-bit uniprocessor systems to hybrid hardware-software systems that include several processors, or several processors plus programmable logic. Among these new architectures, FPGAs and System-on-Chip (SoC) devices that combine embedded processors and programmable logic provide solutions with very good results in terms of energy consumption, price, computing capacity, and flexibility. These results are even better when the applications have high computational requirements and when the working conditions are very likely to change at run time. The platform proposed in this doctoral thesis has been named HiReCookie.
Its architecture includes a RAM-based FPGA as the only processor, together with a design compatible with the wireless-sensor-network platform developed at the Centro de Electrónica Industrial of the Universidad Politécnica de Madrid (CEI-UPM), known as Cookies. This FPGA, a Spartan-6 LX150, was, at the start of this work, the best option in terms of power consumption and amount of integrated resources while also allowing dynamic and partial reconfiguration. It is important to note that although its power figures are the lowest in this device family, the instantaneous power drawn is still very high for systems that must operate distributed, autonomously, and, in most cases, on batteries. For this reason, energy-saving strategies must be included in the design to increase the usability and lifetime of the platform. The first strategy implemented consists of dividing the platform into separate power islands, so that only the strictly necessary elements remain powered while the rest can be completely switched off. In this way, different operating modes can be combined to greatly optimize the energy consumption. Powering the FPGA off to save energy during idle periods implies losing its configuration, since the configuration memory is volatile. To reduce the impact, in energy and time, of fully reconfiguring the platform after power-up, this work includes a technique for compressing the FPGA configuration file, which reduces the configuration time and hence the energy consumed.
Although several of the design requirements can be satisfied by the HiReCookie platform, parameters such as energy consumption, fault tolerance, and processing capacity still need further optimization. This is only possible by exploiting all the possibilities offered by the processing architecture inside the FPGA. The second part of this doctoral thesis is therefore centered on the design of a reconfigurable architecture named ARTICo3 (from the Spanish for Reconfigurable Architecture for the Intelligent Treatment of Computation, Reliability and Energy Consumption), which improves these parameters through the dynamic use of resources. ARTICo3 is a bus-based processing architecture for RAM-based FPGAs, prepared to support the dynamic management of the FPGA's internal resources at run time thanks to the inclusion of dynamic and partial reconfiguration. Thanks to this partial-reconfiguration capability, the levels of processing capacity, energy consumption, or fault tolerance can be adapted to the demands of the application, the environment, or the device's internal metrics, by adjusting the number of resources assigned to each task. This second part of the thesis details the design of the architecture, its implementation on the HiReCookie platform as well as on another FPGA family, and its validation through different tests and demonstrations. The main goals set for the architecture are the following: • To propose a methodology based on a multi-threaded approach, like those proposed by CUDA (Compute Unified Device Architecture) or OpenCL, in which different kernels, or execution units, run on a variable number of hardware accelerators without requiring changes to the application code.
• Proponer un diseño y proporcionar una arquitectura en la que las condiciones de trabajo cambien de forma dinámica dependiendo bien de parámetros externos o bien de parámetros que indiquen el estado de la plataforma. Estos cambios en el punto de trabajo de la arquitectura serán posibles gracias a la reconfiguración dinámica y parcial de aceleradores hardware en tiempo real. • Explotar las posibilidades de procesamiento concurrente, incluso en una arquitectura basada en bus, por medio de la optimización de las transacciones en ráfaga de datos hacia los aceleradores. •Aprovechar las ventajas ofrecidas por la aceleración lograda por módulos puramente hardware para conseguir una mejor eficiencia energética. • Ser capaces de cambiar los niveles de redundancia de hardware de forma dinámica según las necesidades del sistema en tiempo real y sin cambios para el código de aplicación. • Proponer una capa de abstracción entre el código de aplicación y el uso dinámico de los recursos de la FPGA. El diseño en FPGAs permite la utilización de módulos hardware específicamente creados para una aplicación concreta. De esta forma es posible obtener rendimientos mucho mayores que en el caso de las arquitecturas de propósito general. Además, algunas FPGAs permiten la reconfiguración dinámica y parcial de ciertas partes de su lógica en tiempo de ejecución, lo cual dota al diseño de una gran flexibilidad. Los fabricantes de FPGAs ofrecen arquitecturas predefinidas con la posibilidad de añadir bloques prediseñados y poder formar sistemas en chip de una forma más o menos directa. Sin embargo, la forma en la que estos módulos hardware están organizados dentro de la arquitectura interna ya sea estática o dinámicamente, o la forma en la que la información se intercambia entre ellos, influye enormemente en la capacidad de cómputo y eficiencia energética del sistema. 
Likewise, the ability to load hardware modules on demand makes it possible to add redundant blocks that increase the fault-tolerance level of the system. However, the complexity involved in designing dedicated hardware blocks must not be underestimated. Designing a hardware block does not only mean designing the block itself, but also its interfaces and, in some cases, the software drivers that manage it. Furthermore, as more blocks are added, the design space becomes more complex and its programming more difficult. Although most manufacturers offer predefined interfaces, commercial IPs (Intellectual Property cores) and templates to ease system design, exploiting the real possibilities of the system requires building architectures on top of the established ones to facilitate the use of parallelism and redundancy, and to provide an environment that supports dynamic resource management. To provide this kind of support, ARTICo3 works within a solution space defined by three fundamental axes: computation, energy consumption and dependability. Each working point is thus obtained as a trade-off among these three parameters. By means of dynamic and partial reconfiguration and an improved data transmission scheme between main memory and the accelerators, a variable number of resources can be devoted to each task over time, which makes the internal FPGA resources virtually unlimited. This time-varying number of resources per task can be used either to increase the level of parallelism, and hence of acceleration, or to increase redundancy, and therefore the fault-tolerance level.
At the same time, using an optimal number of resources for a task improves energy consumption, since either the instant power or the processing time can be reduced. In order to keep complexity within reasonable limits, it is important that changes made in the hardware remain completely transparent to the application code. To this end, different levels of transparency are included:
• Scalability transparency: the resources used by a given task can be modified without any change to the application code.
• Performance transparency: the system increases its performance when the workload grows, without changes to the application code.
• Replication transparency: multiple instances of the same module can be used either to add redundancy or to increase processing capacity, all without changes to the application code.
• Location transparency: the physical position of the hardware modules is arbitrary as far as their addressing from the application code is concerned.
• Failure transparency: if a hardware module fails, the application code directly obtains the correct result thanks to redundancy.
• Concurrency transparency: whether a task is performed by more or fewer blocks is transparent to the code that invokes it.
This PhD Thesis therefore contributes along two different lines: first, with the design of the HiReCookie platform and, second, with the design of the ARTICo3 architecture. The main contributions of this Thesis are summarised below.
• Architecture of the HiReCookie platform, including:
o Compatibility with the Cookies platform to increase its capabilities.
o Division of the architecture into different power islands.
o Implementation of the different low-power modes and node wake-up policies.
o Creation of a compressed FPGA configuration file to reduce the time and energy consumption of the initial configuration.
• Design of the ARTICo3 reconfigurable architecture for SRAM-based FPGAs:
o A model of computation and execution modes inspired by the CUDA model but based on reconfigurable hardware, with a variable number of thread blocks per execution unit.
o A structure that optimises burst data transactions by providing coalesced or parallel data to the different modules, including a majority-voting process and reduction operations.
o An abstraction layer between the host processor, which runs the application code, and the resources assigned to the different tasks.
o An architecture for the reconfigurable hardware modules that preserves scalability, adding an interface for new functionalities with a simple access to an internal RAM memory.
o Online characterisation of the tasks to provide information to a resource management module, improving operation in terms of energy and processing when switching among different fault-tolerance levels.
The document is divided into two main parts comprising a total of five chapters. First, after motivating the need for new platforms to cover new applications, the design of the HiReCookie platform is detailed: its parts, the possibilities to lower energy consumption, use cases of the platform, and validation tests of the design. The second part of the document describes the reconfigurable architecture, its implementation on several FPGAs, and validation tests in terms of computing performance and energy consumption, including how these aspects are affected by the chosen fault-tolerance level.
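The abstraction layer between the application code and the dynamically assigned accelerators, together with the transparency levels listed above, can be illustrated with a minimal software sketch. Everything below (names such as `KernelPool` and `execute`) is hypothetical and only models the behaviour: the application invokes a kernel once, while the runtime decides how many accelerator replicas serve the call.

```python
# Hypothetical sketch of ARTICo3-style transparency: the application code
# below never changes, regardless of how many HW accelerators are loaded.

class KernelPool:
    """Simulates a runtime that multiplexes a kernel over N accelerators."""

    def __init__(self, n_accelerators):
        self.n = n_accelerators  # changed at run time via DPR, not by the app

    def execute(self, kernel, data):
        # Scalability/concurrency transparency: the data set is split among
        # however many accelerators are currently configured.
        chunks = [data[i::self.n] for i in range(self.n)]
        partial = [[kernel(x) for x in chunk] for chunk in chunks]
        # Reassemble results in the original order.
        out = [None] * len(data)
        for i, chunk in enumerate(partial):
            out[i::self.n] = chunk
        return out

# Application code: identical whether 1 or 8 accelerators are loaded.
pool = KernelPool(n_accelerators=4)
result = pool.execute(lambda x: x * x, list(range(10)))
print(result)  # squares of 0..9, independent of the accelerator count
```

Changing `n_accelerators` alters only how the work is partitioned, not the call site, which is the essence of the scalability and replication transparency described above.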
The chapters of the document are the following: Chapter 1 analyses the main objectives, the motivation and the theoretical background needed to follow the rest of the document. Chapter 2 focuses on the design of the HiReCookie platform and its possibilities to reduce energy consumption. Chapter 3 describes the ARTICo3 reconfigurable architecture. Chapter 4 focuses on the validation tests of the architecture, using the HiReCookie platform for most of them; an application example is shown to analyse the operation of the architecture. Chapter 5 concludes this PhD Thesis with the conclusions obtained, the original contributions of the work, the results and future lines.
ABSTRACT
This PhD Thesis is framed within the field of dynamically reconfigurable embedded systems, advanced sensor networks and distributed computing. The document is centred on the study of processing solutions for high-performance autonomous distributed systems (HPADS) as well as their evolution towards High-Performance Computing (HPC) systems. The approach of the study is focused on both platform and processor levels to optimise critical aspects such as computing performance, energy efficiency and fault tolerance. HPADS are considered feedback systems, normally networked and/or distributed, with real-time adaptive and predictive functionality. These systems, as part of more complex systems known as Cyber-Physical Systems (CPSs), can be applied in a wide range of fields such as military, health care, manufacturing and aerospace. The design of HPADS requires high levels of dependability, the definition of suitable models of computation, and the use of methodologies and tools that support scalability and complexity management.
The first part of the document studies the different possibilities at platform design level in the state of the art, together with the description, development and validation tests of the platform proposed in this work to cope with the previously mentioned requirements. The main objectives targeted by this platform design are the following:
• Study the feasibility of using SRAM-based FPGAs as the main processor of the platform, in terms of energy consumption and performance, for highly demanding applications.
• Analyse and propose energy management techniques to reduce energy consumption in every stage of the working profile of the platform.
• Provide a solution with dynamic partial and wireless remote HW reconfiguration (DPR) to be able to change certain parts of the FPGA design at run time and on demand, without interrupting the rest of the system.
• Demonstrate the applicability of the platform in different test-bench applications.
In order to select the best approach for the platform design in terms of processing alternatives, a study of the evolution of state-of-the-art platforms is required to analyse how different architectures cope with new, more demanding applications and scenarios: security, mixed-critical systems for aerospace, multimedia applications, or military environments, among others. In all these scenarios, important changes in the required processing bandwidth or in the complexity of the algorithms used are driving the migration of platforms from single-microprocessor architectures to multiprocessing and heterogeneous solutions with higher instant power consumption but higher energy efficiency. Within these solutions, FPGAs and Systems on Chip that include FPGA fabric and dedicated hard processors offer a good trade-off among flexibility, processing performance, energy consumption and price when used in demanding applications where working conditions are very likely to vary over time and highly complex algorithms are required.
The platform architecture proposed in this PhD Thesis is called HiReCookie. It includes an SRAM-based FPGA as the main and only processing unit. The FPGA selected, the Xilinx Spartan-6 LX150, was at the beginning of this work the best choice in terms of amount of resources and power. Although its power levels are among the lowest for this kind of device, they can still be very high for distributed systems that normally run on batteries. For that reason, it is necessary to include different energy-saving mechanisms to increase the usability of the platform. In order to reduce energy consumption, the platform architecture is divided into different power islands, so that only those parts of the system that are strictly needed are powered on while the rest of the islands can be completely switched off. This allows different low-power modes to be combined to decrease energy consumption. In addition, one of the most important handicaps of SRAM-based FPGAs is that they are not alive at power-up. Therefore, recovering the system from a switched-off state requires reloading the FPGA configuration from a non-volatile memory device. For that reason, this PhD Thesis also proposes a methodology to compress the FPGA configuration file in order to reduce time and energy during the initial configuration process. Although some of the requirements for the design of HPADS are already covered by the design of the HiReCookie platform, it is necessary to continue improving energy efficiency, computing performance and fault tolerance. This is only possible by exploiting all the opportunities provided by the processing architectures configured inside the FPGA. Therefore, the second part of the Thesis details the design of the so-called ARTICo3 FPGA architecture to enhance the already intrinsic capabilities of the FPGA. ARTICo3 is a DPR-capable, bus-based virtual architecture for multiple HW acceleration in SRAM-based FPGAs.
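The abstract does not specify which compression scheme is used for the configuration file, so the following is only an illustrative sketch: FPGA bitstreams typically contain long runs of identical bytes (mostly zeros for unused fabric), so even simple run-length encoding (RLE) shortens them noticeably, which is the mechanism by which a smaller file reduces load time and configuration energy.

```python
# Illustrative sketch only: a toy run-length encoder for a configuration
# file. The actual HiReCookie compression methodology may differ.

def rle_compress(bitstream: bytes) -> bytes:
    """Encode as (run_length, byte) pairs, run_length capped at 255."""
    out = bytearray()
    i = 0
    while i < len(bitstream):
        run = 1
        while (i + run < len(bitstream)
               and bitstream[i + run] == bitstream[i]
               and run < 255):
            run += 1
        out += bytes([run, bitstream[i]])
        i += run
    return bytes(out)

def rle_decompress(blob: bytes) -> bytes:
    """Expand (run_length, byte) pairs back into the original stream."""
    out = bytearray()
    for k in range(0, len(blob), 2):
        out += bytes([blob[k + 1]]) * blob[k]
    return bytes(out)

raw = b"\x00" * 500 + b"\xAA\x55" + b"\xFF" * 300  # toy "bitstream"
packed = rle_compress(raw)
assert rle_decompress(packed) == raw
print(len(raw), "->", len(packed))  # 802 -> 12 bytes
```

With a faster transfer of the compressed file plus a cheap decompression step, both the initial configuration time and its energy cost drop, which is the effect the proposed methodology targets.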
The architecture provides support for dynamic resource management in real time. In this way, by using DPR, it is possible to change the levels of computing performance, energy consumption and fault tolerance on demand by increasing or decreasing the amount of resources used by the different tasks. Apart from the detailed design of the architecture and its implementation in different FPGA devices, different validation tests and comparisons are also shown. The main objectives targeted by this FPGA architecture are listed as follows:
• Provide a method based on a multithread approach, such as the CUDA (Compute Unified Device Architecture) or OpenCL kernel execution models, where kernels are executed on a variable number of HW accelerators without requiring application code changes.
• Provide an architecture able to dynamically adapt its working point, in terms of energy consumption, fault tolerance and computing performance, according to either self-measured or external parameters. Taking advantage of DPR capabilities, the architecture must support the dynamic use of resources in real time.
• Exploit concurrent processing capabilities in a standard bus-based system by optimising data transactions to and from the HW accelerators.
• Measure the benefit of HW acceleration as a technique to boost performance, improving processing times and saving energy by reducing active times in distributed embedded systems.
• Dynamically change the levels of HW redundancy to adapt fault tolerance in real time.
• Provide HW abstraction from SW application design.
FPGAs give the possibility of designing specific HW blocks for every required task to optimise performance, while some of them also support DPR. Apart from the possibilities provided by manufacturers, the way these HW modules are organised, addressed and multiplexed in area and time can improve computing performance and energy consumption.
At the same time, fault-tolerance and security techniques can also be dynamically included using DPR. However, the inherent complexity of designing new HW modules for every application is not negligible. It consists not only of the HW description, but also of the design of drivers and interfaces with the rest of the system, while the design space becomes wider and more complex to define and program. Even though the tools provided by most manufacturers already include predefined bus interfaces, commercial IPs and templates to ease application prototyping, it is necessary to improve these capabilities. By adding new architectures on top of them, it is possible to take advantage of parallelization and HW redundancy while providing a framework that eases the use of dynamic resource management. ARTICo3 works within a solution space where working points change at run time in a 3D space defined by three different axes: computation, consumption and fault tolerance. Therefore, every working point is found as a trade-off solution among these three axes. By means of DPR, different accelerators can be multiplexed so that the amount of available resources for any application is virtually unlimited. Taking advantage of DPR capabilities and a novel way of transmitting data to the reconfigurable HW accelerators, it is possible to dedicate a dynamically changing number of resources to a given task, either to boost computing speed or to add HW redundancy and a voting process that increases fault-tolerance levels. At the same time, using an optimised amount of resources for a given task reduces energy consumption by reducing instant power or computing time. In order to keep complexity levels under certain limits, it is important that HW changes are transparent to the application code.
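The voting process mentioned above is performed in hardware on the accelerator outputs; the following Python model is only a behavioural sketch of the idea, majority voting over redundant replicas (as in triple modular redundancy, TMR), where a faulty replica is outvoted word by word.

```python
# Behavioural sketch of majority voting over redundant kernel replicas.
# ARTICo3 does this in HW; this model only illustrates the masking effect.

from collections import Counter

def majority_vote(replica_outputs):
    """Return, word by word, the value produced by most replicas."""
    voted = []
    for words in zip(*replica_outputs):
        value, _count = Counter(words).most_common(1)[0]
        voted.append(value)
    return voted

# Three replicas of the same kernel; one suffers a fault in its second word.
ok     = [10, 20, 30]
ok2    = [10, 20, 30]
faulty = [10, 99, 30]
print(majority_vote([ok, ok2, faulty]))  # [10, 20, 30]: the fault is masked
```

This is exactly the redundancy/performance trade-off described in the text: the same three accelerator slots can either triple the throughput of one kernel or mask a single-replica fault, but not both at once.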
Therefore, different levels of transparency are targeted by the system:
• Scalability transparency: a task must be able to expand its resources without changing the system structure or application algorithms.
• Performance transparency: the system must reconfigure itself as the load changes.
• Replication transparency: multiple instances of the same task are loaded to increase reliability and performance.
• Location transparency: resources are accessed with no knowledge of their location by the application code.
• Failure transparency: a task must be completed despite a failure in some components.
• Concurrency transparency: different tasks work concurrently, in a way that is transparent to the application code.
As can be seen, the Thesis therefore contributes in two different ways: first, with the design of the HiReCookie platform and, second, with the design of the ARTICo3 architecture. The main contributions of this PhD Thesis are listed below:
• Architecture of the HiReCookie platform, including:
o Compatibility of the processing layer for high-performance applications with the Cookies Wireless Sensor Network platform, for fast prototyping and implementation.
o A division of the architecture into power islands.
o All the different low-power modes.
o The creation of the partial-initial bitstream together with the wake-up policies of the node.
• The design of the reconfigurable architecture for SRAM FPGAs, ARTICo3:
o A model of computation and execution modes inspired by CUDA but based on reconfigurable HW, with a dynamic number of thread blocks per kernel.
o A structure to optimise burst data transactions, providing coalesced or parallel data to the HW accelerators, a parallel voting process and a reduction operation.
o The abstraction provided to the host processor with respect to the operation of the kernels in terms of number of replicas, modes of operation, location in the reconfigurable area and addressing.
o The architecture of the modules representing the thread blocks, which makes the system scalable: functional units are added by only adding an access to a BRAM port.
o The online characterization of the kernels, providing information to a scheduler or resource manager in terms of energy consumption and processing time when changing among different fault-tolerance levels, as well as whether a kernel is expected to work in the memory-bound or compute-bound regions.
The document of the Thesis is divided into two main parts with a total of five chapters. First, after motivating the need for new platforms to cover new, more demanding applications, the design of the HiReCookie platform, its parts and several partial tests are detailed. The design of the platform alone does not cover all the needs of these applications. Therefore, the second part describes the architecture inside the FPGA, called ARTICo3, proposed in this PhD Thesis. The architecture and its implementation are tested in terms of energy consumption and computing performance, showing different possibilities to improve fault tolerance and how this impacts processing energy and time. Chapter 1 shows the main goals of this PhD Thesis and the technology background required to follow the rest of the document. Chapter 2 shows all the details about the design of the FPGA-based platform HiReCookie. Chapter 3 describes the ARTICo3 architecture. Chapter 4 is focused on the validation tests of the ARTICo3 architecture. An application for proof of concept is explained, where typical kernels related to image processing and encryption algorithms are used; further experimental analyses are performed using these kernels. Chapter 5 concludes the document with the conclusions, comments about the contributions of the work, and some possible future lines of work.
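The memory-bound vs compute-bound characterization mentioned in the contributions can be sketched with a roofline-style criterion. The function below is a hypothetical illustration, not the actual ARTICo3 metric: the thresholds, names and units are assumptions, but the underlying idea, comparing a kernel's arithmetic intensity against the machine balance point, is standard.

```python
# Hypothetical roofline-style classifier: a kernel whose operations-per-byte
# ratio falls below the device balance point is limited by data movement
# (memory-bound); above it, by the accelerators themselves (compute-bound).

def classify_kernel(ops, bytes_moved, peak_ops_per_s, peak_bytes_per_s):
    """Classify a kernel as 'memory-bound' or 'compute-bound'."""
    intensity = ops / bytes_moved              # operations per byte moved
    ridge = peak_ops_per_s / peak_bytes_per_s  # machine balance point
    return "compute-bound" if intensity >= ridge else "memory-bound"

# Toy example: 1e9 ops over 4e9 bytes on a device whose balance is 1 op/byte.
print(classify_kernel(1e9, 4e9, 1e11, 1e11))  # memory-bound
```

Such a classification would tell a resource manager whether loading more accelerator replicas for a kernel can pay off (compute-bound) or whether the shared bus is already the bottleneck (memory-bound).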
Resumo:
This article reviews recent studies of memory systems in humans and nonhuman primates. Three major conclusions from recent work are that (i) the capacity for nondeclarative (nonconscious) learning can now be studied in a broad array of tasks that assess classification learning, perceptuomotor skill learning, artificial grammar learning, and prototype abstraction; (ii) cortical areas adjacent to the hippocampal formation, including entorhinal, perirhinal, and parahippocampal cortices, are an essential part of the medial temporal lobe memory system that supports declarative (conscious) memory; and (iii) in humans, bilateral damage limited to the hippocampal formation is nevertheless sufficient to produce severe anterograde amnesia and temporally graded retrograde amnesia covering as much as 25 years.