977 results for structural elements
Abstract:
One of the most widely used methods in rapid prototyping is Fused Deposition Modeling (FDM), which provides components with reasonable strength in plastic materials such as ABS and has a low environmental impact. However, the FDM process exhibits poor surface finish, difficulty in producing complex and/or small geometries, and low consistency in "slim" elements of the parts. Furthermore, "cantilever" elements need large support structures. Overcoming these deficiencies requires a comprehensive review of three-dimensional part design to enhance the advantages and performance of FDM and reduce its constraints. As a key feature of this redesign, a novel method of construction by assembling parts with structural adhesive joints is proposed. These adhesive joints should be designed specifically to suit the plastic substrate and the FDM manufacturing technology. To achieve this, the most suitable structural adhesive must first be selected. Therefore, the present work analyzes five different families of adhesives (cyanoacrylate, polyurethane, epoxy, acrylic and silicone) and, by means of a multi-criteria decision analysis based on the Analytic Hierarchy Process (AHP), selects the structural adhesive that best combines mechanical benefits and adaptation to the FDM manufacturing process.
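As an illustration of how AHP turns pairwise judgements into a ranking (the comparison values below are invented for the sketch, not those of the study), the priority vector can be approximated by row geometric means and checked with Saaty's consistency ratio:

```python
from math import prod

def ahp_priorities(A):
    """Priority vector of a pairwise comparison matrix A (row geometric means)."""
    n = len(A)
    gm = [prod(row) ** (1.0 / n) for row in A]   # geometric mean of each row
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(A, w):
    """Saaty consistency ratio; random indices for matrix sizes 3..5."""
    n = len(A)
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n   # estimate of lambda_max
    ci = (lam - n) / (n - 1)
    return ci / {3: 0.58, 4: 0.90, 5: 1.12}[n]

# Illustrative judgements over the five families, in the order
# (cyanoacrylate, polyurethane, epoxy, acrylic, silicone)
A = [
    [1,   3,   1/2, 2,   5],
    [1/3, 1,   1/4, 1/2, 2],
    [2,   4,   1,   3,   6],
    [1/2, 2,   1/3, 1,   3],
    [1/5, 1/2, 1/6, 1/3, 1],
]
w = ahp_priorities(A)
cr = consistency_ratio(A, w)   # judgements usable if cr < 0.1
```

With these invented judgements, epoxy dominates every pairwise comparison, so it receives the highest priority; a CR below 0.1 indicates the judgement matrix is acceptably consistent.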
Abstract:
The present work aims to assess Laser-Induced Plasma Spectrometry (LIPS) as a tool for the characterization of photovoltaic materials. Despite being a well-established technique with applications in many scientific and industrial fields, LIPS is so far little known to the photovoltaic scientific community. The technique allows the rapid characterization of layered samples without sample preparation, in open atmosphere and in real time. In this paper, we assess the ability of LIPS to determine elements that are difficult to analyze with other widely used techniques, and to produce analytical information on very low-concentration elements. The results of the LIPS characterization of two different samples are presented: 1) a 90 nm, Al-doped ZnO layer deposited on a Si substrate by RF sputtering and 2) a Te-doped GaInP layer grown on GaAs by Metalorganic Vapor Phase Epitaxy. For both cases, the depth profile of the constituent and dopant elements is reported along with details of the experimental setup and the optimization of key parameters. Remarkably, the longest analysis time was ∼10 s, which, in conjunction with the other characteristics mentioned, makes LIPS an appealing technique for rapid screening or quality control, whether in the lab or on the production line.
Abstract:
This work deals with elements reinforced with both rebars and Recycled Steel Fibres (RSFs). Its main objective is to improve the cracking behaviour of elements subjected to pure bending or to combined bending and axial force, resulting in better serviceability conditions for structures demanding strict crack width control. Among these, a particularly interesting type are the so-called integral structures, i.e. long jointless structures (bridges and buildings) subjected to gravitational loads and to imposed deformations due to shrinkage, creep and temperature. RSFs are obtained from End-of-Life Tyres and, because the recycling process focuses on the rubber rather than on the steel, they come out crooked and with variable length. Although the effectiveness of RSFs had already been proven by previous research, the innovation of this work consists in proposing the combined action of conventional rebars and RSFs to improve cracking behaviour.
Therefore, the objective is to improve the sustainability of RC structures by using recycled materials on the one hand, and by improving their durability on the other. A state of the art on cracking in RC elements is first drawn. It is then expanded to elements reinforced with both rebars and fibres (R/FRC elements). Finally, the simplified method for the analysis of columns of long jointless structures already proposed by Pérez et al. is summarised, with special focus on the points that conflict with taking the action of fibres into account. Afterwards, a model to describe the sectional deformability and cracking of R/FRC elements is presented, also taking into account the effect of shrinkage (negative tension stiffening). The model is then used to extend the simplified method for columns. The novelty here is that a comprehensive methodology to analyse this type of element is provided. A preliminary experimental campaign consisting of small-scale beams subjected to pure bending is described, with the objective of validating the effectiveness and usability in concrete of two different types of RSF, and their behaviour compared with commercial steel fibres. With the results and lessons learnt from this campaign in mind, the main experimental campaign is then described, consisting of cracking tests of eight full-scale beams in pure bending (varying RSF content, Ø/s,eff and concrete cover) and twelve columns subjected to imposed displacement and axial force (varying RSF content, Ø/s,eff and squashing load ratio). The results obtained from the main campaign are presented and discussed, with particular focus on the improvement in cracking behaviour for the beams and columns, and in structural stiffness for the columns. They are then compared with the predictions of the proposed model.
The main parameters studied to describe the cracking and sectional behaviour in the beam tests are crack spacing, mean steel strain and crack width, while for the column tests these were the moment/curvature law, the stress in the rebars and the crack width at the column embedment. The comparison showed satisfactory agreement between experimental results and model predictions. Moreover, the improvement in cracking behaviour due to the addition of RSFs is pointed out for elements with low reinforcement ratios, elements with low squashing load ratios, and for crack width control of elements with large concrete covers, thus representing results with an immediate impact on engineering practice (slab design, tanks, integral structures, etc.). Applications of RSFs to actual structures are finally presented. Two cases of elements in pure bending are discussed, namely a simply supported beam and a water treatment tank. In both cases the addition of RSFs to concrete leads to improvements in cracking behaviour. Then, using the simplified model for the serviceability analysis of columns of jointless structures, the maximum achievable jointless length is obtained for typical bridge and building cases. In particular, it is shown how the limits of current engineering practice (especially in buildings) can be extended by considering the actual behaviour of RC columns. Finally, the same cases are modified to consider the use of RSFs, and the improvements both in maximum achievable length and in crack width for a given length and imposed strain at the deck/first floor are shown.
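For context, a crack-width calculation in the style of Eurocode 2 (EN 1992-1-1, §7.3.4), a standard formulation rather than the refined R/FRC model proposed in the thesis, combines the maximum crack spacing with the mean strain difference between steel and concrete; all numeric inputs below are illustrative:

```python
def crack_width_ec2(sigma_s, phi, c, rho_eff, f_ct_eff=2.9, Es=200_000.0,
                    alpha_e=6.7, k_t=0.4, k1=0.8, k2=0.5):
    """Characteristic crack width w_k [mm] for a bending element.
    sigma_s: rebar stress [MPa]; phi: bar diameter [mm];
    c: concrete cover [mm]; rho_eff: effective reinforcement ratio."""
    # Mean strain difference including tension stiffening,
    # bounded below by 0.6 * sigma_s / Es as in EC2
    eps = (sigma_s - k_t * f_ct_eff / rho_eff * (1 + alpha_e * rho_eff)) / Es
    eps = max(eps, 0.6 * sigma_s / Es)
    # Maximum crack spacing (bending: k2 = 0.5)
    s_r_max = 3.4 * c + 0.425 * k1 * k2 * phi / rho_eff
    return s_r_max * eps

# Illustrative beam: 16 mm bars at 280 MPa, 30 mm cover, rho_eff = 5 %
w_k = crack_width_ec2(sigma_s=280.0, phi=16.0, c=30.0, rho_eff=0.05)
```

The fibre contribution studied in the thesis effectively reduces both the crack spacing and the steel strain terms; this plain-RC sketch only shows the structure of the calculation.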
Abstract:
If reinforced concrete structures are to be safe under extreme impulsive loadings such as explosions, a broad understanding of the fracture mechanics of concrete under such events is needed. Most buildings and infrastructures likely to be subjected to terrorist attacks are borne by a reinforced concrete (RC) structure. Until a few years ago, the traditional approach to studying the ability of RC structures to withstand explosions consisted of a choice between hand calculations, affordable but inaccurate and unreliable, and full-scale experimental tests involving explosions, expensive and not available to many civil institutions. In this context, numerical simulation has arisen in recent years as the most effective method to analyze structures under such events. However, accurate numerical simulations require reliable constitutive models. Assuming that the failure of concrete elements subjected to blast is primarily governed by their tensile behavior, a constitutive model has been built that accounts only for failure under tension, while behaving as elastic without failure under compression. Failure under tension is based on the Cohesive Crack Model. Moreover, the constitutive model has been used to simulate the experimental structural response of reinforced concrete slabs subjected to blast. The results of the numerical simulations with this constitutive model show its ability to accurately represent the structural response of the RC elements under study. The simplicity of the model, which, as mentioned, does not account for failure under compression, confirms that the ability of reinforced concrete structures to withstand blast loads is primarily governed by tensile strength.
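The Cohesive Crack Model mentioned above relates the traction transmitted across a crack to its opening through a softening curve whose area equals the fracture energy. A minimal sketch with linear softening (the abstract does not state which softening law the model actually uses; all values are illustrative):

```python
def cohesive_traction(w, f_t=3.0, G_F=0.12):
    """Normal traction [MPa] across a cohesive crack of opening w [mm].
    Linear softening: sigma = f_t * (1 - w / w_c), zero beyond w_c.
    f_t: tensile strength [MPa]; G_F: fracture energy [N/mm]."""
    w_c = 2.0 * G_F / f_t   # critical opening where traction vanishes
    return max(0.0, f_t * (1.0 - w / w_c))

# Consistency check: the area under the softening curve equals G_F
dw = 1e-5
area = sum(cohesive_traction(i * dw) * dw for i in range(int(0.1 / dw)))
```

In a finite-element setting this traction-separation law is applied at the crack faces, while the bulk material stays linear elastic, which is exactly the split the abstract describes (failure in tension only).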
Abstract:
Numerical analysis is a suitable tool for the design of complex reinforced concrete structures under extreme impulsive loadings such as impacts or close-range explosions. Such events may be the result of terrorist attacks. Reinforced concrete is commonly used for buildings and infrastructures. For this reason, the ability to run accurate numerical simulations of concrete elements subjected to blast loading is needed. In this context, reliable constitutive models for concrete are of paramount importance. In this research, numerical simulations using two different constitutive models for concrete (the Continuous Surface Cap Model and the Brittle Damage Model) have been carried out with LS-DYNA, taking two experimental benchmark tests as reference. The results of the numerical simulations show that the two constitutive models differ in their ability to accurately represent the structural response of the reinforced concrete elements studied.
Abstract:
The latest technological and architectural trends have significantly expanded the use of a large variety of glass products in construction which, depending on their characteristics, make it possible to design and calculate structural glass elements under safe conditions. This paper presents the evaluation and analysis of the damping properties of rectangular laminated glass plates of 1.938 m x 0.876 m with different thicknesses depending on the number of PVB interlayers. By means of numerical simulation and experimental verification using modal analysis, the natural frequencies and damping of the glass plates were calculated, both under free boundary conditions and under the operational conditions of the impact test equipment used in the experimental programme, as specified in the European standard UNE-EN 12600:2003.
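For orientation, the order of magnitude of the natural frequencies such a modal analysis produces can be estimated with classical Kirchhoff plate theory. The sketch below assumes a simply supported monolithic glass plate of the stated in-plane dimensions with an assumed 8 mm thickness; this is a simplification of the laminated, PVB-interlayered plates and of the free boundary conditions considered in the paper:

```python
import math

def plate_frequency(m, n, a, b, h, E, nu, rho):
    """Natural frequency f_mn [Hz] of mode (m, n) of a thin, simply
    supported rectangular plate (classical Kirchhoff theory).
    a, b: plan dimensions [m]; h: thickness [m]; E: Young's modulus [Pa];
    nu: Poisson's ratio; rho: density [kg/m^3]."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity
    omega = math.pi**2 * ((m / a)**2 + (n / b)**2) * math.sqrt(D / (rho * h))
    return omega / (2.0 * math.pi)

# Plate from the paper (1.938 m x 0.876 m); monolithic 8 mm glass assumed
f11 = plate_frequency(1, 1, a=1.938, b=0.876, h=0.008,
                      E=70e9, nu=0.22, rho=2500.0)
```

The laminated plates of the paper behave differently because the viscoelastic PVB interlayer couples the glass plies and adds damping, which is precisely what the modal tests quantify.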
Abstract:
Embedded systems have traditionally been conceived as specific-purpose processing systems that perform a fixed task throughout their operational life. To meet strict cost, size and weight requirements, the design team must optimise their operation for very specific conditions. However, demands for greater versatility, more intelligent operation and, ultimately, higher processing capability began to clash with these limitations, aggravated by the uncertainty associated with the increasingly dynamic operating environments where such systems were progressively being deployed. This resulted in a growing need for systems to respond by themselves to events unexpected at design time, such as: changes in the characteristics of the input data and the system environment in general; changes in the computing platform itself, for example due to faults or fabrication defects; and changes in the functional specifications themselves, caused by dynamic and changing system objectives. As a consequence, system complexity increases, but in exchange an autonomous adaptation capability without human intervention is progressively enabled over the system's lifetime, allowing systems to make their own decisions at run time. These systems are known, in general, as self-adaptive systems and feature, among other capabilities, self-configuration, self-optimisation and self-repair. Typically, the soft part of a system has mostly been the only one used to provide a system with some adaptation capabilities. However, the performance/power ratio of software devices such as microprocessors is often not adequate for embedded systems.
In this scenario, the resulting increase in application complexity is being partially addressed by an increase in device complexity in the form of multi-/many-cores; unfortunately, this also increases power consumption. Moreover, design methodologies have not improved at the same pace, so the full computing capacity provided by the cores cannot be exploited. As a result, the computing demands imposed by new applications are not being adequately satisfied. The traditional solution to improve the performance/power ratio has been the switch to hardware specifications, mainly using ASICs. However, ASIC costs are highly prohibitive except in some mass-production cases, and the static nature of their structure complicates addressing the adaptation needs. Advances in fabrication technologies have turned the FPGA, once slow and small and used as glue logic in larger systems, into a powerful reconfigurable computing device with an enormous amount of computational logic resources and embedded hardware cores for signal processing and general-purpose computing. Its reconfiguration capabilities have made it possible to combine the flexibility of software with the performance of hardware processing, which has the potential to cause a paradigm shift in computer architecture, since hardware can no longer be considered static. The reason is that, as in the case of SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible. This means that a subset of the computational resources can be modified (reconfigured) at run time while the rest remain active.
Moreover, this reconfiguration process can be executed internally by the device itself. The technological advances in reconfigurable hardware devices are gathered under the field known as Reconfigurable Computing (RC). One of the most exotic and least conventional application fields enabled by reconfigurable computing is Evolvable Hardware (EHW), in which this thesis is framed. The main idea of the concept is to turn hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural biological species, which guides the direction of change. It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers. By analogy with the biological process of evolution, in evolvable hardware the subject of evolution is a population of circuits that tries to adapt to its environment through progressive fitness improvement generation after generation. Individuals become circuit configurations in the form of bitstreams characterised by reconfigurable circuit descriptions. By selecting those that behave best, i.e. that have a better fitness after being evaluated, and using them as parents of the next generation, the evolutionary algorithm creates a new offspring population using genetic operators such as mutation and recombination. As generations go by, the population as a whole is expected to approach the optimal solution to the problem of finding a suitable circuit configuration that satisfies the specifications.
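The generational loop described here (evaluate, select the fittest, recombine and mutate) can be illustrated with a toy example. Everything below is an illustrative assumption: the "circuit configuration" is abstracted to a bitstring and the fitness to similarity with a target configuration, whereas real EHW evaluates candidate bitstreams on the reconfigurable fabric itself.

```python
import random

def evolve(target, pop_size=20, mut_rate=0.02, generations=200, seed=1):
    """Generational EA: truncation selection, one-point crossover,
    bit-flip mutation; stops early once the target fitness is reached."""
    rng = random.Random(seed)
    n = len(target)

    def fitness(ind):
        return sum(a == b for a, b in zip(ind, target))

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:           # specification met
            break
        parents = pop[:pop_size // 2]      # truncation selection
        children = []
        while len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n)      # one-point recombination
            child = [bit ^ (rng.random() < mut_rate)   # bit-flip mutation
                     for bit in p1[:cut] + p2[cut:]]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve(target=[1, 0] * 16)
```

In actual evolvable hardware the fitness evaluation is the expensive step, since each candidate must be configured onto the device and exercised with test stimuli; the loop structure, however, is the same.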
The state of reconfiguration technology after Xilinx's XC6200 FPGA family was discontinued and replaced by the Virtex families at the end of the 1990s posed a major obstacle to progress in evolvable hardware; closed (not publicly documented) bitstream formats, dependence on vendor tools with limited DPR support, slow reconfiguration speed, and the fact that random bitstream modifications could endanger the integrity of the device are some of these reasons. However, a proposal in the early 2000s, the Virtual Reconfigurable Circuit (VRC), made it possible to keep research in the field alive while DPR technology continued to mature. In essence, a VRC on an FPGA is a virtual layer that acts as an application-specific reconfigurable circuit on top of the native FPGA fabric, reducing the complexity of the reconfiguration process and increasing its speed (compared with native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions, defining ad-hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each of which implements all the required functions, selectable through multiplexers just as in a microprocessor ALU. A large register acts as configuration memory, so VRC reconfiguration is very fast, since it only involves writing this register, which drives the select signals of the multiplexers. However, this virtual layer causes an area increase, due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay increase due to the multiplexers, which reduces the maximum operating frequency.
The nature of evolvable hardware, capable of optimising its own computational behaviour, makes it a good candidate for advancing research on self-adaptive systems. Combining a self-reconfigurable computing substrate that can be dynamically modified at run time with an embedded algorithm that provides a direction of change can help satisfy the autonomous adaptation requirements of FPGA-based embedded systems. The main proposal of this thesis is therefore aimed at contributing to the self-adaptation of the processing hardware of FPGA-based embedded systems by means of evolvable hardware. This has been addressed by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters. From this distinction, two lines of work are derived: on the one hand, parametric self-adaptation, and on the other, structural self-adaptation. The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for the online parametric adaptation of signal processing circuits. The application selected as proof of concept is the optimisation of the filter coefficients of Discrete Wavelet Transforms (DWT) for very specific types of images, oriented towards image compression. The goal required of evolution is therefore an adaptive compression that is more efficient than standard procedures. The main challenge lies in reducing the supercomputing resources required by the optimisation process proposed in previous works, so that it becomes suitable for execution on embedded systems.
Regarding structural self-adaptation, the objective of the thesis is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of their native reconfiguration capabilities. In this case, the proof of concept is the evolution of image processing tasks such as the filtering of unknown and changing noise types and edge detection. In general, the objective is the run-time evolution of image processing tasks unknown at design time (within a certain degree of complexity). Here, the aim of the proposal is to incorporate DPR into EHW to evolve the architecture of a systolic array adaptable through reconfiguration, whose capability for evolution had not been studied before. To achieve the two aforementioned objectives, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE). In the case of parametric adaptation, the proposed platform is characterised by: • a CE based on a DWT hardware processing core, adaptable through reconfigurable registers that hold the wavelet filter coefficients • an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process developed specifically for systems with limited processing resources • a new simplified mutation operator for the evolutionary algorithm used, which, together with a fast evaluation mechanism for candidate wavelet filters derived from the current literature, ensures the feasibility of the evolutionary search associated with wavelet adaptation.
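As a minimal sketch of the kind of parametric search space involved (purely illustrative, not the thesis implementation): a single-stage orthonormal two-tap analysis filter can be parameterised by one angle, where theta = pi/4 recovers the Haar transform, and a brute-force scan over theta maximising energy compaction stands in for the embedded evolutionary optimiser.

```python
import math

def analyse(x, theta):
    """One DWT level: rotate each sample pair by theta (orthonormal,
    so signal energy is preserved across the two bands)."""
    c, s = math.cos(theta), math.sin(theta)
    approx = [c * x[i] + s * x[i + 1] for i in range(0, len(x), 2)]
    detail = [s * x[i] - c * x[i + 1] for i in range(0, len(x), 2)]
    return approx, detail

def compaction(x, theta):
    """Fraction of signal energy captured by the approximation band;
    a simple proxy for compression performance."""
    a, d = analyse(x, theta)
    ea, ed = sum(v * v for v in a), sum(v * v for v in d)
    return ea / (ea + ed)

x = [math.sin(0.1 * i) + 2.0 for i in range(64)]   # smooth test signal
best_theta = max((k * math.pi / 64 for k in range(64)),
                 key=lambda t: compaction(x, t))
```

For a smooth signal, neighbouring samples are strongly correlated, so the scan lands near the Haar angle pi/4; the thesis evolves much richer filter parameterisations, on hardware, with actual compression quality as the fitness.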
In the case of structural adaptation, the proposed platform takes the form of: • a CE based on a reconfigurable two-dimensional systolic array template composed of reconfigurable processing nodes • an evolutionary algorithm as AE that searches for candidate array configurations using a set of processing functionalities for the nodes, available in a run-time accessible library • a hardware RE that exploits the native reconfiguration capability of FPGAs, making efficient use of the reconfigurable resources of the device to change the behaviour of the CE at run time • a library of reconfigurable processing elements characterised by position-independent partial bitstreams, used as the set of configurations available for the processing nodes of the array. The main contributions of this thesis can be summarised in the following list: • An FPGA-based evolvable platform for the parametric and structural self-adaptation of embedded systems, composed of a Computing Engine (CE), an evolutionary Adaptation Engine (AE) and a Reconfiguration Engine (RE). This platform has been developed and particularised for the cases of parametric and structural self-adaptation. • Regarding parametric self-adaptation, the main contributions are: – A register-adaptable computing engine that allows the parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core. – An adaptation engine based on an evolutionary algorithm developed specifically for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems.
– A run-time self-adaptive DWT IP core for embedded systems that allows the online optimisation of the transform's performance for image compression in specific deployment environments characterised by different types of input signal. – A software model and a hardware implementation of a tool for the automatic evolutionary construction of specific wavelet transforms. • Finally, regarding structural self-adaptation, the main contributions are: – A computing engine adaptable through the native reconfiguration of FPGAs, characterised by a two-dimensional systolic array template of reconfigurable processing nodes. Different computational tasks can be mapped onto the array using a library of simple reconfigurable processing elements. – The definition of a processing element library suitable for the autonomous run-time synthesis of different image processing tasks. – The efficient incorporation of Dynamic Partial Reconfiguration (DPR) into evolvable hardware systems, overcoming the main drawbacks of previous proposals such as Virtual Reconfigurable Circuits (VRCs). This work also provides an original comparison of the implementation details of both approaches. – A fault-tolerant, self-healing platform that allows online functional recovery in hazardous environments. The platform has been characterised from a fault tolerance perspective: fault models at the CLB level and at the processing element level are proposed and, using the reconfiguration engine, a systematic fault analysis is carried out for one fault in each processing element and for two accumulated faults. – A platform with dynamic filtering quality that allows online adaptation to different noise types and different computational behaviours taking into account the available processing resources.
On the one hand, filters with non-destructive behaviour are evolved, enabling scalable cascaded filtering schemes; on the other, scalable filters are also evolved taking into account dynamically changing computational filtering requirements. This document is organised in four parts and nine chapters. The first part contains Chapter 1, an introduction to and motivation of this thesis work. Next, the frame of reference of this thesis is analysed in the second part: Chapter 2 contains an introduction to the concepts of self-adaptation and autonomic computing as a research field more general than the very specific one of this work; Chapter 3 introduces evolutionary computation as the technique to drive adaptation; Chapter 4 analyses reconfigurable computing platforms as the technology to host self-adaptive hardware; and finally, Chapter 5 defines, classifies and surveys the field of evolvable hardware. The third part of this work then contains the proposal, development and results obtained: while Chapter 6 contains a statement of the thesis objectives and the description of the proposal as a whole, Chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, Chapter 9 of Part 4 concludes the work and describes future research paths.
ABSTRACT
Embedded systems have traditionally been conceived to be specific-purpose computers with one, fixed computational task for their whole lifetime. Stringent requirements in terms of cost, size and weight forced designers to highly optimise their operation for very specific conditions.
However, demands for versatility, more intelligent behaviour and, in summary, increased computing capability began to clash with these limitations, intensified by the uncertainty associated with the more dynamic operating environments where these systems were progressively being deployed. This brought as a result an increasing need for systems to respond by themselves to events unforeseen at design time, such as: changes in input data characteristics and the system environment in general; changes in the computing platform itself, e.g., due to faults and fabrication defects; and changes in functional specifications caused by dynamically changing system objectives. As a consequence, system complexity is increasing but, in turn, autonomous lifetime adaptation without human intervention is progressively being enabled, allowing systems to take their own decisions at run-time. Such systems are known, in general, as self-adaptive, and are capable, among other things, of self-configuration, self-optimisation and self-repair. Traditionally, the software part of a system has been almost the only place where some degree of adaptation capability could be provided. However, the performance-to-power ratios of software-driven devices like microprocessors are inadequate for embedded systems in many situations. In this scenario, the resulting rise in application complexity is being partly addressed by raising device complexity in the form of multi- and many-core devices; unfortunately, this keeps increasing power consumption. Besides, design methodologies have not improved accordingly to fully leverage the computational power available from all these cores. Altogether, these factors mean that the computing demands posed by new applications are not being wholly satisfied. The traditional solution to improve performance-to-power ratios has been the switch to hardware-driven specifications, mainly using ASICs.
However, their costs are prohibitive except for some mass-production cases and, besides, the static nature of their structure complicates addressing the adaptation needs. Advancements in fabrication technologies have turned the once slow, small FPGA, used as glue logic in bigger systems, into a very powerful, reconfigurable computing device with a vast amount of computational logic resources and embedded, hardened signal- and general-purpose processing cores. Its reconfiguration capabilities have enabled software-like flexibility to be combined with hardware-like computing performance, which has the potential to cause a paradigm shift in computer architecture, since hardware can no longer be considered static. This is so because, as is the case with SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible. This means that subsets of the FPGA computational resources can now be changed (reconfigured) at run-time while the rest remain active. Besides, this reconfiguration process can be triggered internally by the device itself. This technological boost in reconfigurable hardware devices is covered under the field known as Reconfigurable Computing. One of the most exotic fields of application that Reconfigurable Computing has enabled is the one known as Evolvable Hardware (EHW), in which this dissertation is framed. The main idea behind the concept is turning hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural, biological species, which guides the direction of change. It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers.
In analogy to the biological process of evolution, in EHW the subject of evolution is a population of circuits that tries to adapt to its surrounding environment by becoming progressively better fitted to it, generation after generation. Individuals are circuit configurations, represented as bitstreams that encode reconfigurable circuit descriptions. By selecting those that behave better, i.e., those with a higher fitness value after being evaluated, and using them as parents of the following generation, the EA creates a new offspring population by applying so-called genetic operators such as mutation and recombination. As generations succeed one another, the whole population is expected to approach the optimum solution to the problem of finding an adequate circuit configuration that fulfils the system objectives. The state of reconfiguration technology after the Xilinx XC6200 FPGA family was discontinued and replaced by the Virtex families in the late 90s was a major obstacle to advancements in EHW: closed (not publicly documented) bitstream formats; dependence on manufacturer tools with very limited DPR support; slow reconfiguration; and the potential hazard that random bitstream modifications pose to device integrity are some of the reasons. However, a proposal from the early 2000s, the Virtual Reconfigurable Circuit (VRC), allowed research in this field to continue while DPR technology kept maturing. In essence, a VRC in an FPGA is a virtual layer acting as an application-specific reconfigurable circuit on top of the FPGA fabric, reducing the complexity of the reconfiguration process and increasing its speed (compared to native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions that define ad-hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each one containing all the required functions, which are selectable through functionality multiplexers as in microprocessor ALUs.
A large register acts as configuration memory, so VRC reconfiguration is very fast, given that it only involves writing this register, which drives the selection signals of the set of multiplexers. However, this virtual layer introduces large overheads: an area overhead, due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, which also reduces the maximum operating frequency. The very nature of Evolvable Hardware, able to optimise its own computational behaviour, makes it a good candidate for advancing research in self-adaptive systems. Combining a self-reconfigurable computing substrate that can be dynamically changed at run-time with an embedded algorithm that provides a direction for change can help fulfil the requirements for autonomous lifetime adaptation of FPGA-based embedded systems. The main proposal of this thesis is hence directed at contributing to the autonomous self-adaptation of the underlying computational hardware of FPGA-based embedded systems by means of Evolvable Hardware. This is tackled by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters. Two main lines of work derive from this distinction: on one side, parametric self-adaptation and, on the other, structural self-adaptation. The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for the online parameter adaptation of signal processing circuits. The application selected as proof of concept is the optimisation of Discrete Wavelet Transform (DWT) filter coefficients for very specific types of images, oriented to image compression. Hence, adaptive and improved compression efficiency, as compared to standard techniques, is the required goal of evolution.
The main quest lies in reducing the supercomputing resources reported in previous works for the optimisation process in order to make it suitable for embedded systems. Regarding structural self-adaptation, the thesis goal is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of native reconfiguration capabilities. In this case, the evolution of image processing tasks, such as the filtering of unknown and changing types of noise, and edge detection, are the selected proofs of concept. In general, evolving image processing behaviours that are unknown at design time (within a certain complexity range) is the required goal. In this case, the mission of the proposal is the incorporation of DPR in EHW to evolve a systolic array architecture, adaptable through reconfiguration, whose evolvability had not been previously verified. In order to achieve the two stated goals, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE).
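The interplay among these three engines can be illustrated with a minimal software sketch; all names, interfaces, the function library and the toy task below are assumptions made for illustration, not the thesis implementation.

```python
import random

FUNCTIONS = [  # toy functionality "library" of a processing node (selected as in a VRC/ALU)
    lambda a, b: a + b,
    lambda a, b: a - b,
    lambda a, b: max(a, b),
    lambda a, b: min(a, b),
]

def computing_engine(config, inputs):
    """CE: a 1D chain of processing nodes; each config entry selects one node function."""
    acc = inputs[0]
    for sel, x in zip(config, inputs[1:]):
        acc = FUNCTIONS[sel](acc, x)
    return acc

def reconfiguration_engine(config):
    """RE: in hardware this would write partial bitstreams or registers;
    here it simply returns a callable with the configuration applied."""
    return lambda inputs: computing_engine(config, inputs)

def adaptation_engine(fitness, n_nodes=4, pop=12, gens=60, mut=0.1):
    """AE: a minimal generational EA over node configurations."""
    popn = [[random.randrange(len(FUNCTIONS)) for _ in range(n_nodes)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        parents = popn[: pop // 2]                 # truncation selection: keep fitter half
        children = []
        while len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_nodes)     # recombination: one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.randrange(len(FUNCTIONS)) if random.random() < mut else g
                     for g in child]               # per-gene mutation
            children.append(child)
        popn = children
    return max(popn, key=fitness)

# toy objective: make the CE sum its five inputs (selecting "add" everywhere is optimal)
inputs = [1, 2, 3, 4, 5]
fitness = lambda cfg: -abs(reconfiguration_engine(cfg)(inputs) - sum(inputs))
best = adaptation_engine(fitness)
```

The split mirrors the proposal: the AE only manipulates configurations, the RE is the only component that "touches" the CE, and the CE is evaluated as a black box.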
In the case of parametric adaptation, the proposed platform is characterised by:
• a CE featuring a DWT hardware processing core adaptable through reconfigurable registers that hold the wavelet filter coefficients
• an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems with scarce computing resources
• a new, simplified mutation operator for the selected EA that, together with a fast evaluation mechanism for candidate wavelet filters derived from the existing literature, assures the feasibility of the evolutionary search involved in wavelet adaptation
In the case of structural adaptation, the platform proposal takes the form of:
• a CE based on a reconfigurable 2D systolic array template composed of reconfigurable processing nodes
• an evolutionary algorithm as AE that searches for candidate configurations of the array using a set of computational functionalities for the nodes, available in a run-time accessible library
• a hardware RE that exploits the native DPR capabilities of FPGAs and makes an efficient use of the available reconfigurable resources of the device to change the behaviour of the CE at run time
• a library of reconfigurable processing elements, characterised by position-independent partial bitstreams, used as the set of available configurations for the processing nodes of the array
The main contributions of this thesis can be summarised in the following list.
• An FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems, composed of a Computing Engine, an evolutionary Adaptation Engine and a Reconfiguration Engine. This platform is further developed and tailored for both parametric and structural self-adaptation.
• Regarding parametric self-adaptation, the main contributions are:
– A CE adaptable through reconfigurable registers that enables parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core.
– An AE based on an Evolutionary Algorithm specifically developed for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems.
– A run-time self-adaptive DWT IP core for embedded systems that allows for online optimisation of transform performance for image compression in specific deployment environments characterised by different types of input signals.
– A software model and hardware implementation of a tool for the automatic, evolutionary construction of custom wavelet transforms.
• Lastly, regarding structural self-adaptation, the main contributions are:
– A CE adaptable through native FPGA fabric reconfiguration, characterised by a two-dimensional systolic array template of reconfigurable processing nodes. Different processing behaviours can be automatically mapped onto the array by using a library of simple reconfigurable processing elements.
– Definition of a library of such processing elements suited for the autonomous run-time synthesis of different image processing tasks.
– Efficient incorporation of DPR in EHW systems, overcoming the main drawbacks of the previous approach of virtual reconfigurable circuits. Implementation details of both approaches are also originally compared in this work.
– A fault-tolerant, self-healing platform that enables online functional recovery in hazardous environments. The platform has been characterised from a fault-tolerance perspective: fault models at FPGA CLB level and at processing-element level are proposed and, using the RE, a systematic fault analysis is carried out for one fault in every processing element and for two accumulated faults.
– A dynamic filtering-quality platform that permits online adaptation to different types of noise and different computing behaviours, considering the available computing resources. On one side, non-destructive filters are evolved, enabling scalable cascaded filtering schemes; on the other, size-scalable filters are also evolved considering dynamically changing computational filtering requirements. This dissertation is organised in four parts and nine chapters. The first part contains chapter 1, the introduction to and motivation of this PhD work. Next, the reference framework in which this dissertation is framed is analysed in the second part: chapter 2 features an introduction to the notions of self-adaptation and autonomic computing as a more general research field than the very specific one of this work; chapter 3 introduces evolutionary computation as the technique to drive adaptation; chapter 4 analyses platforms for reconfigurable computing as the technology to hold self-adaptive hardware; and, finally, chapter 5 defines, classifies and surveys the field of Evolvable Hardware. The third part of the work follows, containing the proposal, its development and the results obtained: while chapter 6 contains a statement of the thesis goals and the description of the proposal as a whole, chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9 in part 4 concludes the work and describes future research paths.
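The systematic fault analysis listed among the contributions (one fault in every processing element, then two accumulated faults) amounts to exhaustive fault injection over the processing-element grid. A minimal sketch of that enumeration follows; the evaluation function and fault cost are toy assumptions, not the thesis fault models.

```python
from itertools import combinations

def fault_analysis(n_elements, evaluate):
    """Systematic fault injection: score the array under every single fault,
    then under every pair of accumulated faults (illustrative only)."""
    single = {(i,): evaluate({i}) for i in range(n_elements)}
    double = {pair: evaluate(set(pair)) for pair in combinations(range(n_elements), 2)}
    return single, double

# toy evaluation: each faulty element costs 10 quality points out of 100
single, double = fault_analysis(4, evaluate=lambda faults: 100 - 10 * len(faults))
assert single[(0,)] == 90
assert double[(0, 1)] == 80
assert len(double) == 6   # C(4,2) fault pairs
```

In the self-healing platform, the scores from such a sweep would guide the RE's choice of a replacement configuration; here the sweep is reduced to its combinatorial skeleton.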
Resumo:
The computational advantages of using different approaches, numerical and analytical, for the analysis of different parts of the same shell structure are discussed. Examples are described of large-size problems that can be reduced to forms more suitable for handling on a personal computer, involving axisymmetric finite elements, local non-axisymmetric shells, geometrically quasi-regular shells, infinite elements and homogenization techniques.
Resumo:
A Mindlin plate with periodically distributed rib patterns is analyzed by using homogenization techniques based on asymptotic expansion methods. The stiffness matrix of the homogenized plate is found to be dependent on the geometrical characteristics of the periodic cell, i.e., its skewness, plan shape, thickness variation, etc., and on the elastic constants of the plate material. The computation of this plate stiffness matrix is carried out by averaging, over the cell domain, the solutions of different periodic boundary value problems. These boundary value problems are defined in variational form by linear first-order differential operators on the cell domain, and the boundary conditions of the variational equation correspond to a periodic structural problem. The elements of the stiffness matrix of the homogenized plate are obtained as linear combinations of the averaged solution functions of the above-mentioned boundary value problems. Finally, an illustrative example is shown of the application of this homogenization technique to hollowed plates and to plate structures with rib patterns regularly arranged over their area. The possibility of using the present procedure in professional practice for the actual analysis of floors of typical buildings is also emphasized.
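For orientation only, the cell-averaging step described above has the generic shape of asymptotic homogenization; the abstract does not give the authors' exact formulation, so the following is the standard textbook form, with \(\chi^{kl}\) denoting the periodic cell-problem solutions:

```latex
D^{H}_{ijkl} \;=\; \frac{1}{|Y|}\int_{Y} D_{ijpq}(y)
\left(\delta_{pk}\,\delta_{ql} + \frac{\partial \chi^{kl}_{p}(y)}{\partial y_{q}}\right)\mathrm{d}y
```

where \(Y\) is the periodic cell, \(D(y)\) the local stiffness, and each \(\chi^{kl}\) solves a periodic boundary value problem on \(Y\); the entries of \(D^{H}\) are thus linear combinations of cell averages of the solution functions, as the abstract states.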
Resumo:
The Oct-1 POU domain binds diverse DNA-sequence elements and forms a higher-order regulatory complex with the herpes simplex virus coregulator VP16. The POU domain contains two separate DNA-binding domains joined by a flexible linker. By protein–DNA photocrosslinking we show that the relative positioning of the two POU DNA-binding domains on DNA varies depending on the nature of the DNA target. On a single VP16-responsive element, the POU domain adopts multiple conformations. To determine the structure of the Oct-1 POU domain in a multiprotein complex with VP16, we allowed VP16 to interact with previously crosslinked POU-domain–DNA complexes and found that VP16 can associate with multiple POU-domain conformations. These results reveal the dynamic potential of a DNA-binding domain in directing transcriptional regulatory complex formation.
Resumo:
Three novel families of transposable elements, Wukong, Wujin, and Wuneng, are described in the yellow fever mosquito, Aedes aegypti. Their copy numbers range from 2,100 to 3,000 per haploid genome. There are high degrees of sequence similarity within each family, and many structural but not sequence similarities between families. The common structural characteristics include small size, no coding potential, terminal inverted repeats, potential to form a stable secondary structure, A+T richness, and putative 2- to 4-bp A+T-biased specific target sites. Evidence of previous mobility is presented for the Wukong elements. Elements of these three families are associated with 7 of 16 fully or partially sequenced Ae. aegypti genes. Characteristics of these mosquito elements indicate strong similarities to the miniature inverted-repeat transposable elements (MITEs) recently found to be associated with plant genes. MITE-like elements have also been reported in two species of Xenopus and in Homo sapiens. This characterization of multiple families of highly repetitive MITE-like elements in an invertebrate extends the range of these elements in eukaryotic genomes. A hypothesis is presented relating genome size and organization to the presence of highly reiterated MITE families. The association of MITE-like elements with Ae. aegypti genes shows the same bias toward noncoding regions as in plants. This association has potentially important implications for the evolution of gene regulation.
Resumo:
Interactions among transcription factors that bind to separate sequence elements require bending of the intervening DNA and juxtaposition of interacting molecular surfaces in an appropriate orientation. Here, we examine the effects of single amino acid substitutions adjacent to the basic regions of Fos and Jun as well as changes in sequences flanking the AP-1 site on DNA bending. Substitution of charged amino acid residues at positions adjacent to the basic DNA-binding domains of Fos and Jun altered DNA bending. The change in DNA bending was directly proportional to the change in net charge for all heterodimeric combinations between these proteins. Fos and Jun induced distinct DNA bends at different binding sites. Exchange of a single base pair outside of the region contacted in the x-ray crystal structure altered DNA bending. Substitution of base pairs flanking the AP-1 site had converse effects on the opposite directions of DNA bending induced by homodimers and heterodimers. These results suggest that Fos and Jun induce DNA bending in part through electrostatic interactions between amino acid residues adjacent to the basic region and base pairs flanking the AP-1 site. DNA bending by Fos and Jun at inverted binding sites indicated that heterodimers bind to the AP-1 site in a preferred orientation. Mutation of a conserved arginine within the basic regions of Fos and transversion of the central C:G base pair in the AP-1 site to G:C had complementary effects on the orientation of heterodimer binding and DNA bending. The conformational variability of the Fos–Jun–AP-1 complex may contribute to its functional versatility at different promoters.
Resumo:
Mutant, but not wild-type p53 binds with high affinity to a variety of MAR-DNA elements (MARs), suggesting that MAR-binding of mutant p53 relates to the dominant-oncogenic activities proposed for mutant p53. MARs recognized by mutant p53 share AT richness and contain variations of an AATATATTT “DNA-unwinding motif,” which enhances the structural dynamics of chromatin and promotes regional DNA base-unpairing. Mutant p53 specifically interacted with MAR-derived oligonucleotides carrying such unwinding motifs, catalyzing DNA strand separation when this motif was located within a structurally labile sequence environment. Addition of GC-clamps to the respective MAR-oligonucleotides or introducing mutations into the unwinding motif strongly reduced DNA strand separation, but supported the formation of tight complexes between mutant p53 and such oligonucleotides. We conclude that the specific interaction of mutant p53 with regions of MAR-DNA with a high potential for base-unpairing provides the basis for the high-affinity binding of mutant p53 to MAR-DNA.
Resumo:
Eight novel families of miniature inverted repeat transposable elements (MITEs) were discovered in the African malaria mosquito, Anopheles gambiae, by using new software designed to rapidly identify MITE-like sequences based on their structural characteristics. Divergent subfamilies have been found in two families. Past mobility was demonstrated by evidence of MITE insertions that resulted in the duplication of specific TA, TAA, or 8-bp targets. Some of these MITEs share the same target duplications and similar terminal sequences with MITEs and other DNA transposons in human and other organisms. MITEs in A. gambiae range from 40 to 1340 copies per genome, much less abundant than MITEs in the yellow fever mosquito, Aedes aegypti. Statistical analyses suggest that most A. gambiae MITEs are in highly AT-rich regions, many of which are closely associated with each other. The analyses of these novel MITEs underscored interesting questions regarding their diversity, origin, evolution, and relationships to the host genomes. The discovery of diverse families of MITEs in A. gambiae has important practical implications in light of current efforts to control malaria by replacing vector mosquitoes with genetically modified refractory mosquitoes. Finally, the systematic approach to rapidly identify novel MITEs should have broad applications for the analysis of the ever-growing sequence databases of a wide range of organisms.
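The structural screen underlying such software can be sketched in a few lines: an element qualifies if it carries terminal inverted repeats and is flanked by a duplicated target site (e.g. TA, TAA, or 8 bp, as above). The thresholds and the helper names below are illustrative assumptions, not the actual program described in the abstract.

```python
def revcomp(seq: str) -> str:
    """Reverse complement of an uppercase DNA string (illustrative helper)."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def looks_like_mite(element: str, flank5: str, flank3: str,
                    tir_len: int = 10, tsd_lens=(2, 3, 8)) -> bool:
    """Structural screen only: terminal inverted repeats (TIRs) plus a
    duplicated target site (TSD) in the flanking sequence. Thresholds are
    toy values, not those of the published software."""
    has_tir = element[:tir_len] == revcomp(element[-tir_len:])
    has_tsd = any(flank5[-n:] == flank3[:n] for n in tsd_lens)
    return has_tir and has_tsd

# toy element: its first 10 bp are the reverse complement of its last 10 bp,
# and it sits between a duplicated "TA" target site
left = "ACGTACGTAC"
toy = left + "TTTT" + revcomp(left)
assert looks_like_mite(toy, flank5="GGTA", flank3="TAGG")
```

A real scan would slide such a test across a genome and then cluster hits into families by sequence similarity, which is where the divergent subfamilies mentioned above would emerge.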
Resumo:
Several recent reports indicate that mobile elements are frequently found in and flanking many wild-type plant genes. To determine the extent of this association, we performed computer-based systematic searches to identify mobile elements in the genes of two "model" plants, Oryza sativa (domesticated rice) and Arabidopsis thaliana. Whereas 32 common sequences belonging to nine putative mobile element families were found in the noncoding regions of rice genes, none were found in Arabidopsis genes. Five of the nine families (Gaijin, Castaway, Ditto, Wanderer, and Explorer) are first described in this report, while the other four were described previously (Tourist, Stowaway, p-SINE1, and Amy/LTP). Sequence similarity, structural similarity, and documentation of past mobility strongly suggest that many of the rice common sequences are bona fide mobile elements. Members of four of the new rice mobile element families are similar in some respects to members of the previously identified inverted-repeat element families, Tourist and Stowaway. Together these elements are the most prevalent type of transposons found in the rice genes surveyed and form a unique collection of inverted-repeat transposons we refer to as miniature inverted-repeat transposable elements or MITEs. The sequence and structure of MITEs are clearly distinct from short or long interspersed nuclear elements (SINEs or LINEs), the most common transposable elements associated with mammalian nuclear genes. Mobile elements, therefore, are associated with both animal and plant genes, but the identity of these elements is strikingly different.