961 results for Self-diffusion coefficients


Relevance:

30.00%

Publisher:

Abstract:

How stable are individual differences in self-esteem? We examined the time-dependent decay of rank-order stability of self-esteem and tested whether stability asymptotically approaches zero or a nonzero value across long test–retest intervals. Analyses were based on 6 assessments across a 29-year period of a sample of 3,180 individuals aged 14 to 102 years. The results indicated that, as test–retest intervals increased, stability exponentially decayed and asymptotically approached a nonzero value (estimated as .43). The exponential decay function explained a large proportion of variance in observed stability coefficients, provided a better fit than alternative functions, and held across gender and for all age groups from adolescence to old age. Moreover, structural equation modeling of the individual-level data suggested that a perfectly stable trait component underlies stability of self-esteem. The findings suggest that the stability of self-esteem is relatively large, even across very long periods, and that self-esteem is a trait-like characteristic.
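The decay pattern described can be written compactly. As a hedged illustration (the notation here is ours; only the asymptote value is taken from the abstract), a rank-order stability coefficient r over a test-retest interval \Delta t that decays exponentially toward a nonzero asymptote c has the form

r(\Delta t) = (1 - c)\, e^{-b\, \Delta t} + c, \qquad c \approx .43,

so r \to 1 as \Delta t \to 0 and r \to c \approx .43 as \Delta t \to \infty; the rate b sets how quickly stability levels off, and the nonzero asymptote c is what the perfectly stable trait component accounts for.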

Relevance:

30.00%

Publisher:

Abstract:

The mechanisms of Ar release from K-feldspar samples in laboratory experiments and during their geological history are assessed here. Modern petrology has clearly established that the chemical and isotopic record of minerals is normally dominated by aqueous recrystallisation. The laboratory critique is trickier, which explains why so many conflicting approaches have been able to survive long past their expiration date. Current models are evaluated for self-consistency; Arrhenian non-linearity in particular leads to paradoxes. The models' testable geological predictions suggest that temperature-based downslope extrapolations often substantially overestimate observed geological Ar mobility. An updated interpretation is based on the unrelatedness of geological behaviour to laboratory experiments. The isotopic record of K-feldspar in geological samples is not a unique function of temperature, as recrystallisation promoted by aqueous fluids is the predominant mechanism controlling isotope transport. K-feldspar should therefore be viewed as a hygrochronometer. Laboratory degassing proceeds from structural rearrangements and phase transitions such as are observed in situ at high temperature in Na and Pb feldspars. These effects violate the mathematics of an inert Fick's Law matrix and preclude downslope extrapolation. The similar upward-concave, non-linear shapes of the Arrhenius trajectories of many silicates, hydrous and anhydrous, are likely common manifestations of structural rearrangements in silicate structures.
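For orientation (standard diffusion kinetics, not notation from the paper): laboratory release data are conventionally fitted to an Arrhenius law,

D = D_0 \, e^{-E_a / RT},

so that \ln D plotted against 1/T is a straight line of slope -E_a/R, and "downslope extrapolation" means extending that line to lower, geological temperatures. The upward-concave trajectories discussed here are departures from this linearity, which is why such extrapolation fails.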

Relevance:

30.00%

Publisher:

Abstract:

Affinity retardation chromatography (ARC), a method for the examination of low-affinity interactions, is described mathematically in order to characterize the method itself and to estimate binding coefficients of self-assembly domains of the basement membrane protein laminin. Affinity retardation was determined by comparing the elutions on a "binding" and on a "nonreacting" column. It depends on the binding coefficient, the concentrations of both ligands, and the nonbinding elution position. Half-maximal binding of the NH2-terminal domain of the laminin B1 short arm to the A and/or B2 short arms was estimated to occur at 10–17 µM for noncooperative and at ≤ 3 µM for cooperative binding. A model of laminin polymerization postulating two levels of cooperative binding behavior is described.
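The noncooperative figure can be read as a dissociation constant (a standard relation, not the paper's own derivation): for a simple bimolecular equilibrium the bound fraction is

\theta = \frac{[L]}{K_d + [L]},

which equals one half exactly when [L] = K_d, so half-maximal binding at 10–17 µM corresponds to K_d \approx 10–17 µM in the noncooperative case.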

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Taiji is a mind-body practice increasingly investigated for its therapeutic benefits in a broad range of mental and physical conditions. The aim of the present study was to investigate potential preventive effects of Taiji practice in healthy individuals with regard to their depressive symptomatology and physical wellbeing. Methods: A total of 70 healthy Taiji novices (mean age 35.5 years) were randomly assigned to a Taiji intervention group, i.e., a Taiji beginner course (Yang-Style Taiji, 2 hours per week, 12 weeks), or a waiting control group. Self-reported symptoms of depression (CES-D) and physical wellbeing (FEW-16) were assessed at baseline, at the end of the intervention, and two months later. Results: Physical wellbeing in the Taiji group significantly increased from baseline to follow-up (FEW-16 sum scale, t(27) = 3.94, p = 0.001, 95% CI 0.17–0.55). Pearson's correlation coefficients showed a strong negative relationship between self-reported symptoms of depression and physical wellbeing (all p < 0.001, r ≤ −.54). Conclusions: In this randomized controlled trial we found significant evidence that a Taiji beginner course of three months' duration elicits positive effects on physical wellbeing in healthy individuals, with improvements becoming more pronounced over time. Physical wellbeing was shown to have a strong negative relationship with depressive symptoms. Based on these results, considering Taiji as one therapeutic option in the development of multimodal approaches to the prevention of depression seems justifiable.

Relevance:

30.00%

Publisher:

Abstract:

Ordinal logistic regression models are used to analyze dependent variables with multiple outcomes that can be ranked, but they have been underutilized. In this methodological study, we describe four logistic regression models for analyzing an ordinal response variable: the first uses the multinomial logistic model, the second is the adjacent-category logit model, the third is the proportional odds model, and the fourth is the continuation-ratio model. We illustrate and compare the fit of these models using data from the survey designed by the University of Texas School of Public Health research project PCCaSO (Promoting Colon Cancer Screening in people 50 and Over), to study patients' confidence in completing colorectal cancer screening (CRCS). The purpose of this study is twofold: first, to provide a synthesized review of models for analyzing data with an ordinal response, and second, to evaluate their usefulness in epidemiological research, with particular emphasis on model formulation, interpretation of model coefficients, and their implications. The four ordinal logistic models used in this study are (1) the multinomial logistic model, (2) the adjacent-category logistic model [9], (3) the continuation-ratio logistic model [10], and (4) the proportional odds logistic model [11]. We recommend that the analyst perform (1) goodness-of-fit tests and (2) sensitivity analysis by fitting and comparing different models.
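For reference, the four model forms can be sketched as follows, for a response Y with ordered categories j = 1, ..., J and covariate vector x (one standard parameterisation; sign conventions vary across texts, and these formulas are illustrative rather than quoted from the study):

\log \frac{P(Y = j)}{P(Y = J)} = \alpha_j + \beta_j^{\top} x \quad \text{(multinomial, baseline category } J\text{)}

\log \frac{P(Y = j)}{P(Y = j+1)} = \alpha_j + \beta^{\top} x \quad \text{(adjacent-category)}

\operatorname{logit} P(Y = j \mid Y \ge j) = \alpha_j + \beta^{\top} x \quad \text{(continuation-ratio)}

\operatorname{logit} P(Y \le j) = \alpha_j - \beta^{\top} x \quad \text{(proportional odds)}

The proportional odds model is the most restrictive, assuming a single common \beta across all J − 1 cut-points, which is exactly the kind of assumption the recommended goodness-of-fit and sensitivity analyses can check.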

Relevance:

30.00%

Publisher:

Abstract:

A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase, with an evolutionary algorithm responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core aimed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system in the final implementation device, whether a HW- or SW-based computing device. The architecture has been built in a modular way so that it can be easily extended to adapt other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
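As a hedged illustration of the kind of search loop described (a minimal (1+λ) Evolution Strategy over a bitstring genome; the names and the fitness function are ours, not the paper's implementation):

import random

def evolve(fitness, genome_len, lam=4, generations=100, mut_rate=0.02):
    """Minimal (1+lambda) Evolution Strategy over a bitstring genome."""
    parent = [random.randint(0, 1) for _ in range(genome_len)]
    best = fitness(parent)
    for _ in range(generations):
        # bit-flip mutation is the only genetic operator, which keeps a
        # hardware implementation simple (no recombination logic needed)
        children = [[b ^ (random.random() < mut_rate) for b in parent]
                    for _ in range(lam)]
        scores = [fitness(c) for c in children]
        top = max(range(lam), key=scores.__getitem__)
        if scores[top] >= best:  # accept ties so the search can drift
            parent, best = children[top], scores[top]
    return parent, best

# Toy usage with a trivial fitness (count of ones); in the paper's setting
# the fitness would be the compression quality achieved by the wavelet
# core that the genome configures.
best_cfg, best_score = evolve(fitness=sum, genome_len=32)

A (1+λ) scheme with mutation only is a common choice for hardware-friendly Evolution Strategies, since it needs storage for a single parent and avoids recombination logic.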

Relevance:

30.00%

Publisher:

Abstract:

A single, nonlocal expression for the electron heat flux, which closely reproduces known results at high and low ion charge number Z, and "exact" results for the local limit at all Z, is derived by solving the kinetic equation in a narrow, tail-energy range. The solution involves asymptotic expansions of Bessel functions of large argument and (Z-dependent) order above or below it, corresponding to the possible parabolic or hyperbolic character of the kinetic equation; velocity-space diffusion in self-scattering is treated similarly to isotropic thermalization of tail energies in large-Z analyses. The scale length H characterizing nonlocal effects varies with Z, suggesting an equal dependence of any ad hoc flux limiter. The model is valid for all H above the mean-free path for thermal electrons.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the mathematical description of the temporal self-imaging effect is studied, focusing on the situation in which the train of pulses to be dispersed has previously been periodically modulated in phase and amplitude. It is demonstrated that, for each input pulse and for some specific values of the chromatic dispersion, a subtrain of optical pulses is generated whose envelope is determined by the Discrete Fourier Transform of the modulating coefficients. The mathematical results are confirmed by simulations of various examples, and some limits on the practical realization of the theory are discussed.
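Schematically, with notation assumed here rather than taken from the paper: if the input train carries periodic complex modulating coefficients c_n of period N, the result states that at the appropriate dispersion values each input pulse is converted into a subtrain whose complex amplitudes a_k sample the Discrete Fourier Transform of those coefficients,

a_k \propto \sum_{n=0}^{N-1} c_n \, e^{-i 2\pi n k / N}, \qquad k = 0, \dots, N-1.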

Relevance:

30.00%

Publisher:

Abstract:

The electronic structure and properties of the orthorhombic phase of the CH3NH3PbI3 perovskite are computed with density functional theory. The structure, optimized using a van der Waals functional, closely reproduces the unit cell volume. The experimental band gap is reproduced accurately by combining spin-orbit effects with a hybrid functional in which the fraction of exact exchange is tuned self-consistently to the optical dielectric constant. Including spin-orbit coupling strongly reduces the anisotropy of the effective mass tensor, predicting a low electron effective mass in all crystal directions. The computed binding energy of the unrelaxed exciton agrees with experimental data, and the values found imply fast exciton dissociation at ambient temperature. Polaron masses for the separated carriers are also estimated. The values of all these parameters agree with recent indications that fast dynamics and large carrier diffusion lengths are key to the high photovoltaic efficiencies shown by these materials.
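The "fast dissociation at ambient temperature" conclusion is a comparison of energy scales (our gloss, using standard constants): an exciton dissociates readily when its binding energy E_b does not greatly exceed the thermal energy,

k_B T = 8.617 \times 10^{-5}\ \mathrm{eV\,K^{-1}} \times 300\ \mathrm{K} \approx 26\ \mathrm{meV},

so a computed E_b of this order implies efficient thermally driven dissociation into free carriers.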

Relevance:

30.00%

Publisher:

Abstract:

Self-compacting concrete has been the subject of research since the mid-1980s. Its use in construction grows ever more common thanks to its numerous advantages, notably its excellent fluidity: it can flow under its own weight and fill formworks with complicated shapes and congested reinforcement without internal or external compaction. At the same time, the search for more resistant and durable materials has led to the incorporation of additions into cement-based materials. In the last two decades, testing with nanomaterials has increased sharply. The results obtained so far suggest not only an increase in the strength of these materials but also a change in their functionality. These nanoparticles, particularly nanosilica, not only improve mechanical properties and especially durability, but may also imply a substantial change in the conditions of use and in the life cycle of the material.

The main objective of this work is to study the mechanical properties, microstructural characteristics and durability of a self-compacting concrete when nanosilica, microsilica, and binary blends of both are added to the cement. To this end, 10 concrete mixes were made. The reference was a self-compacting concrete made with cement, limestone filler, aggregate, a viscosity-modifying admixture and a fixed water/binder ratio. Three concretes were made with the same dosage but different nanosilica contents (2.5%, 5% and 7.5% by weight of cement), three with microsilica additions (2.5%, 5% and 7.5%), and the remaining three with binary nanosilica-microsilica blends (2.5%-2.5%, 5%-2.5% and 2.5%-5%, respectively, by weight of cement). Only the superplasticizer content was adjusted to achieve the self-compactability characteristics.

To observe the effects of the additions, an extensive experimental campaign was carried out. First, the self-compactability of the fresh material was assessed using the tests prescribed in the Spanish structural concrete code EHE-08. Mechanical properties were evaluated by compressive strength, splitting tensile strength and modulus of elasticity tests. Microstructural characteristics were analyzed by mercury intrusion porosimetry, thermogravimetric analysis and scanning electron microscopy. Durability was studied through tests of electrical resistivity, chloride migration, chloride diffusion, accelerated carbonation, capillary absorption and freeze-thaw resistance.

The results show that the additions improve the strength properties of the material. The nanosilica addition provides higher compressive strengths than microsilica; however, binary blends with low addition contents produce higher strengths still. Determination of gel/portlandite ratios showed that mixes containing nanosilica have higher pozzolanic activity than those containing microsilica, and that within the binary blends, the higher the nanosilica content, the higher the pozzolanic activity. In line with this, the porosity study shows that the nanosilica addition refines the pore size, whereas the microsilica addition reduces the number of pores without changing the mean pore size. In the micrographs, crystals formed by cement hydration were observed: with nanosilica, the hydration rate increases, with greater formation of monosulfoaluminates and scarce ettringite, while mixes with microsilica show more ettringite crystals, confirming that their hydration rate was lower.

The durability results show no significant differences between the chloride migration coefficient and the chloride diffusion coefficient in concretes with either nano- or microsilica additions, although the coefficient is slightly lower in mixes with microsilica. However, binary blends of both additions yielded chloride diffusion and migration coefficients lower than those obtained with a single addition, as evidenced by the electrical resistivity, chloride diffusion and chloride migration tests. This may be due to the combined effects of the nano and micro additions on porosity: nanosilica plays an important role in refining the pores, and microsilica decreases their total volume. This allows the service life of these concretes to be set at values far above those required by EHE-08, making it possible to reduce considerably the cover required in highly aggressive environments while ensuring good in-service behavior. Moreover, the mass loss due to freeze-thaw cycles is significantly lower in concretes containing nanosilica than in those containing microsilica, a result consistent with the capillary absorption test.

In general, it can be concluded that the binary blends, and specifically the mix with 5% nanosilica and 2.5% microsilica, give the best results in both strength and durability. This may be because in these mixes the nanosilica acts as an activation nucleus for the pozzolanic reactions, surrounded by larger particles. The outstanding durability may also be due to the continuity of the grading curve provided by the microsilica, limestone filler, cement, sand and gravel, with particle sizes that guarantee very compact, high-performance mixes.
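The service-life estimates mentioned above rest on chloride transport modelling. As a hedged aside (the standard engineering model, not a formula reproduced from this work), chloride ingress into concrete is commonly described by the error-function solution of Fick's second law,

C(x, t) = C_s \left[ 1 - \operatorname{erf}\!\left( \frac{x}{2\sqrt{D_a t}} \right) \right],

where C_s is the surface chloride concentration and D_a the apparent diffusion coefficient measured in tests such as those used here; service life is then the time for the chloride content at the rebar cover depth x to reach a critical threshold, so a lower D_a translates directly into a longer service life or a thinner required cover.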

Relevance:

30.00%

Publisher:

Abstract:

Embedded systems have traditionally been conceived as specific-purpose computers with one fixed computational task for their whole lifetime. Stringent requirements in terms of cost, size and weight forced designers to highly optimise their operation for very specific conditions. However, demands for versatility, more intelligent behaviour and, in summary, an increased computing capability began to clash with these limitations, intensified by the uncertainty associated with the more dynamic operating environments where such systems were progressively being deployed. This brought as a result an increasing need for systems to respond by themselves to events unexpected at design time, such as: changes in input data characteristics and the system environment in general; changes in the computing platform itself, e.g. due to faults and fabrication defects; and changes in functional specifications caused by dynamically changing system objectives. As a consequence, system complexity is increasing, but in turn, autonomous lifetime adaptation without human intervention is progressively being enabled, allowing systems to take their own decisions at run time. Such systems are known, in general, as self-adaptive, and are capable, among other things, of self-configuration, self-optimisation and self-repair.

Traditionally, the soft part of a system has been almost the only place to provide some degree of adaptation capability. However, the performance-to-power ratios of software-driven devices like microprocessors are in many situations not adequate for embedded systems. In this scenario, the resulting rise in application complexity is being partly addressed by raising device complexity in the form of multi- and many-core devices; but sadly, this keeps increasing power consumption. Besides, design methodologies have not improved at the same pace, so the computational power made available by all these cores cannot be completely leveraged. Altogether, these factors mean that the computing demands posed by new applications are not being wholly satisfied. The traditional solution for improving performance-to-power ratios has been the switch to hardware-driven specifications, mainly using ASICs. However, their costs are prohibitive except in some mass-production cases, and besides, the static nature of their structure complicates meeting the adaptation needs.

Advances in fabrication technologies have made the once slow, small FPGA, used as glue logic in bigger systems, grow into a very powerful reconfigurable computing device with a vast amount of computational logic resources and embedded, hardened signal and general-purpose processing cores. Its reconfiguration capabilities have enabled software-like flexibility to be combined with hardware-like computing performance, which has the potential to cause a paradigm shift in computer architecture, since hardware can no longer be considered static. This is so because, as is the case with SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible: subsets of the FPGA computational resources can be changed (reconfigured) at run time while the rest remain active. Besides, this reconfiguration process can be triggered internally by the device itself. This technological boost in reconfigurable hardware devices is covered under the field known as Reconfigurable Computing.

One of the most exotic fields of application that Reconfigurable Computing has enabled is the one known as Evolvable Hardware (EHW), in which this dissertation is framed. The main idea behind the concept is turning hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural, biological species, that guides the direction of change. It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers. In analogy to the biological process of evolution, in EHW the subject of evolution is a population of circuits that tries to adapt to its surrounding environment by progressively getting better fitted to it generation after generation. Individuals are circuit configurations, in the form of bitstreams, that describe reconfigurable circuits. By selecting those that behave better, i.e. those with a higher fitness value after being evaluated, and using them as parents of the following generation, the EA creates a new offspring population by means of so-called genetic operators like mutation and recombination. As generations succeed one another, the whole population is expected to approach the optimum solution to the problem of finding an adequate circuit configuration that fulfils system objectives.

The state of reconfiguration technology after the Xilinx XC6200 FPGA family was discontinued and replaced by the Virtex families in the late 90s was a major obstacle to advancement in EHW: closed (not publicly known) bitstream formats; dependence on manufacturer tools with very limited DPR support; slow reconfiguration; and the potential hazard to device integrity of random bitstream modifications are some of the reasons. However, a proposal in the early 2000s, the Virtual Reconfigurable Circuit (VRC), allowed research in this field to continue while DPR technology kept maturing. In essence, a VRC in an FPGA is a virtual layer acting as an application-specific reconfigurable circuit on top of the FPGA fabric that reduces the complexity of the reconfiguration process and increases its speed (compared to native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions that define ad hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each one containing all the required functions, which are selectable through functionality multiplexers as in microprocessor ALUs. A large register acts as configuration memory, so VRC reconfiguration is very fast, given that it only involves writing this register, which drives the selection signals of the set of multiplexers. However, this virtual layer introduces large overheads: an area overhead due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, which also reduces the maximum frequency of operation.

The very nature of Evolvable Hardware, able to optimise its own computational behaviour, makes it a good candidate for advancing research in self-adaptive systems. Combining a self-reconfigurable computing substrate able to be dynamically changed at run time with an embedded algorithm that provides a direction for change can help fulfil the requirements for autonomous lifetime adaptation of FPGA-based embedded systems. The main proposal of this thesis is hence directed at contributing to the autonomous self-adaptation of the underlying computational hardware of FPGA-based embedded systems by means of Evolvable Hardware. This is tackled by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters. Two main lines of work derive from this distinction: on one side, parametric self-adaptation and, on the other, structural self-adaptation.

The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for online parameter adaptation of signal processing circuits. The application selected as proof of concept is the optimisation of Discrete Wavelet Transform (DWT) filter coefficients for very specific types of images, oriented to image compression. Hence, adaptive and improved compression efficiency, as compared to standard techniques, is the required goal of evolution. The main quest lies in reducing the supercomputing resources reported in previous works for the optimisation process in order to make it suitable for embedded systems. Regarding structural self-adaptation, the thesis goal is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of native reconfiguration capabilities. In this case, the evolution of image processing tasks such as filtering of unknown and changing types of noise, and edge detection, are the selected proofs of concept. In general, the required goal is the run-time evolution of image processing behaviours unknown at design time (within a certain complexity range). Here, the mission of the proposal is the incorporation of DPR in EHW to evolve a systolic array architecture, adaptable through reconfiguration, whose evolvability had not been previously checked.

In order to achieve the two stated goals, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE). In the case of parametric adaptation, the proposed platform is characterised by:
• a CE featuring a DWT hardware processing core adaptable through reconfigurable registers that hold the wavelet filter coefficients
• an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems with scarce computing resources
• a new, simplified mutation operator for the selected EA that, together with a fast evaluation mechanism for candidate wavelet filters derived from the existing literature, ensures the feasibility of the evolutionary search involved in wavelet adaptation
In the case of structural adaptation, the platform proposal takes the form of:
• a CE based on a reconfigurable 2D systolic array template composed of reconfigurable processing nodes
• an evolutionary algorithm as AE that searches for candidate configurations of the array using a set of computational functionalities for the nodes, available in a run-time accessible library
• a hardware RE that exploits the native DPR capabilities of FPGAs and makes an efficient use of the available reconfigurable resources of the device to change the behaviour of the CE at run time
• a library of reconfigurable processing elements characterised by position-independent partial bitstreams, used as the set of available configurations for the processing nodes of the array

The main contributions of this thesis can be summarised in the following list:
• An FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems, composed of a Computing Engine, an evolutionary Adaptation Engine and a Reconfiguration Engine. This platform is further developed and tailored for both parametric and structural self-adaptation.
• Regarding parametric self-adaptation, the main contributions are:
– A CE adaptable through reconfigurable registers that enables parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core.
– An AE based on an Evolutionary Algorithm specifically developed for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems.
– A run-time self-adaptive DWT IP core for embedded systems that allows for online optimisation of transform performance for image compression in specific deployment environments characterised by different types of input signals.
– A software model and hardware implementation of a tool for the automatic, evolutionary construction of custom wavelet transforms.
• Lastly, regarding structural self-adaptation, the main contributions are:
– A CE adaptable through native FPGA fabric reconfiguration, featuring a two-dimensional systolic array template of reconfigurable processing nodes. Different processing behaviours can be automatically mapped onto the array by using a library of simple reconfigurable processing elements.
– Definition of a library of such processing elements suited to the autonomous run-time synthesis of different image processing tasks.
– Efficient incorporation of DPR in EHW systems, overcoming the main drawbacks of the previous approach of virtual reconfigurable circuits (VRCs). Implementation details of both approaches are also originally compared in this work.
– A fault-tolerant, self-healing platform that enables online functional recovery in hazardous environments. The platform has been characterised from a fault-tolerance perspective: fault models at FPGA CLB level and at processing-element level are proposed, and, using the RE, a systematic fault analysis is performed for one fault in every processing element and for two accumulated faults.
– A dynamic filtering-quality platform that permits online adaptation to different types of noise and different computing behaviours, considering the available computing resources. On one side, non-destructive filters are evolved, enabling scalable cascaded filtering schemes; on the other, size-scalable filters are also evolved, considering dynamically changing computational filtering requirements.

This dissertation is organised in four parts and nine chapters. The first part contains chapter 1, the introduction to and motivation of this PhD work. The reference framework in which this dissertation is set is then analysed in the second part: chapter 2 features an introduction to the notions of self-adaptation and autonomic computing as a research field more general than the very specific one of this work; chapter 3 introduces evolutionary computation as the technique to drive adaptation; chapter 4 analyses platforms for reconfigurable computing as the technology to hold self-adaptive hardware; and finally, chapter 5 defines, classifies and surveys the field of Evolvable Hardware. The third part of the work contains the proposal, development and results obtained: while chapter 6 contains a statement of the thesis goals and the description of the proposal as a whole, chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9 in part 4 concludes the work and describes future research paths.
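As a hedged illustration of the structural line of work just summarised (a genome-level view only; the names, the element library and the fitness function are illustrative, not the thesis's CE/AE/RE implementation), a candidate configuration can be seen as a grid of library indices, one per systolic node, that mutation perturbs and that reconfiguration would then map onto the array:

import random

# Illustrative library of node behaviours; in the thesis these would be
# position-independent partial bitstreams for the processing elements.
LIBRARY = ["identity", "min", "max", "average", "median", "edge"]

def random_config(rows, cols):
    # a candidate: one library index per reconfigurable node of the array
    return [[random.randrange(len(LIBRARY)) for _ in range(cols)]
            for _ in range(rows)]

def mutate(config, rate=0.1):
    # point mutation: re-draw the behaviour of a few randomly chosen nodes
    return [[random.randrange(len(LIBRARY)) if random.random() < rate else g
             for g in row]
            for row in config]

def evolve(fitness, rows=4, cols=4, lam=8, generations=200):
    # (1+lambda) loop: each improved configuration is the one a
    # Reconfiguration Engine would write onto the systolic array
    parent = random_config(rows, cols)
    best = fitness(parent)
    for _ in range(generations):
        children = [mutate(parent) for _ in range(lam)]
        scores = [fitness(c) for c in children]
        top = max(range(lam), key=scores.__getitem__)
        if scores[top] >= best:
            parent, best = children[top], scores[top]
    return parent, best

Here `fitness` would score the image produced by the configured array against a reference (e.g. similarity after filtering a noisy test image), which is the role the evaluation step plays in the evolutionary loop described above.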

Relevance:

30.00%

Publisher:

Abstract:

Although weightlessness is known to affect living cells, the manner by which this occurs is unknown. Some reaction-diffusion processes have been theoretically predicted as being gravity-dependent. Microtubules, a major constituent of the cellular cytoskeleton, self-organize in vitro by way of reaction-diffusion processes. To investigate how self-organization depends on gravity, microtubules were assembled under low gravity conditions produced during space flight. Contrary to the samples formed on an in-flight 1 × g centrifuge, the samples prepared in microgravity showed almost no self-organization and were locally disordered.

Relevance:

30.00%

Publisher:

Abstract:

We present an overview of the statistical mechanics of self-organized criticality. We focus on the successes and failures of the hydrodynamic description of transport, which consists of singular diffusion equations. When this description applies, it can predict the scaling features associated with these systems. We also identify a hard-driving regime where singular-diffusion hydrodynamics fails due to fluctuations, and we give an explicit criterion for when this failure occurs.
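As an illustrative example of what "singular diffusion" means here (a generic form, not an equation quoted from the paper), the coarse-grained density \rho obeys a nonlinear diffusion equation whose coefficient diverges at a critical density \rho_c:

\partial_t \rho = \nabla \cdot \big[ D(\rho) \, \nabla \rho \big], \qquad D(\rho) \sim (\rho_c - \rho)^{-\varphi}, \quad \varphi > 0,

so transport becomes arbitrarily efficient as driving pushes the system toward \rho_c; the scaling features referred to above follow from this divergence.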

Relevance:

30.00%

Publisher:

Abstract:

The history of Castanea sativa (sweet chestnut) cultivation since medieval times has been well described on the basis of the very rich documentation available. Far fewer attempts have been made to give a historical synthesis of the events that led to the cultivation of sweet chestnut in much earlier times. In this article we attempt to reconstruct this part of the European history of chestnut cultivation and its early diffusion by use of different sources of information, such as pollen studies, archaeology, history and literature. Using this multidisciplinary approach, we have tried to identify the roles of the Greek and Roman civilizations in the dissemination of chestnut cultivation on a European scale. In particular, we show that use of the chestnut for food was not the primary driving force behind the introduction of the tree into Europe by the Romans. Apart from the Insubrian Region in the north of the Italian peninsula, no other centre of chestnut cultivation existed in Europe during the Roman period. The Romans may have introduced the idea of systematically cultivating and using chestnut. In certain cases they introduced the species itself; however no evidence of systematic planting of chestnut exists. The greatest interest in the management of chestnut for fruit production most probably developed after the Roman period and can be associated with the socio-economic structures of medieval times. It was then that self-sufficient cultures based on the cultivation of chestnut as a source of subsistence were formed.

Relevance:

30.00%

Publisher:

Abstract:

The short(s)-EMBU (Swedish acronym for Egna Minnen Beträffande Uppfostran [My memories of upbringing]) consists of 23 items, is based on the early 81-item EMBU, and was developed out of the necessity of having a brief measure of perceived parental rearing practices when the clinical and/or research context does not adequately permit application of time-consuming test batteries. The s-EMBU comprises three subscales: Rejection, Emotional Warmth, and (Over)Protection. The factorial and/or construct validity and reliability of the s-EMBU were examined in samples comprising a total of 1,950 students from Australia, Spain, and Venezuela. The data are presented for the three national groups separately. Findings confirmed the cross-national validity of the factorial structure underlying the s-EMBU. Rejection by fathers and mothers was consistently associated with high trait neuroticism and low self-esteem in recipients of both sexes in each nation, as was high parental emotional warmth with high femininity (humility). The findings on factorial validity are in keeping with previous ones obtained in East Germany, Greece, Guatemala, Hungary, Italy, and Sweden. The s-EMBU is again recommended for use in several different countries as a reliable, functional equivalent to the original 81-item EMBU.