991 results for active constrained layer


Relevance: 30.00%

Abstract:

Layer 2/3 (L2/3) pyramidal neurons are the most abundant cells of the neocortex. Despite their key position in the cortical microcircuit, synaptic integration in dendrites of L2/3 neurons is far less understood than in L5 pyramidal cell dendrites, mainly because of the difficulties in obtaining electrical recordings from thin dendrites. Here we directly measured passive and active properties of the apical dendrites of L2/3 neurons in rat brain slices using dual dendritic-somatic patch-clamp recordings and calcium imaging. Unlike L5 cells, L2/3 dendrites displayed little sag in response to long current pulses, which suggests a low density of I(h) in the dendrites and soma. This was also consistent with a slight increase in input resistance with distance from the soma. Brief current injections into the apical dendrite evoked relatively short (half-width 2-4 ms) dendritic spikes that were isolated from the soma for near-threshold currents at sites beyond the middle of the apical dendrite. Regenerative dendritic potentials and large concomitant calcium transients were also elicited by trains of somatic action potentials (APs) above a critical frequency (130 Hz), which was slightly higher than in L5 neurons. Initiation of dendritic spikes was facilitated by backpropagating somatic APs and could cause an additional AP at the soma. As in L5 neurons, we found that distal dendritic calcium transients are sensitive to a long-lasting block by GABAergic inhibition. We conclude that L2/3 pyramidal neurons can generate dendritic spikes, sharing with L5 pyramidal neurons fundamental properties of dendritic excitability and control by inhibition.

Relevance: 30.00%

Abstract:

OBJECTIVES: To assess the frequency of and risk factors for discordant responses at 6 months on highly active antiretroviral therapy (HAART) in previously treatment-naive HIV patients from resource-limited countries. METHODS: The Antiretroviral Therapy in Low-Income Countries Collaboration is a network of clinics providing care and treatment to HIV-infected patients in Africa, Latin America, and Asia. Patients who initiated therapy between 1996 and 2004, were aged 16 years or older, and had a baseline CD4 cell count were included in this analysis. Responses were defined based on plasma viral load (PVL) and CD4 cell count at 6 months as complete virologic and immunologic (VR(+)IR(+)), virologic only (VR(+)IR(-)), immunologic only (VR(-)IR(+)), and nonresponse (VR(-)IR(-)). Multinomial logistic regression was used to assess the association between therapy responses and clinical and demographic variables. RESULTS: Of the 3111 patients eligible for analysis, 1914 had available information at 6 months of therapy: 1074 (56.1%) were VR(+)IR(+), 364 (19.0%) were VR(+)IR(-), 283 (14.8%) were VR(-)IR(+), and 193 (10.1%) were VR(-)IR(-). Compared with complete responders, virologic-only responders were older, had a higher baseline CD4 cell count, had a lower baseline PVL, and were more likely to have received a nonstandard HAART regimen; immunologic-only responders were younger, had a lower baseline CD4 cell count, had a higher baseline PVL, and were more likely to have received a protease inhibitor-based regimen. CONCLUSIONS: The frequency of and risk factors for discordant responses were comparable to those observed in developed countries. Longer follow-up is needed to assess the long-term impact of discordant responses on mortality in these resource-limited settings.
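
As a rough illustration of the analysis described above, the sketch below builds the four response categories from hypothetical virologic/immunologic cut-offs (the abstract does not state the exact definitions used) and fits a multinomial logistic regression on synthetic covariates; all thresholds, variable names and values are illustrative assumptions, not the study's data.

```python
# Minimal sketch: four discordant-response categories + multinomial logistic regression.
# Thresholds and covariates are assumed for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical baseline covariates and 6-month measurements.
age = rng.normal(35, 10, n)
cd4_base = rng.gamma(4, 50, n)                  # cells/uL
pvl_base = rng.normal(4.5, 0.8, n)              # log10 copies/mL
pvl_6mo = pvl_base - rng.normal(2.0, 1.0, n)
cd4_6mo = cd4_base + rng.normal(75, 60, n)

# Assumed response definitions (for illustration only).
vr = pvl_6mo < np.log10(400)                    # virologic response
ir = (cd4_6mo - cd4_base) >= 50                 # immunologic response
category = vr.astype(int) * 2 + ir.astype(int)  # 0=VR-IR-, 1=VR-IR+, 2=VR+IR-, 3=VR+IR+

X = np.column_stack([age, cd4_base, pvl_base])
model = LogisticRegression(multi_class="multinomial", max_iter=1000)
model.fit(X, category)
print(model.coef_)  # one row of coefficients per response category
```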

Relevance: 30.00%

Abstract:

AIMS: To compare the gender distribution of HIV-infected adults receiving highly active antiretroviral treatment (HAART) in resource-constrained settings with estimates of the gender distribution of HIV infection, and to describe the clinical characteristics of women and men receiving HAART. METHODS: The Antiretroviral Therapy in Lower-Income Countries (ART-LINC) Collaboration is a network of clinics providing HAART in Africa, Latin America, and Asia. We compared UNAIDS data on the gender distribution of HIV infection with the proportions of women and men receiving HAART in the ART-LINC Collaboration. RESULTS: Twenty-nine centers in 13 countries participated. Among 33,164 individuals, 19,989 (60.3%) were women. Proportions of women receiving HAART in ART-LINC centers were similar to, or higher than, UNAIDS estimates of the proportions of HIV-infected women in all but two centers. There were fewer women receiving HAART than expected from UNAIDS data in one center in Uganda and one center in India. Taking into account heterogeneity across cohorts, women were younger than men, less likely to have advanced HIV infection, and more likely to be anemic at HAART initiation. CONCLUSIONS: Women in resource-constrained settings are not necessarily disadvantaged in their access to HAART. More attention needs to be paid to ensuring that HIV-infected men are seeking care and starting HAART.

Relevance: 30.00%

Abstract:

Measurements performed on 27 June 2011 over the Southern Iberian Peninsula at the Granada EARLINET station, using active and passive remote sensing together with airborne and surface in-situ data, were analysed in order to study the entrainment processes between aerosols in the free troposphere and those in the planetary boundary layer (PBL). To this aim, the temporal evolution of the lidar depolarisation, backscatter-related Ångström exponent and potential temperature profiles was used in combination with the PBL contribution to the aerosol optical depth (AOD). Our results show that the mineral dust entrainment into the PBL was caused by convective processes which 'trapped' the lofted mineral dust layer, distributing the mineral dust particles within the PBL. The temporal evolution of ground-based in-situ data evidenced the impact of this process at surface level. Finally, the amount of mineral dust in the atmospheric column available to be dispersed into the PBL was estimated by means of POLIPHON (Polarization Lidar Photometer Networking). The dust mass concentration derived from POLIPHON was compared with the coarse-mode mass concentration retrieved from airborne in-situ measurements. The comparison shows differences below 50 µg/m³ (30% relative difference), indicating relatively good agreement between the two techniques.
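
For readers unfamiliar with POLIPHON-type retrievals, the following minimal sketch shows the standard depolarization-based separation of dust from non-dust backscatter and a crude conversion to mass concentration. The depolarization ratios, lidar ratio, dust density and volume-to-extinction factor are assumed textbook-style values, not those used in this study.

```python
# Hedged sketch of a depolarization-based dust/non-dust separation
# (the kind of two-component partition POLIPHON-type methods rely on).
def dust_backscatter(beta_p, delta_p, delta_dust=0.31, delta_nondust=0.05):
    """Dust component of particle backscatter beta_p, from the measured
    particle depolarization ratio delta_p (all ratios assumed)."""
    f = (delta_p - delta_nondust) * (1 + delta_dust) / (
        (delta_dust - delta_nondust) * (1 + delta_p))
    return beta_p * min(max(f, 0.0), 1.0)

beta_d = dust_backscatter(2e-6, 0.20)   # m^-1 sr^-1, illustrative inputs
alpha_d = beta_d * 45.0                 # extinction via an assumed dust lidar ratio (sr)
# density (g/m^3) x assumed volume-to-extinction factor (m) x extinction (1/m):
mass_g_m3 = 2.6e6 * 0.64e-6 * alpha_d
print(f"dust mass concentration ~ {mass_g_m3 * 1e6:.0f} ug/m^3")
```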

Relevance: 30.00%

Abstract:

Analysis of the molecular composition of alkanes in bottom sediments of the southern part of Dvina Bay (White Sea) in October 2001 revealed the following main peculiarities of hydrocarbon behavior in the estuary: dominance of high-molecular-weight C23-C45 compounds and irregular distribution of hydrocarbons in the bottom sediments, resulting from the high sedimentation rate and active hydrodynamics in the studied area.

Relevance: 30.00%

Abstract:

Ice shelves strongly impact coastal Antarctic sea ice and the associated ecosystem through the formation of a sub-sea-ice platelet layer. Although progress has been made in determining and understanding its spatio-temporal variability based on point measurements, investigating this phenomenon on a larger scale remains a challenge due to logistical constraints and a lack of suitable methodology. In this study, we applied a laterally-constrained Marquardt-Levenberg inversion to a unique multi-frequency electromagnetic (EM) induction sounding dataset obtained on the landfast sea ice of Atka Bay, eastern Weddell Sea, in 2012. In addition to consistent fast-ice thicknesses and conductivities along >100 km of transects, we present the first comprehensive, high-resolution platelet-layer thickness and conductivity dataset recorded on Antarctic sea ice. The reliability of the algorithm was confirmed using synthetic data, and the inverted platelet-layer thicknesses agreed with drill-hole measurements within the data uncertainty. Ice-volume fractions were calculated from platelet-layer conductivities, revealing that an older and thicker platelet layer is denser and more compacted than a loosely attached, young platelet layer. The overall platelet-layer volume below Atka Bay fast ice suggests that the contribution of ocean/ice-shelf interaction to sea-ice volume in this region is even higher than previously thought. This study also implies that multi-frequency EM induction sounding is an effective approach for determining platelet-layer volume on a larger scale than previously feasible. When applied to airborne multi-frequency EM, this method could provide a step towards an Antarctic-wide quantification of ocean/ice-shelf interaction.
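
A laterally-constrained inversion of this kind can be sketched as a damped least-squares problem in which roughness between neighbouring soundings is appended to the data misfit. The toy below uses scipy's least-squares machinery with a placeholder forward model; the real multi-frequency EM induction operator and the paper's parameterisation are not reproduced here.

```python
# Conceptual sketch of a laterally-constrained least-squares inversion:
# each sounding is inverted for (thickness, conductivity) while a
# roughness penalty ties neighbouring soundings together.
import numpy as np
from scipy.optimize import least_squares

def forward(m):
    """Placeholder forward model mapping (thickness, conductivity) per
    sounding to a synthetic response. NOT a real EM induction operator."""
    thick, cond = m.reshape(2, -1)
    return thick * cond  # stand-in response

n_soundings = 50
data = np.linspace(1.0, 2.0, n_soundings)   # synthetic observations
lam = 5.0                                   # lateral smoothness weight

def residuals(m):
    thick, cond = m.reshape(2, -1)
    misfit = forward(m) - data
    lateral = np.concatenate([np.diff(thick), np.diff(cond)])  # constraint
    return np.concatenate([misfit, lam * lateral])

m0 = np.ones(2 * n_soundings)
# method='lm' would be the Levenberg-Marquardt engine; 'trf' allows bounds.
sol = least_squares(residuals, m0, method="trf")
print(sol.x.reshape(2, -1)[0][:5])  # first few inverted thicknesses
```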

Relevance: 30.00%

Abstract:

The major aim of this study was to examine the influence of an embedded viscoelastic-plastic layer, at different viscosity values, on accretionary wedges at subduction zones. To quantify the effects of the layer viscosity, we analysed the wedge geometry, accretion mode, thrust systems and mass-transport pattern. To this end, we developed a numerical 2D 'sandbox' model utilising the Discrete Element Method. Starting with a simple pure Mohr-Coulomb sequence, we added an embedded viscoelastic-plastic layer within the brittle, undeformed 'sediment' package. This layer followed a Burgers rheology, which simulates the creep behaviour of natural rocks such as evaporites. The layer was thrust and folded during the subduction process. Testing different bulk viscosity values, from 1 × 10^13 to 1 × 10^14 Pa s, revealed a certain range in which an active detachment evolved within the viscoelastic-plastic layer, decoupling the overlying and underlying brittle strata. This mid-level detachment caused the evolution of a frontally accreted wedge above it and a long underthrusted and subsequently basally accreted sequence beneath it. Both sequences were characterised by specific mass-transport patterns depending on the viscosity value used. With decreasing bulk viscosities, thrust systems above this weak mid-level detachment became increasingly symmetrical and particle uplift was reduced, as would be expected for a salt-controlled forearc in nature. Simultaneously, antiformal stacking was favoured over hinterland dipping in the lower brittle layer and overturning of the uplifted material increased. Hence, we validated that the viscosity of an embedded detachment strongly influences the whole wedge mechanics, in both the lower-slope and the upper-slope duplex, as shown by, e.g., the mass-transport pattern.
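
The creep behaviour attributed above to the Burgers rheology can be summarised by its standard creep compliance (instantaneous elastic, viscous and delayed-elastic terms). The short sketch below evaluates that textbook expression with arbitrary illustrative parameters, with the steady viscous term placed in the 10^13-10^14 Pa s range tested in the study; it is not the paper's implementation.

```python
# Creep of a Burgers body (Maxwell + Kelvin elements in series) under
# constant stress; all parameter values are illustrative assumptions.
import numpy as np

def burgers_strain(t, sigma, E1, eta1, E2, eta2):
    """Strain = instantaneous elastic + viscous (Maxwell)
    + delayed elastic (Kelvin) contributions."""
    return sigma * (1.0 / E1 + t / eta1 +
                    (1.0 / E2) * (1.0 - np.exp(-E2 * t / eta2)))

t = np.linspace(0, 1e12, 5)   # seconds, geological-scale time steps
eps = burgers_strain(t, sigma=1e6, E1=1e10, eta1=1e14, E2=1e10, eta2=1e13)
print(eps)
```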

Relevance: 30.00%

Abstract:

The comprehensive isotopic composition of atmospheric nitrate (i.e., the simultaneous measurement of all its stable isotope ratios: 15N/14N, 17O/16O and 18O/16O) has been determined for aerosol samples collected in the marine boundary layer (MBL) over the Atlantic Ocean from 65°S (Weddell Sea) to 79°N (Svalbard), along a ship-borne latitudinal transect. In nonpolar areas, the δ15N of nitrate mostly deriving from anthropogenically emitted NOx is found to be significantly different (from 0 to 6 per mil) from that of nitrate sampled in locations influenced by natural NOx sources (−4 ± 2 per mil). The effects on δ15N(NO3-) of different NOx sources and of nitrate removal processes associated with its atmospheric transport are discussed. Measurements of the oxygen isotope anomaly (Δ17O = δ17O − 0.52 × δ18O) of nitrate suggest that nocturnal processes involving the nitrate radical play a major role in terms of NOx sinks. Different Δ17O values between aerosol size fractions indicate different proportions between nitrate formation pathways as a function of the size and composition of the particles. Extremely low δ15N values (down to −40 per mil) are found in air masses exposed to snow-covered areas, showing that snowpack emissions of NOx from upwind regions can have a significant impact on the local surface budget of reactive nitrogen, in conjunction with interactions with active halogen chemistry. The implications of the results are discussed in light of the potential use of the stable isotopic composition of nitrate to infer atmospherically relevant information from nitrate preserved in ice cores.
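
The isotope anomaly defined above is simple enough to state as code; the numbers in the example are illustrative, not measurements from the transect.

```python
# Oxygen isotope anomaly as defined in the abstract; delta values in per mil.
def cap_delta17(delta17, delta18):
    """Mass-independent anomaly: Delta17O = delta17O - 0.52 * delta18O."""
    return delta17 - 0.52 * delta18

# Purely mass-dependent fractionation gives ~0; atmospheric nitrate
# carries a large positive anomaly (illustrative numbers).
print(cap_delta17(38.0, 50.0))   # -> 12.0 per mil
```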

Relevance: 30.00%

Abstract:

At convergent margins, fluids rise through the forearc in response to consolidation of the upper plate and dewatering of the subducting plate, and produce various cold-seep-related features on the seafloor (mud diapirs, mud mounds). At the Central American forearc, authigenic carbonates precipitated from rising fluids within such structures during active venting, while typical mixed-mud sediments were ejected onto the surrounding seafloor, where they became intercalated with normal pelagic background sediments, indicating that mud mounds evolved unsteadily through alternating active and inactive phases. Intercalated regional ash layers from Plinian eruptions at the Central American volcanic arc provide time marks that constrain the ages of mud ejection activity. U/Th dating of drill core samples of the authigenic carbonate caps of mud mounds yields ages agreeing well with those constrained by ash layers and showing that carbonate caps grow inward rather than outward during active venting. Both dating approaches show that, offshore Nicaragua and Costa Rica, (1) active and inactive phases can occur simultaneously at neighboring mounds, (2) mounds along the forearc have individual histories of activity, but there are distinct time intervals when nearly all mounds have been active or inactive, (3) lifetimes of mounds reach several hundred thousand years, and (4) highly active periods last 10-50 k.y., with intervening periods of >10 k.y. of relative quiescence.

Relevance: 30.00%

Abstract:

Over the last decade, pockmarks have proven to be important seabed features that provide information about fluid flow on continental margins. Their formation and dynamics are still poorly constrained due to the lack of proper three-dimensional imaging of their internal structure. Numerous fluid escape features provide evidence for an active fluid-flow system on the Norwegian margin, specifically in the Nyegga region. In June-July 2006, a high-resolution seismic experiment using Ocean Bottom Seismometers (OBS) was carried out to investigate the detailed 3D structure of a pockmark named G11 in the region. An array of 14 OBS was deployed across the pockmark with 1 m location accuracy. Shots fired from surface-towed mini GI guns were also recorded on a near-surface hydrophone streamer. Several reflectors of high amplitude and reverse polarity are observed on the profiles, indicating the presence of gas. Gas hydrates were recovered with gravity cores from less than a meter below the seafloor during the cruise. Indications of gas at shallow depths in the hydrate stability field show that methane is able to escape through the water-saturated sediments in the chimney without being entirely converted into gas hydrate. An initial 2D ray-traced forward model of some of the P-wave data along a line running NE-SW across the G11 pockmark shows a gradual increase in velocity between the seafloor and a gas-charged zone lying at ~300 m depth below the seabed. The traveltime fit is improved if the pockmark is underlain by velocities higher than in the surrounding layer, corresponding to a pipe that ascends from the gas zone to where it terminates in the pockmark, as seen in the reflection profiles. This could be due to the presence of hydrates or carbonates within the sediments.

Relevance: 30.00%

Abstract:

The number of cysts of marine planktic infusoria was determined in oligotrophic waters of the central Indian Ocean and productive waters of the Southeast Pacific. Cyst biomass at the stations studied varied from 1.2 to 23.4 µg/l, which was 9.9-115.8% of free infusoria biomass in the 0-15 m layer in the Indian Ocean and 0.3-19.3% in the Southeast Pacific.

Relevance: 30.00%

Abstract:

Ice shelves strongly interact with coastal Antarctic sea ice and the associated ecosystem by creating conditions favourable to the formation of a sub-ice platelet layer. Close investigation of this phenomenon and its seasonal evolution remains a challenge due to logistical constraints and a lack of suitable methodology. In this study, we characterize the seasonal cycle of Antarctic fast ice adjacent to the Ekström Ice Shelf in the eastern Weddell Sea. We used a thermistor chain with the additional ability to record the temperature response induced by cyclic heating of resistors embedded in the chain. Vertical sea-ice temperature and heating profiles obtained daily between November 2012 and February 2014 were analyzed to determine sea-ice and snow evolution, and to calculate the basal energy budget. The residual heat flux translated into an ice-volume fraction in the platelet layer of 0.18 ± 0.09, which we reproduced with an independent model simulation and which agrees with earlier results. Manual drillings revealed an average annual platelet-layer thickness increase of at least 4 m, and an annual maximum thickness of 10 m beneath second-year sea ice. The oceanic contribution dominated the total sea-ice production during the study, effectively accounting for up to 70% of second-year sea-ice growth. In summer, an oceanic heat flux of 21 W/m² led to a partial thinning of the platelet layer. Our results further show that the active heating method, in contrast to the acoustic sounding approach, is well suited to deriving the fast-ice mass balance in regions influenced by ocean/ice-shelf interaction, as it allows sub-diurnal monitoring of the platelet-layer thickness.
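
A minimal sketch of the kind of interface energy budget described above, assuming a simple balance in which heat conducted up through the ice is offset by the latent heat of (platelet-fraction) growth, with the oceanic heat flux as the residual. All values, and the exact form of the balance, are illustrative assumptions rather than this study's implementation.

```python
# Toy basal energy budget from a thermistor-chain-style temperature profile.
k_ice = 2.1     # thermal conductivity of sea ice, W m^-1 K^-1 (typical value)
rho_i = 920.0   # ice density, kg m^-3
L = 3.34e5      # latent heat of fusion, J kg^-1

def conductive_flux(T_upper, T_lower, dz):
    """Heat conducted upward through the ice, F_c = k * dT/dz."""
    return k_ice * (T_lower - T_upper) / dz

def oceanic_flux(F_c, growth_rate, platelet_fraction=0.18):
    """Assumed balance F_c = rho_i * L * (dh/dt) * phi + F_ocean, with phi
    the ice-volume fraction of platelet growth (0.18 as found above)."""
    return F_c - rho_i * L * growth_rate * platelet_fraction

F_c = conductive_flux(T_upper=-10.0, T_lower=-1.9, dz=1.5)  # ~11 W/m^2
dh_dt = 0.5e-2 / 86400.0   # 0.5 cm/day of platelet-layer growth, in m/s
print(oceanic_flux(F_c, dh_dt))  # residual oceanic heat flux, W/m^2
```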

Relevance: 30.00%

Abstract:

A theoretical model for the steady-state response of anodic contactors that emit a plasma current Ii and collect electrons from a collisionless, unmagnetized plasma is presented. The use of a (kinetic) monoenergetic population for the attracted species, well known in passive probe theory, gives the theory both accuracy and tractability. The monoenergetic population is shown to behave like an isentropic fluid with radial plus centripetal motion, allowing direct comparisons with ad hoc fluid models. Also, a modification of the original monoenergetic equations permits analysis of contactors operating in orbit-limited conditions. Furthermore, the theory predicts that only for plasma emissions above a certain threshold current is a presheath/double-layer/core structure for the potential formed (the core mode), while for emissions below that threshold a plasma contactor behaves exactly as a positive-ion emitter with a presheath/sheath structure (the no-core mode). Ion emitters are studied as a particular case. Emphasis is placed on obtaining dimensionless charts and approximate asymptotic laws for the current-voltage characteristic.
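
For orientation, the sketch below evaluates the classic orbit-motion-limited (OML) collection law from passive probe theory, the orbit-limited regime the abstract refers to; this is the textbook expression, not the paper's contactor model, and the thermal current and electron temperature are assumed values.

```python
# Textbook orbit-motion-limited (OML) current collection for an
# attracting bias; illustrative numbers only.
e, k = 1.602e-19, 1.381e-23   # elementary charge (C), Boltzmann constant (J/K)

def oml_current(V, I_th, T_e, alpha=1.0):
    """Attracted-species current for bias V > 0.
    alpha = 1 for a sphere, 1/2 for a cylinder."""
    return I_th * (1.0 + e * V / (k * T_e)) ** alpha

I_th = 1e-3                    # random thermal current, A (assumed)
for V in (0.0, 5.0, 20.0):     # bias in volts
    print(V, oml_current(V, I_th, T_e=1.2e4))   # T_e ~ 1 eV
```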

Relevance: 30.00%

Abstract:

Composite laminates on the nanoscale have shown superior hardness and toughness, but little is known about their high-temperature behavior. The mechanical properties (elastic modulus and hardness) of Al/SiC nanolaminates, a model metal–ceramic nanolaminate fabricated by physical vapor deposition, were measured as a function of temperature by means of nanoindentation. The influence of the Al and SiC volume fractions and layer thicknesses was determined between room temperature and 150 °C, and the deformation modes were analyzed by transmission electron microscopy, using a focused ion beam to prepare cross-sections through selected indents. It was found that ambient-temperature deformation was controlled by the plastic flow of the Al layers, constrained by the SiC, and by the elastic bending of the SiC layers. The reduction in hardness with temperature showed evidence of the development of interface-mediated deformation mechanisms, which led to a clear influence of layer thickness on the hardness.
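
The influence of the Al and SiC volume fractions on laminate stiffness can be bracketed with the usual Voigt (isostrain) and Reuss (isostress) averages from composite theory; the sketch below uses nominal, literature-style moduli for Al and SiC, not the values measured in the paper.

```python
# Voigt/Reuss bounds on the elastic modulus of an Al/SiC laminate;
# moduli in GPa are nominal assumptions.
def voigt(f_al, E_al=70.0, E_sic=300.0):
    """Isostrain average (loading parallel to the layers)."""
    return f_al * E_al + (1 - f_al) * E_sic

def reuss(f_al, E_al=70.0, E_sic=300.0):
    """Isostress average (loading normal to the layers, as in indentation)."""
    return 1.0 / (f_al / E_al + (1 - f_al) / E_sic)

for f in (0.25, 0.5, 0.75):   # Al volume fraction
    print(f, round(voigt(f), 1), round(reuss(f), 1))
```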

Relevance: 30.00%

Abstract:

Embedded systems have traditionally been conceived to be specific-purpose computers with one, fixed computational task for their whole lifetime. Stringent requirements in terms of cost, size and weight forced designers to highly optimise their operation for very specific conditions.
However, demands for versatility, more intelligent behaviour and, in summary, an increased computing capability began to clash with these limitations, intensified by the uncertainty associated with the more dynamic operating environments where they were progressively being deployed. This brought as a result an increasing need for systems to respond by themselves to events unexpected at design time, such as: changes in input data characteristics and in the system environment in general; changes in the computing platform itself, e.g., due to faults and fabrication defects; and changes in functional specifications caused by dynamically changing system objectives. As a consequence, system complexity is increasing but, in turn, autonomous lifetime adaptation without human intervention is progressively being enabled, allowing systems to take their own decisions at run time. Such systems are known, in general, as self-adaptive, and are capable, among other things, of self-configuration, self-optimisation and self-repair. Traditionally, the soft part of a system has mostly been the only place to provide systems with some degree of adaptation capability. However, the performance-to-power ratios of software-driven devices like microprocessors are not adequate for embedded systems in many situations. In this scenario, the resulting rise in application complexity is being partly addressed by raising device complexity in the form of multi- and many-core devices; but sadly, this keeps increasing power consumption. Besides, design methodologies have not improved accordingly to completely leverage the computational power available from all these cores. Altogether, these factors mean that the computing demands new applications pose are not being wholly satisfied. The traditional solution to improve performance-to-power ratios has been the switch to hardware-driven specifications, mainly using ASICs. However, their costs are highly prohibitive except for some mass-production cases and, besides, the static nature of their structure complicates the solution to the adaptation needs. Advancements in fabrication technologies have meant that the once slow, small FPGA, used as glue logic in bigger systems, has grown into a very powerful, reconfigurable computing device with a vast amount of computational logic resources and embedded, hardened signal-processing and general-purpose cores. Its reconfiguration capabilities have enabled software-like flexibility to be combined with hardware-like computing performance, which has the potential to cause a paradigm shift in computer architecture, since hardware cannot be considered static anymore. This is so since, as is the case with SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible. This means that subsets of the FPGA computational resources can now be changed (reconfigured) at run time while the rest remain active. Besides, this reconfiguration process can be triggered internally by the device itself. This technological boost in reconfigurable hardware devices is covered under the field known as Reconfigurable Computing. One of the most exotic fields of application that Reconfigurable Computing has enabled is that known as Evolvable Hardware (EHW), in which this dissertation is framed. The main idea behind the concept is turning hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural, biological species, that guides the direction of change.
It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers. In analogy to the biological process of evolution, in EHW the subject of evolution is a population of circuits that tries to adapt to its surrounding environment by becoming progressively better fitted to it, generation after generation. Individuals are circuit configurations in the form of bitstreams that encode reconfigurable circuit descriptions. By selecting those that behave better, i.e., with a higher fitness value after being evaluated, and using them as parents of the following generation, the EA creates a new offspring population by applying so-called genetic operators such as mutation and recombination. As generations succeed one another, the population as a whole is expected to approach the optimum solution to the problem of finding an adequate circuit configuration that fulfils the system objectives; a minimal sketch of this loop is given below. The state of reconfiguration technology after the Xilinx XC6200 FPGA family was discontinued and replaced by the Virtex families in the late 90s was a major obstacle to advancements in EHW: closed (not publicly known) bitstream formats; dependence on manufacturer tools with very limited support for DPR; slow reconfiguration speed; and the fact that random bitstream modifications could be hazardous for device integrity are some of these reasons. However, a proposal in the early 2000s, the Virtual Reconfigurable Circuit (VRC), allowed research in this field to continue while DPR technology kept maturing. In essence, a VRC in an FPGA is a virtual layer acting as an application-specific reconfigurable circuit on top of the FPGA fabric that reduces the complexity of the reconfiguration process and increases its speed (compared to native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions that define ad-hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each one containing all the required functions, which are selectable through functionality multiplexers as in microprocessor ALUs. A large register acts as configuration memory, so VRC reconfiguration is very fast, given that it only involves writing to this register, which drives the selection signals of the set of multiplexers. However, large overheads are introduced by this virtual layer: an area overhead due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, which also reduces the maximum frequency of operation. The very nature of Evolvable Hardware, able to optimise its own computational behaviour, makes it a good candidate for advancing research in self-adaptive systems. Combining a self-reconfigurable computing substrate, able to be dynamically changed at run time, with an embedded algorithm that provides a direction for change can help fulfil the requirements for autonomous lifetime adaptation of FPGA-based embedded systems. The main proposal of this thesis is hence directed at contributing to the autonomous self-adaptation of the underlying computational hardware of FPGA-based embedded systems by means of Evolvable Hardware. This is tackled by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters.
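
As a concrete anchor for the evolutionary loop described above (evaluation, selection, recombination, mutation), the sketch below evolves a plain bit-vector towards a target; the bit-vector "circuit" and its fitness function are stand-ins for real reconfigurable-circuit bitstreams and their hardware evaluation.

```python
# Minimal evolutionary-algorithm loop; the genome and fitness are
# illustrative stand-ins for circuit bitstreams and their evaluation.
import random

GENOME_LEN, POP, GENS = 64, 20, 100
target = [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(ind):
    """Evaluation: how well the candidate 'circuit' behaves."""
    return sum(a == b for a, b in zip(ind, target))

def mutate(ind, p=0.02):
    return [b ^ (random.random() < p) for b in ind]   # bit-flip mutation

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)             # one-point recombination
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                          # truncation selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(POP - len(parents))]
print(fitness(max(pop, key=fitness)), "of", GENOME_LEN)
```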
Two main lines of work derive from this distinction between hard structure and soft parameters: on one side, parametric self-adaptation and, on the other, structural self-adaptation. The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for online parameter adaptation of signal processing circuits. The application selected as proof of concept is the optimisation of Discrete Wavelet Transform (DWT) filter coefficients for very specific types of images, oriented to image compression. Hence, adaptive and improved compression efficiency, as compared to standard techniques, is the required goal of evolution. The main quest lies in reducing the supercomputing resources reported in previous works for the optimisation process, in order to make it suitable for embedded systems. Regarding structural self-adaptation, the thesis goal is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of native reconfiguration capabilities. In this case, the evolution of image processing tasks such as filtering of unknown and changing types of noise, and edge detection, are the selected proofs of concept. In general, evolving, at run time, image processing behaviours unknown at design time (within a certain complexity range) is the required goal. Here, the mission of the proposal is the incorporation of DPR in EHW to evolve a systolic array architecture, adaptable through reconfiguration, whose evolvability had not been previously checked. In order to achieve the two stated goals, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE). In the case of parametric adaptation, the proposed platform is characterised by:
• a CE featuring a DWT hardware processing core adaptable through reconfigurable registers that hold the wavelet filter coefficients
• an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems featured by scarce computing resources
• a new, simplified mutation operator for the selected EA that, together with a fast evaluation mechanism for candidate wavelet filters derived from the existing literature, assures the feasibility of the evolutionary search involved in wavelet adaptation
In the case of structural adaptation, the platform proposal takes the form of:
• a CE based on a reconfigurable 2D systolic array template composed of reconfigurable processing nodes
• an evolutionary algorithm as AE that searches for candidate configurations of the array using a set of computational functionalities for the nodes, available in a run-time-accessible library
• a hardware RE that exploits the native DPR capabilities of FPGAs and makes an efficient use of the available reconfigurable resources of the device to change the behaviour of the CE at run time
• a library of reconfigurable processing elements featured by position-independent partial bitstreams, used as the set of available configurations for the processing nodes of the array
The main contributions of this thesis can be summarised in the following list.
• An FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems, composed of a Computing Engine, an evolutionary Adaptation Engine and a Reconfiguration Engine. This platform is further developed and tailored for both parametric and structural self-adaptation.
• Regarding parametric self-adaptation, the main contributions are:
– A CE adaptable through reconfigurable registers that enables parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core.
– An AE based on an Evolutionary Algorithm specifically developed for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems.
– A run-time self-adaptive DWT IP core for embedded systems that allows for online optimisation of transform performance for image compression in specific deployment environments characterised by different types of input signals.
– A software model and a hardware implementation of a tool for the automatic, evolutionary construction of custom wavelet transforms.
• Lastly, regarding structural self-adaptation, the main contributions are:
– A CE adaptable through native FPGA fabric reconfiguration, featured by a two-dimensional systolic array template of reconfigurable processing nodes. Different processing behaviours can be automatically mapped onto the array by using a library of simple reconfigurable processing elements.
– The definition of a library of such processing elements suited for the autonomous run-time synthesis of different image processing tasks.
– The efficient incorporation of DPR in EHW systems, overcoming the main drawbacks of the previous approach based on virtual reconfigurable circuits (VRCs). Implementation details for both approaches are also originally compared in this work.
– A fault-tolerant, self-healing platform that enables online functional recovery in hazardous environments. The platform has been characterised from a fault-tolerance perspective: fault models at FPGA CLB level and at processing-element level are proposed and, using the RE, a systematic fault analysis is performed for one fault in every processing element and for two accumulated faults.
– A dynamic filtering-quality platform that permits online adaptation to different types of noise and different computing behaviours, considering the available computing resources. On one side, non-destructive filters are evolved, enabling scalable cascaded filtering schemes; on the other, size-scalable filters are also evolved, considering dynamically changing computational filtering requirements.
This dissertation is organized in four parts and nine chapters. The first part contains chapter 1, the introduction to and motivation of this PhD work. Next, the reference framework in which this dissertation is framed is analysed in the second part: chapter 2 features an introduction to the notions of self-adaptation and autonomic computing as a more general research field than the very specific one of this work; chapter 3 introduces evolutionary computation as the technique to drive adaptation; chapter 4 analyses platforms for reconfigurable computing as the technology to host self-adaptive hardware; and finally, chapter 5 defines, classifies and surveys the field of Evolvable Hardware. The third part of the work follows, containing the proposal, development and results obtained: while chapter 6 contains a statement of the thesis goals and the description of the proposal as a whole, chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9 in part 4 concludes the work and describes future research paths.