986 results for Adaptation models


Relevance: 30.00%

Abstract:

Software must be constantly adapted to changing requirements. The time scale, abstraction level and granularity of adaptations may vary from short-term, fine-grained adaptation to long-term, coarse-grained evolution. Fine-grained, dynamic and context-dependent adaptations can be particularly difficult to realize in long-lived, large-scale software systems. We argue that, in order to effectively and efficiently deploy such changes, adaptive applications must be built on an infrastructure that is not just model-driven, but is both model-centric and context-aware. Specifically, this means that high-level, causally-connected models of the application and the software infrastructure itself should be available at run-time, and that changes may need to be scoped to the run-time execution context. We first review the dimensions of software adaptation and evolution, and then we show how model-centric design can address the adaptation needs of a variety of applications that span these dimensions. We demonstrate through concrete examples how model-centric and context-aware designs work at the level of application interface, programming language and runtime. We then propose a research agenda for a model-centric development environment that supports dynamic software adaptation and evolution.
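
The following is a minimal sketch of the idea that a causally connected run-time model can scope behavioural changes to an execution context. The names and the dispatch mechanism are hypothetical illustrations, not the authors' infrastructure.

```python
# A minimal sketch (hypothetical names, not the authors' infrastructure) of context-scoped,
# fine-grained adaptation driven by a run-time model of the application.
from contextlib import contextmanager
from typing import Callable, Dict, List

class RuntimeModel:
    """Holds method variants; editing it at run time immediately changes dispatch."""
    def __init__(self) -> None:
        self._variants: Dict[str, Dict[str, Callable]] = {}
        self._active: List[str] = ["default"]

    def define(self, name: str, context: str, impl: Callable) -> None:
        self._variants.setdefault(name, {})[context] = impl

    @contextmanager
    def activate(self, context: str):
        self._active.append(context)          # scope the adaptation to this dynamic extent
        try:
            yield
        finally:
            self._active.pop()

    def call(self, name: str, *args):
        variants = self._variants[name]
        for ctx in reversed(self._active):    # innermost active context wins
            if ctx in variants:
                return variants[ctx](*args)
        return variants["default"](*args)

model = RuntimeModel()
model.define("render_price", "default", lambda p: f"{p:.2f}")
model.define("render_price", "mobile_ui", lambda p: f"{p:.0f}")   # fine-grained, context-dependent variant

print(model.call("render_price", 12.5))                  # default behaviour
with model.activate("mobile_ui"):                         # adaptation scoped to this execution context
    print(model.call("render_price", 12.5))
```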

Relevance: 30.00%

Abstract:

Imitation learning is a promising approach for generating life-like behaviors of virtual humans and humanoid robots. So far, however, imitation learning has mostly been restricted to single-agent settings, where observed motions are adapted to new environmental conditions but not to the dynamic behavior of interaction partners. In this paper, we introduce a new imitation learning approach that is based on the simultaneous motion capture of two human interaction partners. From the observed interactions, low-dimensional motion models are extracted and a mapping between these motion models is learned. This interaction model allows the real-time generation of agent behaviors that are responsive to the body movements of an interaction partner. The interaction model can be applied both to the animation of virtual characters and to behavior generation for humanoid robots.
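
A toy sketch of the described pipeline follows: extract low-dimensional motion models from simultaneously captured partners, learn a mapping between them, and use it to generate responsive poses. The data, dimensionality reduction (PCA) and linear mapping are placeholder assumptions, not the paper's method.

```python
# A toy sketch (illustrative only): low-dimensional motion models for partners A and B,
# a learned mapping between them, and real-time generation of responsive agent poses.
import numpy as np

rng = np.random.default_rng(0)
T, D, K = 500, 30, 4                     # frames, joint-angle dimensions, latent dimensions
A = rng.normal(size=(T, D))              # placeholder mocap of partner A (would be real data)
B = np.roll(A, 5, axis=0) @ rng.normal(size=(D, D)) * 0.1 + rng.normal(size=(T, D)) * 0.01

def fit_pca(X, k):
    mu = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                    # mean and principal axes

muA, PA = fit_pca(A, K)                  # low-dimensional motion model of partner A
muB, PB = fit_pca(B, K)                  # low-dimensional motion model of partner B
ZA, ZB = (A - muA) @ PA.T, (B - muB) @ PB.T

W, *_ = np.linalg.lstsq(ZA, ZB, rcond=None)   # interaction model: map A's latent pose to B's

def respond(observed_pose_A):
    """Generate the agent's pose in real time from the partner's observed pose."""
    z = (observed_pose_A - muA) @ PA.T
    return muB + (z @ W) @ PB

print(respond(A[100]).shape)             # a full joint-angle pose for the virtual human / robot
```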

Relevance: 30.00%

Abstract:

The ultimate goals of periodontal therapy remain the complete regeneration of those periodontal tissues lost to the destructive inflammatory-immune response, or to trauma, with tissues that possess the same structure and function, and the re-establishment of a sustainable, health-promoting biofilm from one characterized by dysbiosis. This volume of Periodontology 2000 discusses the multiple facets of a transition from the therapeutic empiricism of the late 1960s toward regenerative therapies founded on a clearer understanding of the biophysiology of normal structure and function. This introductory article provides an overview of the requirements of appropriate in vitro laboratory models (e.g. cell culture), of preclinical (i.e. animal) models and of human studies for periodontal wound and bone repair. Laboratory studies may provide valuable fundamental insights into basic mechanisms involved in wound repair and regeneration but also suffer from a unidimensional and simplistic approach that does not account for the complexities of the in vivo situation, in which multiple cell types and interactions all contribute to definitive outcomes. Therefore, such laboratory studies require validatory research employing preclinical models specifically designed to demonstrate proof-of-concept efficacy, preliminary safety and adaptation to human disease scenarios. Small animal models provide the most economic and logistically feasible preliminary approaches, but the outcomes do not necessarily translate to larger animal or human models. The advantages and limitations of all periodontal-regeneration models need to be carefully considered when planning investigations to ensure that the optimal design is adopted to answer the specific research question posed. Future challenges lie in the areas of stem cell research, scaffold designs, cell delivery and choice of growth factors, along with research to ensure appropriate gingival coverage in order to prevent gingival recession during the healing phase.

Relevance: 30.00%

Abstract:

Sediment samples and hydrographic conditions were studied at 28 stations around Iceland. At these sites, Conductivity-Temperature-Depth (CTD) casts were conducted to collect hydrographic data, and multicorer casts were conducted to collect data on sediment characteristics including grain-size distribution, carbon and nitrogen concentration, and chloroplastic pigment concentration. A total of 14 environmental predictors were used to model sediment characteristics around Iceland across regional geographic space. For this, two approaches were used: Multivariate Adaptive Regression Splines (MARS) and randomForest regression models. RandomForest outperformed MARS in predicting grain-size distribution. MARS models had a greater tendency to over- and underpredict sediment values in areas outside the environmental envelope defined by the training dataset. We provide the first GIS layers of sediment characteristics around Iceland, which can be used as predictors in future models. Although the models performed well, more samples, especially from the shelf areas, will be needed to improve the models in the future.
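
The sketch below illustrates the random forest side of such a comparison: predicting a sediment property from environmental predictors and flagging grid cells outside the training envelope. It uses synthetic placeholder data, not the study's dataset; a MARS counterpart would need an additional package (e.g. py-earth), which is not used here.

```python
# A minimal sketch (not the authors' code) of predicting a sediment property (e.g. a
# grain-size fraction) from environmental predictors with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_stations, n_predictors = 28, 14                  # sample sizes mirror those in the abstract
X = rng.normal(size=(n_stations, n_predictors))    # placeholder environmental predictors
y = rng.uniform(0, 100, size=n_stations)           # placeholder mud fraction (%)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(rf, X, y, cv=5, scoring="r2")   # evaluate before mapping regionally
rf.fit(X, y)

# Predictions on a regional grid; values outside the environmental envelope of the training
# stations should be flagged rather than trusted (the extrapolation issue noted above).
X_grid = rng.normal(size=(1000, n_predictors))
outside = (X_grid < X.min(0)).any(1) | (X_grid > X.max(0)).any(1)
y_grid = rf.predict(X_grid)
print(f"CV R^2: {scores.mean():.2f}; grid cells outside envelope: {outside.sum()}")
```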

Relevance: 30.00%

Abstract:

This paper describes a novel method to enhance current airport surveillance systems used in Advanced Surface Movement Guidance and Control Systems (A-SMGCS). The proposed method allows for the automatic calibration of measurement models and enhanced detection of non-ideal situations, increasing the integrity of surveillance products. It is based on the definition of a set of observables from the surveillance processing chain and a rule-based expert system aimed at changing the data-processing methods.
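
A hedged sketch of that rule-based idea follows: observables extracted from the processing chain trigger rules that switch the processing configuration. The observables, thresholds and actions are hypothetical illustrations, not the paper's actual system.

```python
# A hedged sketch (hypothetical observables and rules, not the paper's system) of monitoring
# processing-chain observables and switching the data-processing configuration when a
# non-ideal situation is detected.
from dataclasses import dataclass

@dataclass
class Observables:
    innovation_rms: float      # residual between predicted and measured positions (m)
    plot_rate: float           # received target reports per second
    bias_estimate: float       # estimated systematic sensor bias (m)

def select_processing(obs: Observables) -> dict:
    cfg = {"measurement_model": "nominal", "recalibrate_bias": False, "coast_tracks": False}
    if abs(obs.bias_estimate) > 5.0:          # automatic calibration of the measurement model
        cfg["recalibrate_bias"] = True
    if obs.innovation_rms > 10.0:             # degraded accuracy: switch to a conservative model
        cfg["measurement_model"] = "inflated_noise"
    if obs.plot_rate < 0.5:                   # sensor dropout: preserve integrity by coasting tracks
        cfg["coast_tracks"] = True
    return cfg

print(select_processing(Observables(innovation_rms=12.3, plot_rate=0.2, bias_estimate=7.1)))
```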

Relevance: 30.00%

Abstract:

Agriculture is, and will continue to be, one of the sectors most affected by climate change. Despite having demonstrated throughout history a great ability to adapt, agriculture today faces new challenges such as meeting growing food demands, developing sustainable agriculture and reducing greenhouse gas emissions. Adaptation policies planned at global, regional or local scales are ultimately implemented through decision-making processes at the farm or individual level, so adaptation potentials have to be set within the context of individual behaviour and regional institutions. Policy instruments can play a formative role in the adoption of such policies by addressing the incentives and disincentives that influence farmers' behaviour. Hence, understanding farm-level decision-making processes and the influence of the determinants of adoption is crucial when designing policies aimed at fostering adoption. This thesis analyses the factors that influence farmers' decision-making in relation to the uptake of adaptation options. The work reviews the current knowledge and develops a methodological framework at the local and regional levels. Whilst the case studies at the local level explore farmers' behaviour towards adaptation, the case study at the regional level attempts to up-scale and generalise theory on the adoption of farm-level adaptation options. The two local case studies (Doñana, Spain and Makueni, Kenya) encompass areas with different climates, impacts of climate change, adaptation constraints and limits, levels of development, institutional support for agriculture, and influence from public and private institutions. Whilst the Doñana case study represents an area plagued by water-use issues, set to be aggravated further by climate change, the Makueni case study exemplifies an area decidedly threatened by climate change where a lack of infrastructure and technology plays a crucial role in the uptake of adaptation options. The proposed framework is based on a wide range of approaches for collecting and analysing data. 
The approaches used for data collection include surveys, interviews, stakeholder workshops, focus group discussions, a review of previous case studies, and public databases. The analytical methods include statistical approaches, multi-criteria analysis for decision-making, land-use optimisation models, and a composite index based on public databases. Statistical approaches are used to assess the influence of socio-economic and psychological factors on the adoption of, or support for, adaptation measures; the approaches used are logistic regressions, principal component analysis and structural equation modelling. Whilst a multi-criteria analysis is used to evaluate adaptation options according to the different perspectives of stakeholders, the optimisation model analyses the optimal combination of adaptation options. The composite index is developed to assess the adoption of adaptation measures in Africa. Overall, the results of the study highlight the importance of considering various spatial scales when assessing the adoption of adaptation measures to climate change. As farmers' behaviour varies at a local scale, generalising behavioural patterns at regional or global scales is correspondingly complex. The results identify the most relevant socio-economic and psychological factors that influence the adoption of adaptation measures to climate change and estimate their effects. They also provide a better understanding of the role of the five types of capital (natural, physical, financial, social, and human) in the uptake of farm-level adaptation options. These assessments of determinants help to explain the adoption of climate change measures and provide helpful information for designing policies aimed at enhancing societal support for adaptation. Finally, the analysis at the regional level develops a composite index that suggests the likelihood that regions in Africa will adopt farm-level adaptation measures and analyses the main causes of this likelihood of adoption.
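
The sketch below illustrates one analytical step of the kind named above: a logistic regression relating household survey factors to the adoption of an adaptation measure. The variables and data are synthetic placeholders, not the thesis's survey data.

```python
# A minimal sketch (hypothetical variable names, synthetic data) of a logistic regression on
# farm-household factors versus adoption of an adaptation measure.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300                                                   # surveyed farm households
X = np.column_stack([
    rng.normal(2.0, 1.0, n),                              # farm size (ha)
    rng.integers(0, 2, n),                                # access to irrigation (0/1)
    rng.normal(0.0, 1.0, n),                              # perceived climate risk (standardised score)
    rng.integers(0, 2, n),                                # membership in a farmer association (0/1)
])
logit = -1.0 + 0.3 * X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2] + 0.5 * X[:, 3]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))          # adopted an adaptation measure (0/1)

model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)
for name, coef in zip(["farm size", "irrigation", "risk perception", "association"], model.coef_[0]):
    print(f"{name:>16s}: odds ratio ~ {np.exp(coef):.2f} per standardised unit")
```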

Relevance: 30.00%

Abstract:

To date, crop models have been little used for characterising the types of cultivars suited to a changed climate, though simulations of altered management (e.g. sowing) are often reported. However, in neither case are model uncertainties evaluated at the same time.

Relevance: 30.00%

Abstract:

The stream-mining approach is defined as a set of cutting-edge techniques designed to process streams of data in real time in order to extract knowledge. In the particular case of classification, stream mining has to adapt its behaviour to volatile underlying data distributions, a phenomenon known as concept drift. Moreover, concept drift may lead to situations where predictive models become invalid and therefore have to be updated to represent the actual concepts the data present. In this context, there is a specific type of concept drift, known as recurrent concept drift, in which the concepts represented by the data have already appeared in the past. In those cases the learning effort could be saved, or at least minimised, by applying a previously trained model. This could be extremely useful in ubiquitous environments characterised by resource-constrained devices. To deal with this scenario, meta-models can be used to enhance the drift-detection mechanisms used by data-stream algorithms, by representing and predicting when the change will occur. There are real-world situations where a concept reappears, as in the case of intrusion detection systems (IDS), where the same incidents, or adaptations of them, usually reappear over time. In these environments, the early prediction of drift by means of better knowledge of past models can help to anticipate the change, thus improving the efficiency of the model with respect to the training instances needed. By using meta-models as a recurrent drift-detection mechanism, the ability to share concept representations among different data-mining processes is opened up. That kind of exchange could improve the accuracy of the resulting local model, as such a model may benefit from patterns similar to the local concept that were observed in other scenarios but not yet locally. It would also improve the efficiency in the use of training instances during the classification process, since the exchange of models would aid in the application of already trained recurrent models previously seen by any of the collaborating devices; that is, the scope of recurrence detection and representation is broadened. In fact, the detection, representation and exchange of concept-drift patterns would be extremely useful for law-enforcement activities fighting cyber crime. Since information exchange is one of the main pillars of cooperation, national units would benefit from the experience and knowledge gained by third parties. Moreover, in the specific scope of critical infrastructure protection it is crucial to have information-exchange mechanisms, from both a strategic and a technical standpoint. The exchange of concept-drift detection schemes in cyber-security environments would aid in preventing, detecting and effectively responding to threats in cyberspace. Furthermore, as a complement to meta-models, a mechanism to assess the similarity between classification models is also needed when dealing with recurrent concepts. In this context, when reusing a previously trained model a rough comparison between concepts is usually made, applying Boolean logic. The introduction of fuzzy-logic comparisons between models could lead to more efficient reuse of previously seen concepts, by applying not just identical models but also similar ones. 
This work addresses the aforementioned open issues by means of: the MMPRec system, which integrates a meta-model mechanism and a fuzzy similarity function; a collaborative environment to share meta-models between different devices; and a recurrent drift generator that makes it possible to test the usefulness of recurrent-drift systems such as MMPRec. Moreover, this thesis presents an experimental validation of the proposed contributions using synthetic and real datasets.
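
The following is a simplified sketch of the general idea, not MMPRec itself: past concepts are summarised by a signature, compared with a graded (fuzzy) similarity rather than a Boolean equality check, and the closest stored model is reused when a recurring concept is detected. The signature and similarity function are illustrative assumptions.

```python
# A simplified sketch (not MMPRec) of reusing previously trained models for recurrent concept drift.
import numpy as np

class ConceptRepository:
    def __init__(self, similarity_threshold: float = 0.8):
        self.entries = []                       # list of (signature, trained_model)
        self.threshold = similarity_threshold

    @staticmethod
    def similarity(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
        """Fuzzy similarity in [0, 1]: 1 means identical concept signatures."""
        return float(np.exp(-np.linalg.norm(sig_a - sig_b)))

    def recall_or_none(self, signature: np.ndarray):
        if not self.entries:
            return None
        best = max(self.entries, key=lambda e: self.similarity(e[0], signature))
        return best[1] if self.similarity(best[0], signature) >= self.threshold else None

    def store(self, signature: np.ndarray, model) -> None:
        self.entries.append((signature, model))

def on_drift_detected(repo, window_X, window_y, train_fn):
    """Called by the drift detector; reuses a similar past model or trains a new one."""
    signature = np.concatenate([window_X.mean(0), [window_y.mean()]])   # crude concept summary
    model = repo.recall_or_none(signature)
    if model is None:                            # concept not seen before (locally or via peers)
        model = train_fn(window_X, window_y)
        repo.store(signature, model)
    return model

# Toy usage with a trivial stand-in "training" function.
repo = ConceptRepository()
X1, y1 = np.random.default_rng(0).normal(size=(50, 3)), np.zeros(50)
m1 = on_drift_detected(repo, X1, y1, train_fn=lambda X, y: ("model", y.mean()))
m2 = on_drift_detected(repo, X1 + 0.01, y1, train_fn=lambda X, y: ("model", y.mean()))
print(m1 is m2)   # True: the recurring concept reuses the stored model instead of retraining
```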

Relevance: 30.00%

Abstract:

Embedded systems have traditionally been conceived as specific-purpose computers with one fixed computational task for their whole lifetime. Stringent requirements in terms of cost, size and weight forced designers to highly optimise their operation for very specific conditions. However, demands for versatility, more intelligent behaviour and, in summary, an increased computing capability began to clash with these limitations, intensified by the uncertainty associated with the more dynamic operating environments where they were progressively being deployed. This brought as a result an increasing need for systems to respond by themselves to events that are unexpected at design time, such as: changes in input data characteristics and in the system environment in general; changes in the computing platform itself, e.g., due to faults and fabrication defects; and changes in functional specifications caused by dynamically changing system objectives. As a consequence, system complexity is increasing but, in turn, autonomous lifetime adaptation without human intervention is progressively being enabled, allowing systems to take their own decisions at run time. Such systems are known, in general, as self-adaptive, and are capable, among other things, of self-configuration, self-optimisation and self-repair.

Traditionally, the soft part of a system has mostly been the only place to provide some degree of adaptation capability. However, the performance-to-power ratio of software-driven devices such as microprocessors is in many situations not adequate for embedded systems. In this scenario, the resulting rise in application complexity is being partly addressed by rising device complexity in the form of multi- and many-core devices; unfortunately, this keeps increasing power consumption. Besides, design methodologies have not improved accordingly, so the computational power made available by all these cores cannot be fully leveraged. Altogether, these factors mean that the computing demands posed by new applications are not being wholly satisfied. The traditional solution to improve performance-to-power ratios has been the switch to hardware-driven specifications, mainly using ASICs. However, their costs are highly prohibitive except in some mass-production cases and, besides, the static nature of their structure complicates meeting the adaptation needs.

Advances in fabrication technologies have turned the FPGA, once a slow, small device used as glue logic in bigger systems, into a very powerful reconfigurable computing device with a vast amount of computational logic resources and embedded, hardened signal-processing and general-purpose processing cores. Its reconfiguration capabilities have enabled software-like flexibility to be combined with hardware-like computing performance, which has the potential to cause a paradigm shift in computer architecture, since hardware can no longer be considered static. This is so because, as is the case with SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible: subsets of the FPGA computational resources can be changed (reconfigured) at run time while the rest remain active. Moreover, this reconfiguration process can be triggered internally by the device itself. This technological boost in reconfigurable hardware devices is covered under the field known as Reconfigurable Computing.

One of the most exotic fields of application that Reconfigurable Computing has enabled is the one known as Evolvable Hardware (EHW), in which this dissertation is framed. The main idea behind the concept is turning hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural, biological species, that guides the direction of change. It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers. In analogy to the biological process of evolution, in EHW the subject of evolution is a population of circuits that tries to adapt to its surrounding environment by progressively becoming better fitted to it, generation after generation. Individuals are circuit configurations in the form of bitstreams that encode reconfigurable circuit descriptions. By selecting those that behave better, i.e., that have a higher fitness value after being evaluated, and using them as parents of the following generation, the EA creates a new offspring population using so-called genetic operators such as mutation and recombination. As generations succeed one another, the whole population is expected to approach the optimum solution to the problem of finding an adequate circuit configuration that fulfils the system objectives.

The state of reconfiguration technology after the Xilinx XC6200 FPGA family was discontinued and replaced by the Virtex families in the late 90s was a major obstacle to advances in EHW: closed (not publicly known) bitstream formats; dependence on manufacturer tools with very limited support for DPR; slow reconfiguration speed; and the fact that random bitstream modifications could be hazardous for device integrity are some of the reasons. However, a proposal in the early 2000s made it possible to keep investigating in this field while DPR technology matured: the Virtual Reconfigurable Circuit (VRC). In essence, a VRC in an FPGA is a virtual layer acting as an application-specific reconfigurable circuit on top of the FPGA fabric that reduces the complexity of the reconfiguration process and increases its speed (compared to native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions that define ad-hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each one containing all the required functions, which are selectable through functionality multiplexers as in microprocessor ALUs. A large register acts as configuration memory, so VRC reconfiguration is very fast, given that it only involves writing this register, which drives the selection signals of the set of multiplexers. However, this virtual layer introduces large overheads: an area overhead, due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, which also reduces the maximum frequency of operation.

The very nature of Evolvable Hardware, able to optimise its own computational behaviour, makes it a good candidate for advancing research in self-adaptive systems. Combining a self-reconfigurable computing substrate, able to be dynamically changed at run time, with an embedded algorithm that provides a direction for change can help fulfil the requirements for autonomous lifetime adaptation of FPGA-based embedded systems. The main proposal of this thesis is hence directed at contributing to the autonomous self-adaptation of the underlying computational hardware of FPGA-based embedded systems by means of Evolvable Hardware. This is tackled by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters. Two main lines of work derive from this distinction: on one side, parametric self-adaptation and, on the other, structural self-adaptation.

The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for online parameter adaptation of signal-processing circuits. The application selected as proof of concept is the optimisation of Discrete Wavelet Transform (DWT) filter coefficients for very specific types of images, oriented to image compression. Hence, adaptive and improved compression efficiency, as compared to standard techniques, is the required goal of evolution. The main challenge lies in reducing the supercomputing resources reported in previous works for the optimisation process, in order to make it suitable for embedded systems. Regarding structural self-adaptation, the thesis goal is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of native reconfiguration capabilities. In this case, the evolution of image processing tasks such as the filtering of unknown and changing types of noise, and edge detection, are the selected proofs of concept. In general, the required goal is the run-time evolution of image processing tasks that are unknown at design time (within a certain complexity range). Here, the mission of the proposal is the incorporation of DPR in EHW to evolve a systolic array architecture, adaptable through reconfiguration, whose evolvability had not previously been studied.

In order to achieve the two stated goals, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE). In the case of parametric adaptation, the proposed platform is characterised by:
• a CE featuring a DWT hardware processing core adaptable through reconfigurable registers that hold the wavelet filter coefficients
• an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems with scarce computing resources
• a new, simplified mutation operator for the selected EA which, together with a fast evaluation mechanism for candidate wavelet filters derived from the existing literature, ensures the feasibility of the evolutionary search involved in wavelet adaptation
In the case of structural adaptation, the platform proposal takes the form of:
• a CE based on a reconfigurable 2D systolic array template composed of reconfigurable processing nodes
• an evolutionary algorithm as AE that searches for candidate configurations of the array using a set of computational functionalities for the nodes, available in a run-time accessible library
• a hardware RE that exploits the native DPR capabilities of FPGAs and makes an efficient use of the available reconfigurable resources of the device to change the behaviour of the CE at run time
• a library of reconfigurable processing elements characterised by position-independent partial bitstreams, used as the set of available configurations for the processing nodes of the array

The main contributions of this thesis can be summarised in the following list:
• An FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems, composed of a Computing Engine, an evolutionary Adaptation Engine and a Reconfiguration Engine. This platform is further developed and tailored for both parametric and structural self-adaptation.
• Regarding parametric self-adaptation, the main contributions are:
– A CE adaptable through reconfigurable registers that enables parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core.
– An AE based on an evolutionary algorithm specifically developed for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems.
– A run-time self-adaptive DWT IP core for embedded systems that allows online optimisation of transform performance for image compression in specific deployment environments characterised by different types of input signals.
– A software model and a hardware implementation of a tool for the automatic, evolutionary construction of custom wavelet transforms.
• Lastly, regarding structural self-adaptation, the main contributions are:
– A CE adaptable through native FPGA fabric reconfiguration, characterised by a two-dimensional systolic array template of reconfigurable processing nodes. Different processing behaviours can be automatically mapped onto the array by using a library of simple reconfigurable processing elements.
– The definition of a library of such processing elements suited for the autonomous run-time synthesis of different image processing tasks.
– The efficient incorporation of DPR in EHW systems, overcoming the main drawbacks of the previous approach of virtual reconfigurable circuits (VRCs). Implementation details of both approaches are also originally compared in this work.
– A fault-tolerant, self-healing platform that enables online functional recovery in hazardous environments. The platform has been characterised from a fault-tolerance perspective: fault models at the FPGA CLB level and at the processing-element level are proposed and, using the Reconfiguration Engine, a systematic fault analysis is performed for one fault in every processing element and for two accumulated faults.
– A dynamic filtering-quality platform that permits online adaptation to different types of noise and different computing behaviours, considering the available computing resources. On one side, non-destructive filters are evolved, enabling scalable cascaded filtering schemes; on the other, size-scalable filters are also evolved, considering dynamically changing computational filtering requirements.

This dissertation is organised in four parts and nine chapters. The first part contains chapter 1, the introduction to and motivation of this PhD work. The reference framework in which this dissertation is framed is then analysed in the second part: chapter 2 introduces the notions of self-adaptation and autonomic computing as a research field more general than the very specific one of this work; chapter 3 introduces evolutionary computation as the technique used to drive adaptation; chapter 4 analyses platforms for reconfigurable computing as the technology to host self-adaptive hardware; and finally chapter 5 defines, classifies and surveys the field of Evolvable Hardware. The third part contains the proposal, development and results obtained: while chapter 6 states the thesis goals and describes the proposal as a whole, chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9, in part 4, concludes the work and describes future research paths.
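
The sketch below is a schematic illustration of the evolutionary loop described above, not the thesis implementation: candidate configurations of a systolic array are vectors of indices into a library of processing elements, the fittest candidates are mutated to form the next generation, and in the real system each evaluation would reconfigure the array through the Reconfiguration Engine. The fitness function and sizes are placeholder assumptions.

```python
# A schematic sketch (not the thesis implementation) of evolving array configurations.
import random

LIBRARY_SIZE = 16          # available reconfigurable processing-element functions
ARRAY_NODES = 4 * 4        # 2D systolic array template (4x4 nodes)

def evaluate(config):
    """Placeholder fitness: in hardware this would run the array on test images (e.g. PSNR)."""
    return -sum((g - LIBRARY_SIZE // 2) ** 2 for g in config)

def mutate(config, rate=0.1):
    return [random.randrange(LIBRARY_SIZE) if random.random() < rate else g for g in config]

population = [[random.randrange(LIBRARY_SIZE) for _ in range(ARRAY_NODES)] for _ in range(8)]
for generation in range(50):
    scored = sorted(population, key=evaluate, reverse=True)
    parents = scored[:2]                                   # selection of the fittest configurations
    population = parents + [mutate(random.choice(parents)) for _ in range(6)]

print("best fitness:", evaluate(max(population, key=evaluate)))
```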

Relevance: 30.00%

Abstract:

Deflection of the hair bundle atop a sensory hair cell modulates the open probability of mechanosensitive ion channels. In response to sustained deflections, hair cells adapt. Two fundamentally distinct models have been proposed to explain transducer adaptation. Both models support the notion that channel open probability is modulated by calcium that enters via the transduction channels. Both also suggest that the primary effect of adaptation is to shift the deflection-response [I(X)] relationship in the direction of the applied stimulus, thus maintaining hair bundle sensitivity. The models differ in several respects. They operate on different time scales: the faster on the order of a few milliseconds or less and the slower on the order of 10 ms or more. The model proposed to explain fast adaptation suggests that calcium enters and binds at or near the transduction channels to stabilize a closed conformation. The model proposed to explain the slower adaptation suggests that adaptation is mediated by an active, force-generating process that regulates the effective stimulus applied to the transduction channels. Here we discuss the evidence in support of each model and consider the possibility that both may function to varying degrees in hair cells of different species and sensory organs.
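
Both models share the prediction that adaptation shifts the set point of the open-probability curve toward the applied stimulus. The following is a minimal phenomenological sketch of that shared idea; the two-state Boltzmann relation and the first-order set-point dynamics are illustrative assumptions, not equations taken from the text.

```latex
% Two-state channel with an adaptive set point X_0(t); adaptation shifts X_0 toward the stimulus X.
\begin{align}
  p_{\mathrm{open}}(X) &= \frac{1}{1 + \exp\!\left[-\dfrac{z\,(X - X_0(t))}{k_B T}\right]},\\
  \tau_a \,\frac{dX_0}{dt} &= X - X_0(t),
\end{align}
% with \tau_a of a few milliseconds or less for the fast, Ca^{2+}-dependent mechanism and
% \tau_a \gtrsim 10\,\mathrm{ms} for the slower, motor-driven mechanism described above.
```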

Relevance: 30.00%

Abstract:

Cultural inheritance can be considered as a mechanism of adaptation made possible by communication, which has reached its greatest development in humans and can allow long-term conservation or rapid change of culturally transmissible traits depending on circumstances and needs. Conservativeness/flexibility is largely modulated by mechanisms of sociocultural transmission. An analysis was carried out by testing the fit of three models to 47 cultural traits (classified in six groups) in 277 African societies. Model A (demic diffusion) is conservation over generations, as shown by correlations of cultural traits with language, used as a measure of historical connection. Model B (environmental adaptation) is measured by correlation to the natural environment. Model C (cultural diffusion) is the spread to neighbors by social contact in an epidemic-like fashion and was tested by measuring the tightness of geographic clustering of the traits. Most traits examined, in particular those affecting family structure and kinship, showed great conservation over generations, as shown by the fit of model A. They are most probably transmitted by family members. This is in agreement with the theoretical demonstration that cultural transmission in the family (vertical) is the most conservative one. Some traits show environmental effects, indicating the importance of adaptation to physical environment. Only a few of the 47 traits showed tight geographic clustering indicating that their spread to nearest neighbors follows model C, as is usually the case for transmission among unrelated people (called horizontal transmission).
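
The sketch below illustrates, for a single trait and with synthetic data, the kind of statistics such a model comparison could rest on: association with language grouping (model A), correlation with an environmental variable (model B), and geographic clustering measured by nearest-neighbour agreement (model C). The statistics and data are simplified stand-ins, not the study's methods.

```python
# An illustrative sketch (synthetic data, simplified statistics) of testing the three models on one trait.
import numpy as np

rng = np.random.default_rng(7)
n = 277                                            # societies
language = rng.integers(0, 20, n)                  # language-family labels (historical connection)
environment = rng.normal(size=n)                   # e.g. an aridity index
coords = rng.uniform(0, 40, size=(n, 2))           # latitude/longitude placeholders
trait = (rng.random(n) < 0.5).astype(float)        # one binary cultural trait

def between_group_r2(values, groups):              # model A: variance explained by language grouping
    grand = values.mean()
    between = sum((values[groups == g].mean() - grand) ** 2 * (groups == g).sum()
                  for g in np.unique(groups))
    return between / ((values - grand) ** 2).sum()

def nn_agreement(values, xy):                       # model C: do nearest neighbours share the trait?
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2) + np.eye(len(xy)) * 1e9
    return (values == values[d.argmin(1)]).mean()

print("A (language) R^2:", round(between_group_r2(trait, language), 3))
print("B (environment) r:", round(np.corrcoef(trait, environment)[0, 1], 3))
print("C (geography) NN agreement:", round(nn_agreement(trait, coords), 3))
```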

Relevance: 30.00%

Abstract:

We generated draft genome sequences for two cold-adapted Archaea, Methanogenium frigidum and Methanococcoides burtonii, to identify genotypic characteristics that distinguish them from Archaea with a higher optimal growth temperature (OGT). Comparative genomics revealed trends in amino acid and tRNA composition, and in structural features of proteins. Proteins from the cold-adapted Archaea are characterized by a higher content of noncharged polar amino acids, particularly Gln and Thr, and a lower content of hydrophobic amino acids, particularly Leu. Sequence data from nine methanogen genomes (OGT 15-98 degrees C) were used to generate modeled protein structures. Analysis of the models from the cold-adapted Archaea showed a strong tendency in the solvent-accessible area for more Gln, Thr, and hydrophobic residues and fewer charged residues. A cold shock domain (CSD) protein (CspA homolog) was identified in M. frigidum, two hypothetical proteins with CSD folds in M. burtonii, and a unique winged-helix DNA-binding domain protein in M. burtonii. This suggests that these types of nucleic acid binding proteins have a critical role in cold-adapted Archaea. Structural analysis of tRNA sequences from the Archaea indicated that GC content is the major factor influencing tRNA stability in hyperthermophiles, but not in psychrophiles, mesophiles or moderate thermophiles. Below an OGT of 60 degrees C, the GC content of tRNA was largely unchanged, indicating that any requirement for flexibility of tRNA in psychrophiles is mediated by other means. This is the first time that comparisons have been performed with genome data from Archaea spanning the growth temperature extremes, from psychrophiles to hyperthermophiles.
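
A small illustrative sketch follows of the kind of composition statistics compared above: fractions of the highlighted noncharged polar residues (Gln, Thr), hydrophobic and charged residues in protein sequences, and GC content of tRNA genes. The sequences and residue groupings are toy placeholders, not data from the two genomes.

```python
# Toy composition statistics (placeholder sequences, simplified residue groupings).
POLAR_NONCHARGED = set("QT")          # Gln, Thr (the residues highlighted above)
HYDROPHOBIC = set("AVLIMFWC")
CHARGED = set("DEKR")

def residue_fractions(protein: str) -> dict:
    n = len(protein)
    return {
        "Gln+Thr": sum(aa in POLAR_NONCHARGED for aa in protein) / n,
        "hydrophobic": sum(aa in HYDROPHOBIC for aa in protein) / n,
        "charged": sum(aa in CHARGED for aa in protein) / n,
    }

def gc_content(trna: str) -> float:
    return sum(b in "GC" for b in trna.upper()) / len(trna)

cold_adapted = "MQTQTLLNQTGQATVVLQSTDEQK"      # placeholder fragments, not real M. burtonii data
thermophile  = "MKLVALLEIRKDEGLRKEVLAAL"
print("cold-adapted:", residue_fractions(cold_adapted))
print("thermophile :", residue_fractions(thermophile))
print("tRNA GC:", round(gc_content("GGGCCCGTAGCTCAGTTGGGAGAGCGCC"), 2))
```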

Relevance: 30.00%

Abstract:

Elevated ocean temperatures can cause coral bleaching, the loss of colour from reef-building corals because of a breakdown of the symbiosis with the dinoflagellate Symbiodinium. Recent studies have warned that global climate change could increase the frequency of coral bleaching and threaten the long-term viability of coral reefs. These assertions are based on projecting the coarse output from atmosphere-ocean general circulation models (GCMs) to the local conditions around representative coral reefs. Here, we conduct the first comprehensive global assessment of coral bleaching under climate change by adapting the NOAA Coral Reef Watch bleaching prediction method to the output of a low- and high-climate sensitivity GCM. First, we develop and test algorithms for predicting mass coral bleaching with GCM-resolution sea surface temperatures for thousands of coral reefs, using a global coral reef map and 1985-2002 bleaching prediction data. We then use the algorithms to determine the frequency of coral bleaching and required thermal adaptation by corals and their endosymbionts under two different emissions scenarios. The results indicate that bleaching could become an annual or biannual event for the vast majority of the world's coral reefs in the next 30-50 years without an increase in thermal tolerance of 0.2-1.0 degrees C per decade. The geographic variability in required thermal adaptation found in each model and emissions scenario suggests that coral reefs in some regions, like Micronesia and western Polynesia, may be particularly vulnerable to climate change. Advances in modelling and monitoring will refine the forecast for individual reefs, but this assessment concludes that the global prognosis is unlikely to change without an accelerated effort to stabilize atmospheric greenhouse gas concentrations.
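
The sketch below is a hedged illustration of a degree-heating-style bleaching criterion of the kind adapted above from NOAA Coral Reef Watch, with an optional thermal-tolerance increase per decade. The window, thresholds, adaptation term and synthetic SST series are assumptions for illustration, not the study's algorithm or data.

```python
# A hedged sketch of a degree-heating-style bleaching criterion with an assumed adaptation term.
import numpy as np

def bleaching_years(monthly_sst, mmm, adaptation_per_decade=0.0,
                    window=3, dhm_threshold=2.0):
    """Count years in which accumulated thermal stress exceeds the bleaching threshold.

    monthly_sst: monthly mean SST (degrees C), e.g. from a GCM grid cell
    mmm: maximum monthly mean of the local climatology (degrees C)
    adaptation_per_decade: assumed increase in thermal tolerance (degrees C per decade)
    """
    years = np.arange(len(monthly_sst)) / 12.0
    tolerance = mmm + adaptation_per_decade * years / 10.0
    hotspot = np.clip(monthly_sst - tolerance, 0.0, None)        # anomalies above tolerance
    dhm = np.convolve(hotspot, np.ones(window), mode="same")     # degree heating months
    return int(np.unique(np.flatnonzero(dhm >= dhm_threshold) // 12).size)

rng = np.random.default_rng(3)
months = 50 * 12                                                 # a 50-year projection
warming = np.linspace(0.0, 2.0, months)                          # idealised warming trend (degrees C)
sst = 28.0 + 1.5 * np.sin(2 * np.pi * np.arange(months) / 12) + warming + rng.normal(0, 0.3, months)
print("bleaching years without adaptation:", bleaching_years(sst, mmm=29.5))
print("with 0.2 degrees C per decade tolerance increase:",
      bleaching_years(sst, mmm=29.5, adaptation_per_decade=0.2))
```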

Relevance: 30.00%

Abstract:

The net effect of sexual selection on nonsexual fitness is controversial. On one side, elaborate display traits and preferences for them can be costly, reducing the nonsexual fitness of individuals possessing them, as well as that of their offspring. In contrast, sexual selection may reinforce nonsexual fitness if an individual's attractiveness and quality are genetically correlated. According to recent models, such good-genes mate choice should increase both the extent and rate of adaptation. We evolved 12 replicate populations of Drosophila serrata in a powerful two-way factorial experimental design to test the separate and combined contributions of natural and sexual selection to adaptation to a novel larval food resource. Populations evolving in the presence of natural selection had significantly higher mean nonsexual fitness when measured over three generations (13-15) during the course of experimental evolution (16-23% increase). The effect of natural selection was even more substantial when measured in a standardized, monogamous mating environment at the end of the experiment (generation 16; 52% increase). In contrast, and despite strong sexual selection on display traits, there was no evidence from any of the four replicate fitness measures that sexual selection promoted adaptation. In addition, a comparison of fitness measures conducted under different mating environments demonstrated a significant direct cost of sexual selection to females, likely arising from some form of male-induced harm. Indirect benefits of sexual selection in promoting adaptation to this novel resource environment therefore appear to be absent in this species, despite prior evidence suggesting the operation of good-genes mate choice in their ancestral environment. How novel environments affect the operation of good-genes mate choice is a fundamental question for future sexual selection research.

Relevance: 30.00%

Abstract:

While classic intergroup theories have specified the processes explaining situational shifts in social identification, the processes whereby social identities change more profoundly and become integrated within the self have yet to be proposed. To this end, the present studies investigate the processes by which group members integrate a new social identity as they join a new group. Combining a social identity approach with stress and coping models, this research tests whether social factors (i.e., needs satisfied by fellow group members, social support) have an impact on the adaptation strategies group members use to deal with the novelty of the situation and to fit into their new group (seeking information and adopting group norms vs. disengaging). These strategies, in turn, should predict changes in the level of identification with the new social group over time, as well as enhanced psychological adjustment. These associations are tested among university students over the course of their first academic year (Study 1) and among online gamers joining a newly established online community (Study 2). Path analyses provide support for the hypothesised associations. The results are discussed in light of recent theoretical developments pertaining to intraindividual changes in social identities and their integration in the self.