984 results for Teste de mutação e recombinação somática : Somatic mutation and recombination test : SMART


Relevance: 100.00%

Abstract:

Background: We address the problem of studying recombinational variations in (human) populations. In this paper, our focus is on one computational aspect of the general task: given two networks G1 and G2, with both mutation and recombination events, defined on overlapping sets of extant units, the objective is to compute a consensus network G3 with the minimum number of additional recombinations. We describe a polynomial-time algorithm with a guarantee that the number of computed new recombination events is within ϵ = sz(G1, G2) (sz is a well-behaved function of the sizes and topologies of G1 and G2) of the optimal number of recombinations. To date, this is the best known result for a network consensus problem.

Results: Although the network consensus problem can be applied to a variety of domains, here we focus on the structure of human populations. With our preliminary analysis on a segment of human Chromosome X data, we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. These results have been verified independently using traditional manual procedures. To the best of our knowledge, this is the first recombination-based characterization of human populations.

Conclusion: We show that our mathematical model identifies recombination spots in the individual haplotypes; the aggregate of these spots over a set of haplotypes defines a recombinational landscape that has enough signal to detect the continental as well as the population divide based on a short segment of Chromosome X. In particular, we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. The agreement with mutation-based analysis can be viewed as an indirect validation of our results and the model. Since the model in principle gives us more information embedded in the networks, in our future work we plan to investigate more non-traditional questions via the structures computed by our methodology.
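The quantity being minimized here, the number of recombination events in a network, corresponds in a rooted phylogenetic network to the number of reticulation nodes, i.e., nodes with more than one parent. A minimal sketch of that count (the directed-graph representation and the use of networkx are our illustration, not the paper's method):

    import networkx as nx

    def count_recombinations(g: nx.DiGraph) -> int:
        # Reticulation (recombination) nodes inherit material from more
        # than one parent; mutation-only nodes have a single parent.
        return sum(1 for node in g.nodes if g.in_degree(node) > 1)

    # Toy network: h is a recombinant child of lineages a and b.
    g = nx.DiGraph([("root", "a"), ("root", "b"), ("a", "h"), ("b", "h")])
    assert count_recombinations(g) == 1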

Relevance: 100.00%

Abstract:

The state-transition diagrams were produced with the LaTeX software.

Relevance: 100.00%

Abstract:

Mutation and recombination processes are involved in the genetic and phenotypic variation of RNA viruses, leading to the emergence of new variant strains and giving rise to virus population diversity that is then shaped by the host, particularly by the immune system, as has occurred with infectious bronchitis virus (IBV) in chickens. The consequence is a continuous emergence of new IBV variants with regard to pathotypes, serotypes and protectotypes. Nucleotide sequencing and subsequent genetic analysis of the S1 and N protein gene sequences provide a fast and accurate method to classify and predict IBV genotypes, and a powerful instrument to monitor the phylogenetic and epidemiological evolution of IBV variants. Despite the use of vaccination programmes, infectious bronchitis has become a serious problem in Brazil. A significant number of IBV field variants have been identified circulating in Brazilian commercial poultry flocks between 2000 and 2006, and more recently in Argentina. These viruses seem to be indigenous, because they showed low genetic relatedness to the majority of the reference strains from North America, Europe and Asia, but were moderately to highly related to one another. In summary, indigenous field IBV variants have been evolving and circulating in the field in Brazil and Argentina, and should be considered initial candidates for protection against current IBV infections in chickens. However, in vitro and in vivo studies are needed to determine the pathogenicity and immunogenicity of these new isolates before defining a new vaccine strain.
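The classification described here rests on sequence relatedness; as a purely illustrative sketch of the most basic underlying computation, pairwise nucleotide identity between aligned S1 fragments (the actual studies rely on full phylogenetic analysis, not this shortcut):

    def pairwise_identity(seq_a: str, seq_b: str) -> float:
        # Percent identity over aligned positions, skipping gaps.
        pairs = [(a, b) for a, b in zip(seq_a, seq_b) if "-" not in (a, b)]
        if not pairs:
            return 0.0
        return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

    # Toy aligned S1 fragments; real analyses use full-length alignments.
    print(pairwise_identity("ATGCTAGCTA", "ATGCTGGCTA"))  # 90.0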

Relevance: 100.00%

Abstract:

The aim of this Doctoral Thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications, and then the turbofan or turboprop best suited to the specific application is chosen. In the aeronautical piston engine field, which was dormant for several decades as interest shifted towards turbine aircraft, new materials with increased performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, with several non-linearities needed to describe the behaviour of the phenomena. In this case the objective function has many local extremes, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, may offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a mono-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration of minimum mass that satisfies the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller shaft speeds and the maximum pressure in the combustion chamber. The design variable bounds, which describe the solution domain from the geometrical point of view, are also introduced. In the Matlab® Optimization environment, the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step the elitist method is applied, in order to protect the fittest individuals from potential disruption by mutation and recombination, letting them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, in order to build an automatic 3D CAD model of each component of the propulsive system, giving a direct pre-visualization of the final product while still in the engine's preliminary design phase. With the purpose of showing the performance of the algorithm and validating this optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, 4 cylinders, 4T, Diesel. Many verifications are made on the mechanical components of the engine, in order to test their feasibility and to decide their survival through generations. A system of inequalities is used to describe the non-linear relations between the design variables, and components are checked against it for static and dynamic load configurations. The geometrical boundaries of the design variables are taken from actual engine data and similar design cases. Among the many simulations run for algorithm testing, twelve were chosen as representative of the distribution of the individuals. Then, as an example, for each simulation the corresponding 3D models of the crankshaft and the connecting rod were automatically built. In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) in comparison to the original configuration, and an acceptable robustness of the method. The algorithm developed here is shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfil the feasibility design specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up the design cycle and joining modern computational performance and technological awareness to long-standing traditional design experience.
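A minimal sketch of the kind of mono-objective loop described above: minimize total component mass under feasibility checks, with elitism carrying the best individual intact into the next generation. All names, bounds and formulas are illustrative placeholders (and crossover is omitted for brevity), not values from the thesis:

    import random

    BOUNDS = {"bore_mm": (70.0, 95.0), "stroke_mm": (70.0, 100.0)}  # hypothetical

    def total_mass(ind):
        # Placeholder objective: component masses estimated from geometry.
        return 0.002 * ind["bore_mm"] ** 2 + 0.05 * ind["stroke_mm"]

    def feasible(ind):
        # Placeholder for the static/dynamic load inequality system.
        return ind["stroke_mm"] / ind["bore_mm"] < 1.3

    def random_individual():
        return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

    def mutate(ind, rate=0.2):
        child = dict(ind)
        for k, (lo, hi) in BOUNDS.items():
            if random.random() < rate:
                child[k] = min(hi, max(lo, child[k] + random.gauss(0, 0.1 * (hi - lo))))
        return child

    def evolve(pop_size=30, generations=100):
        pop = [random_individual() for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted((i for i in pop if feasible(i)), key=total_mass)
            scored = scored or [random_individual()]  # restart if none feasible
            elite = scored[0]  # elitism: the fittest survives undamaged
            parents = scored[: max(2, len(scored) // 2)]
            pop = [elite] + [mutate(random.choice(parents))
                             for _ in range(pop_size - 1)]
        return elite

    best = evolve()
    print(best, total_mass(best))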

Relevance: 100.00%

Abstract:

Embedded systems have traditionally been conceived as specific-purpose computers with one fixed computational task for their whole lifetime. Stringent requirements in terms of cost, size and weight forced designers to highly optimise their operation for very specific conditions.
However, demands for versatility, more intelligent behaviour and, in summary, an increased computing capability began to clash with these limitations, intensified by the uncertainty associated with the more dynamic operating environments where such systems were progressively being deployed. This brought as a result an increasing need for systems to respond by themselves to events unexpected at design time, such as: changes in input data characteristics and in the system environment in general; changes in the computing platform itself, e.g., due to faults and fabrication defects; and changes in functional specifications caused by dynamically changing system objectives. As a consequence, system complexity is increasing, but in turn, autonomous lifetime adaptation without human intervention is progressively being enabled, allowing systems to take their own decisions at run-time. Such systems are known, in general, as self-adaptive, and are capable, among other things, of self-configuration, self-optimisation and self-repair. Traditionally, the soft part of a system has mostly been the only place to provide systems with some degree of adaptation capability. However, the performance-to-power ratios of software-driven devices like microprocessors are not adequate for embedded systems in many situations. In this scenario, the resulting rise in application complexity is being partly addressed by rising device complexity in the form of multi- and many-core devices; but unfortunately, this keeps increasing power consumption. Besides, design methodologies have not improved accordingly, so the computational power of all these cores cannot be completely leveraged. Altogether, these factors mean that the computing demands new applications pose are not being wholly satisfied. The traditional solution to improve performance-to-power ratios has been the switch to hardware-driven specifications, mainly using ASICs. However, their costs are highly prohibitive except for some mass-production cases and, besides, the static nature of their structure complicates responding to adaptation needs. Advances in fabrication technologies have meant that the once slow, small FPGA, used as glue logic in bigger systems, has grown into a very powerful, reconfigurable computing device with a vast amount of computational logic resources and embedded, hardened signal-processing and general-purpose processing cores. Its reconfiguration capabilities have enabled software-like flexibility to be combined with hardware-like computing performance, which has the potential to cause a paradigm shift in computer architecture, since hardware can no longer be considered static. This is so because, as is the case with SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible. This means that subsets of the FPGA computational resources can now be changed (reconfigured) at run-time while the rest remain active. Besides, this reconfiguration process can be triggered internally by the device itself. This technological boost in reconfigurable hardware devices is covered under the field known as Reconfigurable Computing. One of the most exotic fields of application that Reconfigurable Computing has enabled is the one known as Evolvable Hardware (EHW), in which this dissertation is framed. The main idea behind the concept is turning hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural, biological species, which guides the direction of change.
It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers. In analogy to the biological process of evolution, in EHW the subject of evolution is a population of circuits that tries to adapt to its surrounding environment by becoming progressively better fitted to it generation after generation. Individuals are circuit configurations, in the form of bitstreams, that describe reconfigurable circuits. By selecting those that behave better, i.e., that have a higher fitness value after being evaluated, and using them as parents of the following generation, the EA creates a new offspring population by applying so-called genetic operators such as mutation and recombination. As generations succeed one another, the whole population is expected to approach the optimum solution to the problem of finding an adequate circuit configuration that fulfils the system objectives. The state of reconfiguration technology after the Xilinx XC6200 FPGA family was discontinued and replaced by the Virtex families in the late 90s was a major obstacle to advancement in EHW: closed (not publicly known) bitstream formats; dependence on manufacturer tools with very limited support for DPR; slow reconfiguration; and random bitstream modifications being potentially hazardous for device integrity are some of the reasons. However, a proposal in the early 2000s, the Virtual Reconfigurable Circuit (VRC), made it possible to keep investigating this field while DPR technology kept maturing. In essence, a VRC in an FPGA is a virtual layer acting as an application-specific reconfigurable circuit on top of the FPGA fabric that reduces the complexity of the reconfiguration process and increases its speed (compared to native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions that define ad-hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each one containing all the required functions, which are selectable through functionality multiplexers as in microprocessor ALUs. A large register acts as configuration memory, so VRC reconfiguration is very fast given that it only involves writing this register, which drives the selection signals of the set of multiplexers. However, this virtual layer introduces large overheads: an area overhead due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, which also reduces the maximum frequency of operation. The very nature of Evolvable Hardware, able to optimise its own computational behaviour, makes it a good candidate for advancing research on self-adaptive systems. Combining a self-reconfigurable computing substrate able to be dynamically changed at run-time with an embedded algorithm that provides a direction for change can help fulfil the requirements for autonomous lifetime adaptation of FPGA-based embedded systems. The main proposal of this thesis is hence directed at contributing to the autonomous self-adaptation of the underlying computational hardware of FPGA-based embedded systems by means of Evolvable Hardware. This is tackled by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters.
Two main lines of work derive from this distinction: on one side, parametric self-adaptation and, on the other, structural self-adaptation. The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for online parameter adaptation of signal-processing circuits. The application selected as proof of concept is the optimisation of Discrete Wavelet Transform (DWT) filter coefficients for very specific types of images, oriented to image compression. Hence, adaptive and improved compression efficiency, as compared to standard techniques, is the required goal of evolution. The main challenge lies in reducing the supercomputing resources reported in previous works for the optimisation process, in order to make it suitable for embedded systems. Regarding structural self-adaptation, the thesis goal is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of native reconfiguration capabilities. In this case, the evolution of image processing tasks, such as filtering of unknown and changing types of noise and edge detection, is the selected proof of concept. In general, the required goal is the run-time evolution of image processing behaviours unknown at design time (within a certain complexity range). Here, the mission of the proposal is the incorporation of DPR into EHW to evolve a systolic array architecture, adaptable through reconfiguration, whose evolvability had not been previously studied. In order to achieve the two stated goals, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE).

In the case of parametric adaptation, the proposed platform is characterised by:
• a CE featuring a DWT hardware processing core adaptable through reconfigurable registers that hold the wavelet filter coefficients
• an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems with scarce computing resources
• a new, simplified mutation operator for the selected EA that, together with a fast evaluation mechanism for candidate wavelet filters derived from the existing literature, ensures the feasibility of the evolutionary search involved in wavelet adaptation

In the case of structural adaptation, the platform proposal takes the form of:
• a CE based on a reconfigurable 2D systolic array template composed of reconfigurable processing nodes
• an evolutionary algorithm as AE that searches for candidate configurations of the array using a set of computational functionalities for the nodes, available in a run-time-accessible library
• a hardware RE that exploits the native DPR capabilities of FPGAs and makes efficient use of the available reconfigurable resources of the device to change the behaviour of the CE at run time
• a library of reconfigurable processing elements characterised by position-independent partial bitstreams, used as the set of available configurations for the processing nodes of the array

The main contributions of this thesis can be summarised in the following list:
• An FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems composed of a Computing Engine, an evolutionary Adaptation Engine and a Reconfiguration Engine. This platform is further developed and tailored for both parametric and structural self-adaptation.
• Regarding parametric self-adaptation, the main contributions are:
– A CE adaptable through reconfigurable registers that enables parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core.
– An AE based on an Evolutionary Algorithm specifically developed for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems.
– A run-time self-adaptive DWT IP core for embedded systems that allows for online optimisation of transform performance for image compression in specific deployment environments characterised by different types of input signals.
– A software model and hardware implementation of a tool for the automatic, evolutionary construction of custom wavelet transforms.
• Lastly, regarding structural self-adaptation, the main contributions are:
– A CE adaptable through native FPGA fabric reconfiguration, featuring a two-dimensional systolic array template of reconfigurable processing nodes. Different processing behaviours can be automatically mapped onto the array by using a library of simple reconfigurable processing elements.
– The definition of a library of such processing elements suited for the autonomous run-time synthesis of different image processing tasks.
– The efficient incorporation of DPR into EHW systems, overcoming the main drawbacks of the previous approach of virtual reconfigurable circuits. Implementation details for both approaches are also originally compared in this work.
– A fault-tolerant, self-healing platform that enables online functional recovery in hazardous environments. The platform has been characterised from a fault-tolerance perspective: fault models at the FPGA CLB level and the processing element level are proposed and, using the RE, a systematic fault analysis is done for one fault in every processing element and for two accumulated faults.
– A dynamic filtering-quality platform that permits online adaptation to different types of noise and different computing behaviours, considering the available computing resources. On one side, non-destructive filters are evolved, enabling scalable cascaded filtering schemes; on the other, size-scalable filters are also evolved, considering dynamically changing computational filtering requirements.

This dissertation is organised in four parts and nine chapters. The first part contains chapter 1, the introduction to and motivation of this PhD work. Next, the reference framework in which this dissertation is framed is analysed in the second part: chapter 2 features an introduction to the notions of self-adaptation and autonomic computing as a more general research field than the very specific one of this work; chapter 3 introduces evolutionary computation as the technique to drive adaptation; chapter 4 analyses platforms for reconfigurable computing as the technology to host self-adaptive hardware; and finally chapter 5 defines, classifies and surveys the field of Evolvable Hardware. The third part of the work follows, containing the proposal, development and results obtained: while chapter 6 contains a statement of the thesis goals and the description of the proposal as a whole, chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9 in part 4 concludes the work and describes future research paths.
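As an illustration of the evolutionary loop such a platform runs, here is a (1+λ)-style search over array configurations, where each gene picks one processing-element configuration from a run-time library. The library contents and the fitness placeholder are our assumptions, not the thesis implementation:

    import random

    LIBRARY = ["identity", "median", "sobel_x", "sobel_y", "average"]  # hypothetical PE set
    ARRAY_NODES = 4 * 4  # 2D systolic array flattened to 16 nodes

    def evaluate(config):
        # Placeholder fitness: on the real platform this step would
        # reconfigure the array via DPR and compare its output image
        # against a reference.
        return -sum(hash((i, g)) % 97 for i, g in enumerate(config))

    def mutate(config, rate=0.1):
        return [random.choice(LIBRARY) if random.random() < rate else g
                for g in config]

    def one_plus_lambda(lam=4, generations=200):
        parent = [random.choice(LIBRARY) for _ in range(ARRAY_NODES)]
        best = evaluate(parent)
        for _ in range(generations):
            for child in (mutate(parent) for _ in range(lam)):
                f = evaluate(child)
                if f >= best:  # accept equal fitness to allow neutral drift
                    parent, best = child, f
        return parent, best

    cfg, fitness = one_plus_lambda()
    print(fitness)

Accepting equal-fitness children lets the search drift across fitness plateaus, a common choice in evolvable hardware work.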

Relevance: 100.00%

Abstract:

Bacterial mutation rates can increase and produce genetic novelty, as shown by in vitro and in silico experiments. Despite the cost of a heavy deleterious mutation load, mutator alleles, which increase the mutation rate, can spread in asexual populations during adaptation because they remain associated with the rare favorable mutations they generate. This indirect selection for a genetic system that generates diversity (second-order selection) is expected to be highly sensitive to changes in the dynamics of adaptation. Here we show by a simulation approach that even rare genetic exchanges, such as bacterial conjugation or transformation, can dramatically reduce the selection of mutators. Moreover, drift or competition between the processes of mutation and recombination in the course of adaptation reveals how second-order selection is unable to optimize the rate of generation of novelty.
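A minimal sketch of the kind of simulation argued from here, assuming illustrative parameters rather than those of the study: a mutator allele raises the beneficial-mutation supply and hitchhikes with the adaptations it creates, while a rare recombination step that swaps the mutator locus between lineages breaks that association:

    import random

    N, GENS = 1000, 300
    SEL_BENEFIT, BASE_RATE, MUTATOR_FOLD = 0.05, 1e-3, 100

    def simulate(recomb_prob):
        # Each individual is (carries_mutator, beneficial_mutation_count).
        pop = [(random.random() < 0.01, 0) for _ in range(N)]
        for _ in range(GENS):
            weights = [(1 + SEL_BENEFIT) ** k for _, k in pop]  # fitness
            parents = random.choices(pop, weights=weights, k=N)
            nxt = []
            for mut, k in parents:
                # Mutators acquire beneficial mutations 100x more often.
                if random.random() < BASE_RATE * (MUTATOR_FOLD if mut else 1):
                    k += 1
                if random.random() < recomb_prob:
                    # Rare genetic exchange: the mutator locus is swapped in
                    # from a random donor, decoupling it from the adapted
                    # background it generated.
                    mut = random.choice(parents)[0]
                nxt.append((mut, k))
            pop = nxt
        return sum(m for m, _ in pop) / N

    print("mutator frequency, no recombination:  ", simulate(0.0))
    print("mutator frequency, rare recombination:", simulate(0.01))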

Relevance: 100.00%

Abstract:

Background: The HIV virus is known for its ability to exploit numerous genetic and evolutionary mechanisms to ensure its proliferation, among them high replication, mutation and recombination rates. Sliding MinPD, a recently introduced computational method [1], was used to investigate the patterns of evolution of serially sampled HIV-1 sequence data from eight patients, with a special focus on the emergence of X4 strains. Unlike other phylogenetic methods, Sliding MinPD combines distance-based inference with a nonparametric bootstrap procedure and automated recombination detection to reconstruct the evolutionary history of longitudinal sequence data. We present serial evolutionary networks as a longitudinal representation of the mutational pathways of a viral population in a within-host environment. The longitudinal representation of the evolutionary networks was complemented with charts of clinical markers to facilitate correlation analysis between pertinent clinical information and the evolutionary relationships. Results: Analysis based on the predicted networks suggests the following: significantly stronger recombination signals (p = 0.003) for the inferred ancestors of the X4 strains; recombination events between different lineages and between putative reservoir virus and virus from a later population; and an early star-like topology observed for four of the patients who died of AIDS. A significantly higher number of recombinants was predicted at sampling points that corresponded to peaks in the viral load levels (p = 0.0042). Conclusion: Our results indicate that serial evolutionary networks of HIV sequences enable systematic statistical analysis of the implicit relations embedded in the topology of the structure and can greatly facilitate the identification of patterns of evolution that can lead to specific hypotheses and new insights. The conclusions of applying our method to empirical HIV data support the conventional wisdom behind the new generation of HIV treatments: that in order to keep the virus in check, viral loads need to be suppressed to almost undetectable levels.
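The core step behind MinPD-style methods is assigning each later-sampled sequence a putative ancestor among earlier samples by minimum pairwise distance; a bare-bones sketch of that step follows (data layout and distance measure are illustrative; Sliding MinPD adds sliding windows, bootstrap support and recombination detection on top):

    def p_distance(a: str, b: str) -> float:
        # Proportion of differing sites between two aligned sequences.
        return sum(x != y for x, y in zip(a, b)) / len(a)

    def min_pd_ancestors(samples):
        # samples: list of (time_point, name, aligned_seq), sorted by time.
        # Returns {name: ancestor_name}, linking each sequence to its
        # closest sequence from any strictly earlier time point.
        links = {}
        for i, (t, name, seq) in enumerate(samples):
            earlier = [(n2, s2) for t2, n2, s2 in samples[:i] if t2 < t]
            if earlier:
                links[name] = min(earlier, key=lambda e: p_distance(seq, e[1]))[0]
        return links

    samples = [(0, "v0", "ACGTACGT"), (1, "v1", "ACGTACGA"), (2, "v2", "ACCTACGA")]
    print(min_pd_ancestors(samples))  # {'v1': 'v0', 'v2': 'v1'}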

Relevance: 100.00%

Abstract:

The accumulation of somatic mutations in mtDNA is correlated with aging. In this work, we sought to identify somatic mutations in the HVS-1 region (D-loop) of mtDNA that might be associated with aging. For this, we compared 31 grandmothers (mean age: 63 ± 2.3 years) and their 62 grandchildren (mean age: 15 ± 4.1 years), the offspring of their daughters. Direct DNA sequencing showed that mutations absent in the grandchildren were detected in a presumably homoplasmic state in three grandmothers and in a heteroplasmic state in an additional 13 grandmothers; no mutations were detected in the remaining 15 grandmothers. However, cloning followed by DNA sequencing in 12 grandmothers confirmed homoplasmy in only one of the three mutations previously considered homoplasmic, and did not confirm heteroplasmy in three out of nine grandmothers found to be heteroplasmic by direct sequencing. Thus, of the 12 grandmothers whose mtDNA was analyzed by cloning, eight were heteroplasmic for mutations not detected in their grandchildren. In this study, the use of genetically related subjects allowed us to demonstrate the occurrence of age-related (> 60 years old) mutations (homoplasmy and heteroplasmy). It is possible that both of these situations (homoplasmy and heteroplasmy) were a long-term consequence of mitochondrial oxidative phosphorylation, which can lead to the accumulation of mtDNA mutations throughout life.
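The screening logic is simple to state: a variant present in a grandmother's HVS-1 sequence but absent in both of her grandchildren, who carry the same maternal germline, is a candidate age-related somatic mutation. A toy sketch of that comparison, assuming pre-aligned sequences (all names are illustrative):

    def candidate_somatic_sites(grandmother: str, grandchildren: list[str]):
        # Positions where the grandmother differs from ALL her grandchildren.
        # Grandchildren carry the maternal germline, so shared differences
        # point to somatic change in the grandmother; distinguishing
        # homoplasmy from heteroplasmy still requires cloning.
        sites = []
        for pos, gm_base in enumerate(grandmother):
            if all(gc[pos] != gm_base for gc in grandchildren):
                sites.append((pos, gm_base, {gc[pos] for gc in grandchildren}))
        return sites

    gm = "ACGTTACA"
    gcs = ["ACGTCACA", "ACGTCACA"]
    print(candidate_somatic_sites(gm, gcs))  # [(4, 'T', {'C'})]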

Relevance: 100.00%

Abstract:

Background: Activating mutations of the anaplastic lymphoma receptor tyrosine kinase gene (ALK) have been identified in both somatic and familial neuroblastoma. The most common somatic mutation, F1174L, is associated with NMYC amplification and displays efficient transforming activity in vivo. In addition, ALK-F1174L and NMYC were shown to cooperate in neuroblastoma tumorigenesis in animal models. To analyse the role of ALK mutations in the oncogenesis of neuroblastoma, ALK-wt and various ALK mutants were transduced into murine neural crest stem cells (MONC-1). Methods: ALK-wt and the F1174L and R1275Q mutants were stably expressed by retroviral infection, using the pMIGR1 vector, in the murine neural crest stem cell line MONC-1, previously immortalised with v-myc, and the cells were then implanted subcutaneously or orthotopically in nude mice. Results: Both MONC-1-ALK-F1174L and -R1275Q cells displayed a rapid tumour-forming capacity upon subcutaneous injection in nude mice compared to control MONC-1-MIGR or MONC-1 cells. Interestingly, the transforming capacity of the F1174L mutant was much more potent than that of the R1275Q mutant in murine neural crest stem cells, while ALK-wt was not tumorigenic. In addition, mice implanted orthotopically in the left adrenal gland with MONC-1-ALK-F1174L cells developed highly aggressive tumours in 100% of cases within three weeks, while MONC-1-MIGR- or MONC-1-derived tumours displayed a longer latency and a reduced tumour take. Conclusions: The activating ALK-F1174L mutant is highly tumorigenic in neural crest stem cells. Nevertheless, we cannot exclude a functional implication of the v-myc oncogene used for MONC-1 cell immortalisation. Indeed, the control MONC-1-MIGR and MONC-1 cells were also able to give rise to subcutaneous and orthotopic tumours, although with considerably reduced efficiency. Further investigations using neural crest stem cells lacking exogenous myc expression are currently under way to assess the exclusive role of ALK mutations in NB oncogenesis.

Relevance: 100.00%

Abstract:

Paroxysmal nocturnal hemoglobinuria (PNH) is an acquired clonal syndrome characterized by complement-mediated intravascular hemolysis, thrombotic events and alterations in hematopoiesis. Basically, the molecular events underlying the complexity of the syndrome consist of the absence of the glycosylphosphatidylinositol (GPI) anchor as a consequence of somatic mutations in the PIG-A gene, located on the X chromosome. The GPI group is responsible for the attachment of many proteins to the cytoplasmic membrane. Two of them, CD55 and CD59, have a major role in inhibiting the action of complement on the cellular membrane of blood cells. The absence of GPI biosynthesis can lead to PNH. Since mutations in the PIG-A gene are always present in patients with PNH, the aim of this study was to characterize the mutations in the PIG-A gene in Brazilian patients. The analysis of the PIG-A gene was performed using DNA samples derived from bone marrow and peripheral blood. Conformation-sensitive gel electrophoresis was used to screen for mutations, and sequencing was used to identify them. Molecular analysis allowed the identification of three mutations in three patients: a G->A transition in the 5' portion of the second intron, a T->A substitution in the second base of codon 430 (Leu430->stop), and a single-base deletion (ΔA) at the third base of codon 63. This study represents the first description of mutations in the PIG-A gene in a Brazilian population.
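The reported nonsense change (Leu430->stop via a T->A at the codon's second base) can be checked against the standard genetic code: only the leucine codons TTA and TTG yield a stop codon under that substitution. A small script verifying this:

    from itertools import product

    BASES = "TCAG"
    AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    CODON_TABLE = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AMINO)}

    leucine_codons = [c for c, aa in CODON_TABLE.items() if aa == "L"]
    for codon in leucine_codons:  # all six leucine codons have T at base 2
        mutated = codon[0] + "A" + codon[2]  # the reported T->A substitution
        print(codon, "->", mutated, "=", CODON_TABLE[mutated])
    # TTA->TAA(*) and TTG->TAG(*) give a premature stop, consistent with the
    # Leu430->stop call; the CTN leucine codons would give His or Gln
    # instead, so codon 430 must have been TTA or TTG.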

Relevance: 100.00%

Abstract:

The present study focuses on the evaluation of surface water samples collected at 8 points distributed across the Guaíba Hydrographic Region that are influenced by agricultural, urban and/or industrial activity. Four collections were carried out, representative of the four seasons of the year: September 2000, August 2001, February 2002 and May 2003. These samples were evaluated using the Somatic Mutation and Recombination Test (SMART) in Drosophila melanogaster and the Cytokinesis-Block Micronucleus (CBMN) assay in cultured human lymphocytes. At the same time, we sought to correlate the data obtained with the CBMN assay with those observed in SMART, in an attempt to establish the probable mechanism of action of the genotoxins present in the rivers of this large area. The data obtained with these two methodologies characterised the Caí, Jacuí, Taquari, Sinos and Gravataí rivers, Lake Guaíba at Ponta da Cadeia (GPC), and the Dilúvio stream as inducers of genetic toxicity. These findings suggest that, under the experimental conditions applied, the environmental pollutants induce a plurality of lesions in the genetic material of somatic cells, related to gene mutation and recombination (detected by SMART) and/or chromosomal mutation (also diagnosed by CBMN). Taken together, these data show that about 50% of the tested samples (16/32) were genotoxic in one or both tests. Moreover, the largest number of positive responses (4/19) was observed in waters from GPC and the Caí river, followed by the Jacuí and Taquari rivers (3/19), the Sinos river and the Dilúvio stream (2/19), and the Gravataí river (1/19). In fact, the comparative analysis of the results showed that the CBMN assay was more sensitive for detecting genotoxins of environmental origin, since of the 11 samples classified as inducers of clastogenic and aneugenic events in this test, only 3 (Jacuí and Caí, September 2000, as well as GPC, February 2002) were diagnosed as positive in SMART. Nevertheless, the inclusion of SMART in the investigation of environmental samples is justified because this test privileges the detection of a genetic endpoint that is still little considered but plays a crucial role in events related to carcinogenesis: homologous recombination. Thus, the data obtained with the SMART and CBMN assays can serve as a warning about the risk posed by the waters of the Guaíba Hydrographic Region, which compromises the supply of drinking water to more than one million people. In fact, the main environmental impacts on Lake Guaíba are (i) the discharge of domestic sewage from Porto Alegre; (ii) the contaminated waters, mainly of the Gravataí and Sinos rivers, which flow into the lake; (iii) effluents from the food, metallurgy and cellulose industries located on its banks; and (iv) large discharges of untreated urban waste carried by the Dilúvio stream.

Relevance: 100.00%

Abstract:

Two index patients from the family analysed in this study underwent bilateral adrenalectomy due to pheochromocytoma. A genetic study of the patients and of seven first-degree relatives was then carried out. The two patients with pheochromocytoma and two other asymptomatic family members carried the c.496G>T mutation in exon 3 of the VHL gene. The family was lost to medical follow-up. Three years after the genetic evaluation, the patients' sister, a carrier of the mutation, was referred to our service after a pregnancy complicated by pre-eclampsia. She reported paroxysms suggestive of pheochromocytoma, but urinary metanephrines were negative. However, abdominal computed tomography revealed an adrenal mass, which also showed uptake on metaiodobenzylguanidine (MIBG) scintigraphy. This study shows that molecular analysis of the index patient can lead to the identification of asymptomatic relatives carrying the mutation. Moreover, even with negative urinary metanephrines, the identification of a specific mutation led to increased suspicion and to the detection of pheochromocytoma in the patients' sister.

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

Inherited resistance to activated protein C caused by the factor V Leiden (FVL) mutation is the most common genetic cause of venous thrombosis yet described, being found in 20-60% of patients with venous thrombophilia. A relationship between the FVL mutation and an increased predisposition to arterial thrombosis in young women was recently reported. We assessed the prevalence of the FVL mutation in 440 individuals (880 chromosomes) belonging to four different ethnic groups: Caucasians, African Blacks, Asians and Amerindians. PCR amplification followed by MnlI digestion was employed to define the genotype. The FVL mutation was found in a heterozygous state in four out of 152 Whites (2.6%), one out of 151 Amerindians (0.6%), and was absent among 97 African Blacks and 40 Asians. Our results confirm that FVL has a heterogeneous distribution in different human populations, a fact that may contribute to geographic and ethnic differences in the prevalence of thrombotic diseases. In addition, these data may be helpful in decisions regarding the usefulness of screening for the FVL mutation in subjects at risk for thrombosis.
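A small worked check of the reported carrier frequencies, with Wilson score intervals added to make the sample-size limits of the group comparison concrete (the intervals are our addition; the study reports only point prevalences):

    from math import sqrt

    def wilson_ci(k, n, z=1.96):
        # 95% Wilson score interval for a proportion of k carriers out of n.
        p = k / n
        denom = 1 + z * z / n
        centre = (p + z * z / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
        return centre - half, centre + half

    groups = {"Caucasians": (4, 152), "Amerindians": (1, 151),
              "African Blacks": (0, 97), "Asians": (0, 40)}
    for name, (k, n) in groups.items():
        lo, hi = wilson_ci(k, n)
        print(f"{name}: {k}/{n} = {100 * k / n:.1f}% "
              f"(95% CI {100 * lo:.1f}%-{100 * hi:.1f}%)")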