895 results for Genetic Algorithms, Adaptation, Internet Computing
Abstract:
Schmallenberg virus (SBV), an arthropod-borne orthobunyavirus, was first detected in 2011 in cattle suffering from diarrhea and fever. The most severe impact of an SBV infection is the induction of malformations in newborns and abortions. Between 2011 and 2013 SBV spread throughout Europe in an unprecedented epidemic wave. SBV contains a tripartite genome consisting of the three negative-sense RNA segments L, M, and S. The virus is usually isolated from clinical samples by inoculation of KC (insect) or BHK-21 (mammalian) cells, and several virus passages are required to allow adaptation of SBV to cells in vitro. In the present study, the porcine SK-6 cell line was used for isolation and passaging of SBV. SK-6 cells proved to be more sensitive to SBV infection and yielded higher titers more rapidly than BHK-21 cells after just one passage; no adaptation was required. In order to determine the in vivo genetic stability of SBV during the epidemic spread of the virus, the nucleotide sequences of the genomes of seven SBV field isolates collected in summer 2012 in Switzerland were determined and compared to other SBV sequences available in GenBank. A total of 101 mutations, mostly transitions randomly dispersed along the L and M segments, were found when the Swiss isolates were compared to the first SBV isolated in late 2011 in Germany. However, when these mutations were studied in detail, a previously described hypervariable region in the M segment was identified. The S segment was completely conserved among all sequenced SBV isolates. To assess the in vitro genetic stability of SBV, three isolates were passaged 10 times in SK-6 cells and sequenced before and after passaging. Between two and five nucleotide exchanges per genome were found. This low in vitro mutation rate further demonstrates the suitability of SK-6 cells for SBV propagation.
Abstract:
Gene flow is usually thought to reduce genetic divergence and impede local adaptation by homogenising gene pools between populations. However, evidence for local adaptation and phenotypic differentiation in highly mobile species experiencing high levels of gene flow is emerging. Assessing population genetic structure at different spatial scales is thus a crucial step towards understanding the mechanisms underlying intraspecific differentiation and diversification. Here, we studied the population genetic structure of a highly mobile species – the great tit Parus major – at different spatial scales. We analysed 884 individuals from 30 sites across Europe, including 10 close-by sites (< 50 km apart), using 22 microsatellite markers. Overall, we found low but significant genetic differentiation among sites (FST = 0.008). Genetic differentiation was higher, and genetic diversity lower, in south-western Europe. These regional differences were statistically best explained by winter temperature. Overall, our results suggest that great tits form a single patchy metapopulation across Europe, in which genetic differentiation is independent of geographical distance and gene flow may be regulated by environmental factors via movements related to winter severity. This might have important implications for the evolutionary trajectories of sub-populations, especially in the context of climate change, and calls for future investigation of local differences in the costs and benefits of philopatry at large scales.
Abstract:
Genetic anticipation is defined as a decrease in age of onset, or an increase in severity, as a disorder is transmitted through subsequent generations. Anticipation has been noted in the literature for over a century. Recently, anticipation in several diseases, including Huntington's Disease, Myotonic Dystrophy and Fragile X Syndrome, was shown to be caused by expansion of triplet repeats. Anticipation effects have also been observed in numerous mental disorders (e.g. Schizophrenia, Bipolar Disorder), cancers (Li-Fraumeni Syndrome, Leukemia) and other complex diseases. Several statistical methods have been applied to determine whether anticipation is a true phenomenon in a particular disorder, including standard statistical tests and newly developed affected parent/affected child pair methods. These methods have been shown to be inappropriate for assessing anticipation for a variety of reasons, including familial correlation and low power. Therefore, we have developed family-based likelihood modeling approaches to model the underlying transmission of the disease gene and the penetrance function and hence detect anticipation. These methods can be applied in extended families, thus improving the power to detect anticipation compared with existing methods based only upon parents and children. The first method we propose is based on the regressive logistic hazard model; this approach models anticipation by a generational covariate. The second method allows alleles to mutate as they are transmitted from parents to offspring and is appropriate for modeling the known triplet-repeat diseases, in which the disease alleles can become more deleterious as they are transmitted across generations. To evaluate the new methods, we performed extensive simulation studies on data simulated under different conditions to evaluate the effectiveness of the algorithms in detecting genetic anticipation. Analysis by the first method yielded empirical power greater than 87%, based on the 5% type I error critical value identified in each simulation, depending on the method of data generation and the current-age criteria. Analysis by the second method was not possible due to the current formulation of the software. The application of this method to Huntington's Disease and Li-Fraumeni Syndrome data sets revealed evidence for a generation effect in both cases.
Abstract:
The Departamento de Arica in northern Chile was chosen as the site for a study of the role of certain hematologic and glycolytic variables in the physiological and genetic adaptation to hypoxia. The population studied comprised 876 individuals, residents of seven villages at three altitudes: coast (0-500 m), sierra (2,500-3,500 m) and altiplano (> 4,000 m). There were equal numbers of males and females, ranging in age from six to 90 years. Although predominantly Aymara, individuals of mixed or Spanish origin were also examined. The specimens were collected in heparinized vacutainers, precipitated with cold trichloroacetic acid (TCA) and immediately frozen to -196 °C. Six variables were measured. Three were hematologic: hemoglobin, hematocrit and mean cell hemoglobin concentration. The other three were glycolytic: erythrocyte 2,3-diphosphoglycerate (DPG), adenosine triphosphate (ATP) and the percentage of phosphates (DPG + ATP) in the form of DPG. Hemoglobin and hematocrit were measured on site. The DPG and ATP content was assayed in specimens which had been frozen at -196 °C and transported to Houston. Structured interviews on site provided information on lifestyle and family pedigrees. The following results were obtained: (1) The actual village, rather than the altitude, of examination accounted for the greatest proportion of the variance in all variables. On the coast, a large difference in the level of ionic lithium in the drinking water exists. The chemical environment of food and drink is postulated to account, in part, for the importance of geographic location in explaining the observed variance. (2) Measurements of individuals from the two extreme altitudes, coast and altiplano, did not exhibit the same relationship with age and body mass. The hematologic variables were significantly related to both age and body build on the coast; the glycolytic variables were significantly related to age and body mass in the altiplano. (3) The environment modified male values more than female values in all variables, and the two sexes responded quite differently to age and to changes in body mass as well. The question of the differing adaptability of the two sexes is discussed. (4) Environmental factors explained a significantly higher proportion of the total variability in the altiplano than on the coast for hemoglobin, hematocrit and DPG. Most of the ATP variability at both altitudes is explained by genetic factors.
Abstract:
Genetic investigations of eukaryotic plankton have confirmed the existence of modern biogeographic patterns, but analyses of palaeoecological data exploring the temporal variability of these patterns have rarely been presented. Ancient sedimentary DNA has proved suitable for investigating past assemblage turnover in the course of environmental change, but the genetic relatedness of the identified lineages has not yet been examined. Here, we investigate the relatedness of diatom lineages in Siberian lakes along environmental gradients (i.e. across treeline transects), over geographic distance and through time (i.e. the last 7000 years) using modern and ancient sedimentary DNA. Our results indicate that closely related Staurosira lineages occur in similar environments and less related lineages in dissimilar environments, in our case different vegetation and co-varying climatic and limnic variables across treeline transects. Our study thus reveals that environmental conditions, rather than geographic distance, are reflected by diatom-relatedness patterns in space and time. We tentatively speculate that the detected relatedness pattern in Staurosira across the treeline could be a result of adaptation to diverse environmental conditions across the arctic-boreal treeline; however, geographically driven divergence and subsequent repopulation of ecologically different habitats might also explain the observed pattern.
Abstract:
One hypothesis for the success of invasive species is a reduced pathogen burden, resulting from a release from infections or from the high immunological fitness (low immunopathology) of invaders. Despite the strong selection exerted on the host, the evolutionary response of invaders to newly acquired pathogens has rarely been considered. The two independent and genetically distinct invasions of the Pacific oyster Crassostrea gigas into the North Sea represent an ideal model system for studying fast evolutionary responses of invasive populations. By exposing both invasion sources to ubiquitous and phylogenetically diverse pathogens (Vibrio spp.), we demonstrate that within a few generations hosts adapted to sympatric pathogen communities. However, this local adaptation only became apparent in selective environments, i.e. at elevated temperatures reflecting patterns of disease outbreaks in natural populations. Resistance against sympatric and allopatric Vibrio spp. strains was dominantly inherited in crosses between the two invasion sources, resulting in an overall higher resistance of admixed individuals than of pure lines. We therefore suggest that a simple genetic resistance mechanism of the host is matched to a common virulence mechanism shared by local Vibrio strains. This combination might have facilitated a fast evolutionary response that can explain another dimension of why invasive species can be so successful in newly invaded ranges.
Application of the Extended Kalman filter to fuzzy modeling: Algorithms and practical implementation
Abstract:
The modeling phase is fundamental both in the analysis of a dynamic system and in the design of a control system. This phase becomes even more critical when it must be carried out in-line and the only information about the system comes from input/output data. This paper presents adaptation algorithms for fuzzy systems based on the extended Kalman filter, which make it possible to obtain accurate models without giving up the computational efficiency that characterizes the Kalman filter and which allow their implementation in-line with the process.
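The abstract above does not reproduce the algorithms themselves; as a rough, hypothetical sketch of the general idea, the following Python fragment adapts the consequent parameters of a zero-order Takagi-Sugeno fuzzy model with a Kalman-filter-style update from streaming input/output data. The model structure, membership functions and noise settings are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

class FuzzyEKF:
    """Zero-order TSK fuzzy model with Kalman-style on-line parameter adaptation."""

    def __init__(self, centres, widths, q=1e-6, r=1e-2):
        self.c = np.asarray(centres, float)   # rule centres (assumed fixed)
        self.s = np.asarray(widths, float)    # rule widths (assumed fixed)
        n = len(self.c)
        self.theta = np.zeros(n)              # consequent parameters to adapt
        self.P = np.eye(n) * 1e3              # parameter covariance
        self.Q = np.eye(n) * q                # process noise covariance
        self.R = r                            # measurement noise variance

    def _phi(self, x):
        w = np.exp(-0.5 * ((x - self.c) / self.s) ** 2)  # rule firing strengths
        return w / w.sum()                               # normalised

    def predict(self, x):
        return self._phi(x) @ self.theta

    def update(self, x, y):
        h = self._phi(x)                       # output Jacobian w.r.t. theta
        self.P += self.Q
        k = self.P @ h / (h @ self.P @ h + self.R)   # Kalman gain
        self.theta += k * (y - h @ self.theta)       # innovation correction
        self.P -= np.outer(k, h) @ self.P
        return self.predict(x)

# Usage: identify y = sin(x) from streaming input/output samples.
model = FuzzyEKF(centres=np.linspace(-3, 3, 9), widths=np.full(9, 0.8))
for x in np.random.uniform(-3, 3, 2000):
    model.update(x, np.sin(x))
print(model.predict(1.0), np.sin(1.0))
```

Because the output is linear in the consequent parameters, the extended Kalman filter reduces here to an ordinary Kalman/recursive least-squares update, which keeps the per-sample cost low, in the spirit of the efficiency argument made in the abstract.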
Abstract:
A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase, with an evolutionary algorithm responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core aimed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after profiling a high-level description of the algorithm, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system on the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can easily be extended to adapt other types of image processing cores. Details of this pluggable-component point of view are also given in the paper.
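As a software-only, hypothetical illustration of evolving transform coefficients (it does not model the FPGA core, the bitstream handling or the HW/SW partitioning discussed above), the sketch below uses a (1 + λ) Evolution Strategy to tune the two weights of a lifting-style predict step so that the detail coefficients of synthetic calibration rows become small, a common proxy for compressibility.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_step(row, coeffs):
    """Lifting-style predict step: details are the prediction error of the
    odd samples from a weighted pair of neighbouring even samples."""
    even, odd = row[0::2], row[1::2]
    a, b = coeffs
    return odd - (a * even + b * np.roll(even, -1))

def cost(coeffs, rows):
    # smaller detail magnitude ~ more compressible representation
    return sum(np.abs(predict_step(r, coeffs)).sum() for r in rows)

# calibration data: smooth synthetic rows standing in for image lines
rows = [np.cumsum(rng.normal(size=256)) for _ in range(16)]

# (1 + lambda) Evolution Strategy with a crude 1/5th-rule step-size adaptation
parent, sigma, lam = np.zeros(2), 0.5, 8
best = cost(parent, rows)
for gen in range(200):
    children = [parent + sigma * rng.normal(size=2) for _ in range(lam)]
    costs = [cost(c, rows) for c in children]
    i = int(np.argmin(costs))
    if costs[i] < best:                 # elitist replacement of the parent
        parent, best = children[i], costs[i]
        sigma *= 1.2
    else:
        sigma *= 0.85
print("evolved predict coefficients:", np.round(parent, 3), "cost:", round(best, 1))
```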
Abstract:
This paper addresses the modelling and validation of an evolvable hardware architecture that can be mapped onto a 2D systolic structure implemented on commercial reconfigurable FPGAs. The adaptation capabilities of the architecture are exercised to validate its evolvability. The underlying proposal is the use of a library of reconfigurable components, characterised by their partial bitstreams, which are used by the Evolutionary Algorithm to find a solution to a given task. Evolution of image noise filters is selected as the proof-of-concept application. Results show that the computation speed of the resulting evolved circuit is higher than with the Virtual Reconfigurable Circuits approach, and this can be exploited in the evolution process by using dynamic reconfiguration.
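The paper's circuits are evolved from partial bitstreams on the FPGA itself; the fragment below is only a software analogy of the same search, in which a small library of window operators stands in for the reconfigurable components, a genome is a chain of library indices, and a simple evolutionary loop looks for the chain that best removes salt-and-pepper noise from a toy image. All operators, image sizes and parameters are assumptions made for the example.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(1)

def windows(img):
    # 3x3 neighbourhood of every pixel, flattened to the last axis
    padded = np.pad(img, 1, mode="edge")
    return sliding_window_view(padded, (3, 3)).reshape(*img.shape, 9)

# software stand-ins for a library of reconfigurable filtering components
LIBRARY = [
    lambda w: np.median(w, axis=-1),
    lambda w: w.mean(axis=-1),
    lambda w: w.min(axis=-1),
    lambda w: w.max(axis=-1),
    lambda w: w[..., 4],              # identity (centre pixel)
]

def apply_chain(img, genome):
    for gene in genome:               # each gene selects one library component
        img = LIBRARY[gene](windows(img))
    return img

def fitness(genome, noisy, clean):
    return np.abs(apply_chain(noisy, genome) - clean).mean()

# toy images: a smooth gradient corrupted with salt-and-pepper noise
clean = np.add.outer(np.linspace(0, 255, 64), np.linspace(0, 255, 64)) / 2
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.05
noisy[mask] = rng.choice([0.0, 255.0], mask.sum())

# simple evolutionary loop over chains of 3 library components
pop = [rng.integers(0, len(LIBRARY), 3) for _ in range(20)]
for gen in range(30):
    pop.sort(key=lambda g: fitness(g, noisy, clean))
    elite = pop[:5]
    pop = elite + [np.where(rng.random(3) < 0.3,                 # mutate genes
                            rng.integers(0, len(LIBRARY), 3),
                            elite[rng.integers(0, 5)]) for _ in range(15)]
best = min(pop, key=lambda g: fitness(g, noisy, clean))
print("best chain:", best, "error:", fitness(best, noisy, clean))
```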
Abstract:
After the extraordinary spread of the World Wide Web during the last fifteen years, engineers and developers are now pushing the Internet towards its next frontier. A new conception in computer science and network communication has been burgeoning during roughly the last decade: a world where most computers of the future will be extremely downsized, to the point that the most advanced prototypes will look like dust. In this vision, every single element of our "real" world carries an intelligent tag holding its relevant data, effectively mapping the "real" world onto a "virtual" one in which all the electronically augmented objects are present, can interact among themselves and can influence with their behaviour that of other objects, or even the behaviour of a final human user. This is the vision of the Internet of the Future, which also draws on ideas from several novel trends in computer science and networking, such as pervasive computing and the Internet of Things. As has happened before, materializing a new paradigm that changes the way entities interrelate in this new environment has proved to be a goal full of challenges along the way. Right now the situation is exciting, with a plethora of new developments, proposals and models sprouting all the time, often in an uncoordinated, decentralised manner away from any standardization, somewhat resembling the status quo of the first developments of advanced computer networking back in the 60s and 70s. Usually, a system designed after the Internet of the Future will consist of one or several final-user devices attached to those users, a network (often a Wireless Sensor Network) charged with the task of collecting data for the final-user devices, and sometimes a base station sending the data for further processing to less hardware-constrained computers. When implementing a system designed with the Internet of the Future as a pattern, the issues, and more specifically the limitations, that must be faced are numerous: lack of standards for platforms and protocols, processing bottlenecks, low battery lifetime, etc. One of the main objectives of this project is to present a functional model of how a system based on the paradigms linked to the Internet of the Future works, overcoming some of the difficulties that can be expected and showing a model of a middleware architecture specifically designed for a pervasive, ubiquitous system. This Final Degree Dissertation is divided into several parts. Beginning with an introduction to the main topics and concepts of this new model, a state of the art is offered so as to provide a technological background. After that, an example of a semantic and service-oriented middleware is shown; later, a system built by means of this semantic and service-oriented middleware and other components is developed, justifying its placement in a particular scenario, describing it and analysing the data obtained from it. Finally, the conclusions inferred from this system and future work worth tackling are presented as well.
Abstract:
This work consists of the preparation of a research project aimed at studying the Internet of Things and the risks it poses to privacy. In recent years many projects have been launched and great technological advances have been made in order to make the Internet of Things a reality; however, critical issues such as security and privacy are not yet completely solved. The purpose of this Master's Thesis is to carry out an in-depth analysis of the Future Internet, extending the knowledge acquired during the Master's programme, studying step by step the foundations on which the Internet of Things rests, and reflecting on the challenges it faces and the effect its deployment may have on privacy. The work consists of 14 chapters structured in four parts. The first part is an introduction that explains the concepts of the Internet of Things and ubiquitous computing as a preamble to the following parts. In the second part, the technological and standardisation aspects of this new network are analysed. The third part presents the main research projects currently under way and the different application areas of the Future Internet. Finally, the fourth part contains an analysis of the concept of privacy and, through various application scenarios, a study of the risks that the deployment of the Internet of Things may pose to privacy.
Abstract:
Performance studies of actual parallel systems usually tend to concentrate on the effectiveness of a given implementation. This is often done in the absolute, without quantitative reference to the potential parallelism contained in the programs from the point of view of the execution paradigm. We feel that studying the parallelism inherent to the programs is interesting, as it gives information about the best possible behavior of any implementation and thus allows contrasting the results obtained. We propose a method for obtaining ideal speedups for programs through a combination of sequential or parallel execution and simulation, together with the algorithms that allow implementing the method. Our approach is novel and, we argue, more accurate than previously proposed methods, in that a crucial part of the data, the execution times of tasks, is obtained from actual executions, while speedup is computed by simulation. This allows obtaining speedup (and other) data under controlled and ideal assumptions regarding issues such as the number of processors, the scheduling algorithm, overheads, etc. The results obtained can be used, for example, to evaluate the ideal parallelism that a program contains for a given model of execution and to compare such "perfect" parallelism to that obtained by a given implementation of that model. We also present a tool, IDRA, which implements the proposed method, along with results obtained with IDRA for benchmark programs, which are then compared with those obtained in actual executions on real parallel systems.
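The essential step, taking task durations as given (as if measured from a real execution) and computing speedup by simulation under ideal assumptions, can be pictured with the minimal event-driven scheduler below. It is only a sketch of that idea, not IDRA, and the task graph and durations are invented for the example.

```python
import heapq

def ideal_speedup(durations, deps, processors):
    """durations: {task: time}; deps: {task: set of prerequisite tasks}."""
    remaining = {t: len(deps.get(t, ())) for t in durations}
    ready = [t for t, n in remaining.items() if n == 0]
    events, clock, free = [], 0.0, processors
    while ready or events:
        while ready and free > 0:                 # greedy list scheduling, no overheads
            t = ready.pop()
            heapq.heappush(events, (clock + durations[t], t))
            free -= 1
        clock, done = heapq.heappop(events)       # advance to the next completion
        free += 1
        for t in durations:                       # release dependents of the finished task
            if done in deps.get(t, set()):
                remaining[t] -= 1
                if remaining[t] == 0:
                    ready.append(t)
    return sum(durations.values()) / clock        # sequential time / parallel makespan

# hypothetical task graph with measured durations (arbitrary time units)
durations = {"a": 4.0, "b": 3.0, "c": 3.0, "d": 2.0}
deps = {"d": {"b", "c"}}            # task d waits for b and c; a is independent
for p in (1, 2, 4):
    print(p, "processors -> ideal speedup", round(ideal_speedup(durations, deps, p), 2))
```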
Abstract:
Several activities in service-oriented computing, such as automatic composition, monitoring, and adaptation, can benefit from knowing properties of a given service composition before executing it. Among these properties we focus on those related to execution cost and resource usage, in a wide sense, as they can be linked to QoS characteristics. In order to attain more accuracy, we formulate execution costs / resource usage as functions of input data (or appropriate abstractions thereof) and show how these functions can be used to make better, more informed decisions when performing composition, adaptation, and proactive monitoring. We present an approach to, on one hand, synthesizing these functions automatically from the definition of the different orchestrations taking part in a system and, on the other hand, effectively using them to reduce the overall costs of non-trivial service-based systems featuring sensitivity to data and the possibility of failure. We validate our approach by means of simulations of scenarios needing runtime selection of services and adaptation due to service failure. A number of rebinding strategies, including the use of cost functions, are compared.
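As a hypothetical illustration of using cost functions of (an abstraction of) the input data at rebinding time, the fragment below gives each candidate service a cost function of the input size together with a failure probability, and selects the candidate with the lowest expected cost; the services, cost models and penalty are all assumptions, not the paper's synthesis machinery.

```python
CANDIDATES = {
    # name: (cost as a function of input size n, probability of failure)
    "bulk_service":   (lambda n: 50 + 0.2 * n, 0.10),
    "stream_service": (lambda n: 5 + 1.5 * n,  0.02),
}

def expected_cost(cost_fn, p_fail, n, retry_penalty=20.0):
    # one execution plus, with probability p_fail, a retry and a fixed penalty
    base = cost_fn(n)
    return base + p_fail * (base + retry_penalty)

def rebind(n):
    """Select the service with the lowest expected cost for this input size."""
    return min(CANDIDATES, key=lambda s: expected_cost(*CANDIDATES[s], n))

for n in (10, 100, 1000):
    choice = rebind(n)
    print(f"n={n:5d} -> {choice:15s} "
          f"expected cost {expected_cost(*CANDIDATES[choice], n):.1f}")
```

The point of the sketch is only that the preferred binding changes with the input abstraction: the cheap-per-item service wins for small inputs, while the one with a high fixed cost but low marginal cost wins for large ones.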
Abstract:
Natural Computing has emerged as an alternative to classical computation for problems that cannot be solved efficiently in polynomial time with respect to the input size. This discipline either uses nature itself as a computing substrate or simulates its behaviour in order to obtain better solutions than those found by classical computation. Within Natural Computing, and as a representation at the cellular level, Membrane Computing arises. The first abstraction of the membranes found in cells results in Transition P systems. These systems, which could be implemented on biological or electronic media, are the object of study of this thesis. First, the implementations developed so far are surveyed in order to focus on distributed implementations, which are the ones able to exploit the intrinsic parallelism and non-determinism of these systems. After a thorough study of the current state of the stages involved in the evolution of the system, it is concluded that the distributions that seek a balance between the two stages (rule application and communication) give the best results. To define these distributions, the system, and every element that influences its transition, must be completely defined. In addition to the work of other researchers, and building on it, variations of the proxies and of the distribution architectures are introduced so that the dynamic behaviour of Transition P systems is completely defined. Starting from the static knowledge of the P system (its initial configuration), membranes can be distributed among the processors of a cluster to obtain good evolution times, so that the computation of the P system is carried out in the shortest possible time. These distributions must take the architecture, that is, the way the processors of the cluster are connected, into account. The existence of four architectures makes the distribution process dependent on the chosen architecture, and therefore, despite significant similarities, the distribution algorithms must be implemented four times. Although the proponents of these architectures have studied the optimal time of each one, the absence of distributions for them has led this thesis to evaluate all four, in order to determine whether practice matches the theoretical studies.
No deterministic algorithm exists that produces a distribution satisfying the requirements of the architecture for an arbitrary P system. Given the complexity of the problem, the use of Natural Computing metaheuristics is therefore proposed. Genetic Algorithms are proposed first: since some distribution can always be produced, and based on the premise that individuals improve with evolution, the evolved distributions also improve, reaching times close to the theoretical optimum. For the architectures that preserve the tree topology of the P system, new representations and new crossover and mutation operators had to be devised.
A more detailed study of the membranes and of the communication between processors showed that the total times used for the distribution can be improved and individualised for each membrane. The same algorithms were therefore tested again, obtaining further distributions that improve the times. Likewise, Particle Swarm Optimisation and Grammatical Evolution with grammar rewriting (a variant of Grammatical Evolution introduced in this thesis) are applied to the same task, yielding other kinds of distributions and allowing a comparison of the architectures. Finally, the use of estimators for the application and communication times, and the changes in the membrane tree topology that may occur non-deterministically as the P system evolves, make it necessary to monitor the system and, when required, to redistribute membranes among processors in order to keep obtaining reasonable evolution times. How, when and where these modifications and redistributions must be carried out, and how this recalculation can be performed, is explained.
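As a toy illustration of this kind of search (the membrane tree, timings and cost model below are assumptions, and none of the four architectures is modelled), a genetic algorithm can assign membranes to processors while scoring each assignment by the largest per-processor rule-application load plus a fixed cost for every parent-child pair split across processors.

```python
import random
random.seed(0)

PARENT = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2, 7: 3}          # membrane tree (assumed)
APPLY  = {0: 4, 1: 6, 2: 5, 3: 3, 4: 2, 5: 4, 6: 1, 7: 2}    # rule-application time per membrane
MEMBRANES, PROCESSORS, COMM_COST = sorted(APPLY), 3, 2.0

def step_time(assign):
    """Estimated evolution-step time of one membrane-to-processor assignment."""
    load = [0.0] * PROCESSORS
    for m, p in zip(MEMBRANES, assign):
        load[p] += APPLY[m]
    # every parent-child pair split across processors pays one communication
    cut = sum(1 for m, par in PARENT.items() if assign[m] != assign[par])
    return max(load) + COMM_COST * cut

def mutate(assign):
    child = list(assign)
    child[random.randrange(len(child))] = random.randrange(PROCESSORS)
    return child

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randrange(PROCESSORS) for _ in MEMBRANES] for _ in range(30)]
for gen in range(200):
    pop.sort(key=step_time)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]
best = min(pop, key=step_time)
print("assignment:", dict(zip(MEMBRANES, best)), "estimated step time:", step_time(best))
```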
Abstract:
Opportunities offered by high-performance computing provide a significant degree of promise for enhancing the performance of real-time flood forecasting systems. In this paper, a real-time framework for probabilistic flood forecasting through data assimilation is presented. The distributed rainfall-runoff real-time interactive basin simulator (RIBS) model is selected to simulate the hydrological process in the basin. Although the RIBS model is deterministic, it is run in a probabilistic way using the results of a calibration developed in previous work by the authors, which identifies the probability distribution functions that best characterise the most relevant model parameters. Adaptive techniques improve flood forecasts because the model can be adapted to observations in real time as new information becomes available. The new adaptive forecast model, based on genetic programming as a data assimilation technique, is compared with the previously developed flood forecast model based on the calibration results. Both models are probabilistic in that they generate an ensemble of hydrographs, taking into account the different uncertainties inherent in any forecast process. The Manzanares River basin was selected as a case study; the process is computationally intensive as it requires the simulation of many replicas of the ensemble in real time.
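The flavour of the adaptation step, adjusting the forecast with the most recent observations, can be sketched with a deliberately tiny expression-tree genetic program; the toy hydrograph, observation window and operator set below are invented for the example and bear no relation to the RIBS model or the Manzanares data.

```python
import random, statistics
random.seed(42)

# A tiny GP evolves a correction q_corr = f(q_model, q_obs_prev) that reduces
# the error of the most recent forecasts (hypothetical variables and data).
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b if abs(b) > 1e-6 else a}
TERMS = ["q_model", "q_obs_prev", 0.5, 1.0, 2.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, env), evaluate(right, env))
    return env.get(tree, tree) if isinstance(tree, str) else tree

def fitness(tree, window):
    return statistics.mean((evaluate(tree, env) - obs) ** 2 for env, obs in window)

def mutate(tree):
    # replace the whole tree or, recursively, its left branch
    return random_tree() if random.random() < 0.3 or not isinstance(tree, tuple) \
        else (tree[0], mutate(tree[1]), tree[2])

# synthetic "recent past": the model underestimates observed discharge by ~20%
window = []
for t in range(20):
    q_model = 10 + 5 * t
    q_obs = 1.2 * q_model + random.gauss(0, 1)
    window.append(({"q_model": q_model, "q_obs_prev": q_obs - 2}, q_obs))

pop = [random_tree() for _ in range(60)]
for gen in range(40):
    pop.sort(key=lambda tr: fitness(tr, window))
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(40)]
best = min(pop, key=lambda tr: fitness(tr, window))
print("correction RMSE on the assimilation window:", fitness(best, window) ** 0.5)
```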