60 results for computing systems design
Abstract:
Due to the huge increase in digital data volumes in recent years, a new parallel computing paradigm has arisen to process big data efficiently. Many of the systems based on this paradigm, also called data-intensive computing systems, follow the Google MapReduce programming model. The main advantage of MapReduce systems is the idea of sending the computation to where the data resides, aiming to provide scalability and efficiency. In failure-free scenarios, these frameworks usually achieve good results. However, most scenarios in which they are deployed are characterized by the presence of failures, so these frameworks incorporate fault tolerance and dependability techniques as built-in features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and the providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same trade-off between fault-tolerance capabilities (or, more generally, reliability capabilities) and cost. This thesis addresses the coexistence of reliability and resource efficiency in MapReduce-based systems, through methodologies that introduce minimal cost while guaranteeing an appropriate level of reliability. To achieve this, we have proposed: (i) a formalization of a failure detector abstraction; (ii) an alternative solution to the single points of failure of these frameworks; and, finally, (iii) a novel feedback-based resource allocation system at the container level.
Our generic contributions have been instantiated for the Hadoop YARN architecture, which is nowadays the state-of-the-art framework in the data-intensive computing systems community. The thesis demonstrates how all our approaches outperform Hadoop YARN in terms of both reliability and resource efficiency.
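The failure detector abstraction proposed in (i) can be illustrated with a minimal heartbeat-based sketch. The class name, timeout value, and node names below are invented for illustration and are not the thesis' actual design; the idea is simply that a node is suspected once its last heartbeat grows older than a timeout:

```python
class HeartbeatFailureDetector:
    """Minimal sketch of a heartbeat-style failure detector: a node is
    suspected once its last heartbeat is older than `timeout` seconds."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, node, now):
        # Record the most recent heartbeat time reported by `node`.
        self.last_seen[node] = now

    def suspected(self, now):
        # Every monitored node whose heartbeat is stale is suspected.
        return {n for n, t in self.last_seen.items() if now - t > self.timeout}


detector = HeartbeatFailureDetector(timeout=5.0)
detector.heartbeat("worker-1", now=0.0)
detector.heartbeat("worker-2", now=6.0)
print(detector.suspected(now=10.0))  # {'worker-1'}
```

A real detector of this kind trades detection latency against false suspicions through the timeout, which is exactly the kind of cost/reliability trade-off the thesis studies.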
Abstract:
In this paper we generalize the Continuous Adversarial Queuing Theory (CAQT) model (Blesa et al. in MFCS, Lecture Notes in Computer Science, vol. 3618, pp. 144–155, 2005) by considering the possibility that the router clocks in the network are not synchronized. We name the new model Non Synchronized CAQT (NSCAQT). Clearly, this extension to the model only affects those scheduling policies that use some form of timing. In a first approach we consider the case in which, although not synchronized, all clocks run at the same speed, maintaining constant differences. In this case we show that all universally stable policies in CAQT that use the injection time and the remaining path to schedule packets remain universally stable. These policies include, for instance, Shortest in System (SIS) and Longest in System (LIS). Then, we study the case in which clock differences can vary over time, but the maximum difference is bounded. In this model we show the universal stability of two families of policies related to SIS and LIS, respectively (the priority of a packet in these policies depends on the arrival time and a function of the path traversed). The bounds we obtain in this case depend on the maximum difference between clocks. This is a necessary requirement, since we also show that LIS is not universally stable in systems without bounded clock difference. We then present a new policy that we call Longest in Queues (LIQ), which gives priority to the packet that has been waiting the longest in edge queues. This policy is universally stable and, if clocks maintain constant differences, the bounds we prove do not depend on them. To finish, we provide simulation results that compare the behavior of some of these policies in a network with stochastic injection of packets.
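As a toy illustration of how these policies differ (a sketch with invented packet fields, not the paper's formal model): SIS and LIS rank packets by injection time, which presumes clock information, while LIQ ranks by locally measured waiting time in edge queues, which is why its bounds need not depend on clock offsets:

```python
# Toy single-queue comparison of the three scheduling policies.
def lis_key(pkt):
    return pkt["injected_at"]      # Longest in System: oldest injection wins

def sis_key(pkt):
    return -pkt["injected_at"]     # Shortest in System: newest injection wins

def liq_key(pkt):
    return -pkt["queue_wait"]      # Longest in Queues: longest local wait wins

def next_packet(queue, key):
    # The packet with the smallest key is scheduled first.
    return min(queue, key=key)

queue = [
    {"id": "a", "injected_at": 1.0, "queue_wait": 0.5},
    {"id": "b", "injected_at": 4.0, "queue_wait": 2.0},
]
print(next_packet(queue, lis_key)["id"])  # a
print(next_packet(queue, sis_key)["id"])  # b
print(next_packet(queue, liq_key)["id"])  # b
```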
Abstract:
This paper proposes a new methodology focused on implementing cost-effective architectures on Cloud Computing systems. With this methodology, the paper presents some disadvantages of systems based on single-Cloud architectures and gives some advice to take into account when developing hybrid systems. The work also includes a validation of these ideas, implemented in a complete videoconference service developed by our research group. This service allows a great number of users per conference, multiple simultaneous conferences, and different client software (requiring transcoding of audio and video flows), and provides services such as automatic recording. Furthermore, it offers different kinds of connectivity, including SIP clients and a client based on Web 2.0. The ideas proposed in this article are intended to be a useful resource for any researcher or developer who wants to implement cost-effective systems on several Clouds.
Abstract:
High flux and high CRI may be achieved by combining different chips and/or phosphors. This, however, results in inhomogeneous sources that, when combined with collimating optics, typically produce patterns with undesired artifacts. These may be a combination of spatial, angular or color non-uniformities. In order to avoid these effects, there is a need to mix the light source, both spatially and angularly. Diffusers can achieve this effect, but they also increase the étendue (and reduce the brightness) of the resulting source, leading to optical systems of increased size and wider emission angles. The shell mixer is an optic comprising many lenses on a shell covering the source. These lenses perform Köhler integration to mix the emitted light, both spatially and angularly. Placed on top of a multi-chip Lambertian light source, it produces a highly homogeneous virtual source (i.e., spatially and angularly mixed), also Lambertian, located in the same position and with essentially the same size (so the étendue is not increased and the average brightness is not reduced). This virtual light source can then be collimated using another optic, resulting in a homogeneous pattern without color separation. Experimental measurements have shown an optical efficiency of the shell of 94%, and a highly homogeneous angular intensity distribution of the collimated beams, in good agreement with ray-tracing simulations.
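The étendue/brightness trade-off above can be stated with the standard definitions (these formulas are textbook optics, not taken from the paper itself). For a Lambertian source of area A emitting into a cone of half-angle θ in a medium of refractive index n, the étendue E and the average brightness B carried by a flux Φ are

```latex
E = \pi\, n^{2} A \sin^{2}\theta, \qquad B = \frac{\Phi}{E}.
```

A diffuser mixes light by enlarging θ at fixed A, so E grows and B falls; the shell mixer instead leaves the virtual source's area and Lambertian angular extent essentially unchanged, so E, and hence B, is preserved.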
Abstract:
Two quasi-aplanatic free-form solid V-groove collimators are presented in this work. Both optical designs are initially created using the Simultaneous Multiple Surface method in three dimensions (SMS 3D), and the second optically active surface of each free-form V-groove device is designed a posteriori as a grooved surface. The first, a two-mirror (XX) design, is included in order to clearly show the design procedure and the working principle of these devices. The second, a free-form RXI design, is comparable with existing RXI collimators: it is a compact and highly efficient design made of polycarbonate (PC) that performs very good colour mixing of RGGB LED sources placed off-axis. Previously presented devices were either rotationally symmetric non-aplanatic collimators with high efficiency but colour mixing to be improved, or rotationally symmetric aplanatic devices with good colour mixing but efficiency to be improved. The aim of this work was to design a free-form device that improves the colour mixing of the rotationally symmetric non-aplanatic RXI devices and the efficiency of the aplanatic ones.
Abstract:
Previous publications (Miñano et al., 2011) have shown that, using a Spherical Geodesic Waveguide (SGW), super-resolution up to λ/500 can be achieved close to a set of discrete frequencies. These frequencies are directly connected with the well-known Schumann resonance frequencies of spherically symmetric systems. However, the SGW has so far been presented as an ideal system, in which technological obstacles, manufacturing feasibility and their influence on the final results were not taken into account. In order to prove the concept of super-resolution experimentally, the SGW is modified here according to manufacturing requirements and technological limitations. Each manufacturing process imposes imperfections which can affect the experimental results. We analyze the influence of these manufacturing limitations on the super-resolution properties of the SGW. Besides the theoretical work, experimental results are presented as well.
Abstract:
In this work, novel imaging designs with a single optical surface (either refractive or reflective) are presented. In some of these designs, both the object and image shapes are given, and the mapping from object to image is obtained as a result of the design. In other designs, not only the mapping but also the shape of the object is found in the design process. In the examples considered, the image is virtual, located at infinity, and seen from a known pupil, which can emulate a human eye. In the first, introductory part, 2D designs are carried out using three different methods: an SMS design, a compound Cartesian-oval surface, and a differential-equation method for the limit case of a small pupil. It is proven that at the point-size pupil limit these three methods coincide. In the second part, the previous 2D designs are extended to 3D by rotation, and the astigmatism of the image is studied. As an advanced variation, the differential-equation method is used to provide the freedom to control the tangential and sagittal rays simultaneously. As a result, designs without astigmatism (at the small-pupil limit) on a curved object surface have been obtained. Finally, this anastigmatic differential-equation method has been extended to 3D for the general case, in which free-form surfaces are designed.
Abstract:
The Negative Refractive Lens (NRL) has shown that an optical system can produce images with details below the classic Abbe diffraction limit. This optical system transmits the electromagnetic fields emitted by an object plane towards an image plane, producing the same field distribution in both planes. In particular, a Dirac-delta electric field in the object plane is focused, without diffraction limit, to a Dirac-delta electric field in the image plane. Two devices with positive refraction, the Maxwell Fish Eye lens (MFE) and the Spherical Geodesic Waveguide (SGW), have been claimed to break the diffraction limit using positive refraction, although with a different meaning. In these cases, what has been considered is the power transmission from a point source to a point receptor, which falls drastically when the receptor is displaced from the focus by a distance much smaller than the wavelength. Although these systems can detect displacements as small as λ/3000, they cannot be compared to the NRL, since the concept of image is different: the SGW deals only with a point source and drain, while in the NRL there is an object and an image surface. Here, an analysis of the SGW with defined object and image surfaces (both conical) is presented, similarly to the NRL case. The results show that a Dirac-delta electric field on the object surface produces an image below the diffraction limit on the image surface.
Abstract:
Aplanatic designs are of great interest in the optics field since they are free from spherical aberration and from linear coma in the axial direction. Nevertheless, no thin aplanatic design based on a lens can be found in the literature to date. This work presents the first aplanatic thin lens (in this case a dome-shaped faceted TIR lens performing light collimation), designed for LED illumination applications. Due to its TIR structure (defined as an anomalous microstructure, as we will see), this device presents good color-mixing properties as well as high optical efficiency, which we show by means of ray-trace simulations.
Abstract:
LEDs are substituting fluorescent and incandescent bulbs as illumination sources due to their low power consumption and long lifetime. Visible Light Communications (VLC) makes use of the LEDs' short switching times to transmit information. Although the LED switching speed is in the Mbps range, higher speeds (hundreds of Mbps) can be reached by using high bandwidth-efficiency modulation techniques. However, the use of these techniques requires a more complex driver, which drastically increases its power consumption. In this work, an energy-efficiency analysis of the different VLC modulation techniques and drivers is presented. In addition, the design of new VLC driver schemes is described.
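The bandwidth-efficiency point can be made concrete with a back-of-the-envelope calculation (the 3 MHz symbol rate and the 16-level scheme below are illustrative numbers, not taken from the paper): an M-ary modulation carries log2(M) bits per symbol, so it outruns simple on-off keying at the cost of a more complex, more power-hungry driver:

```python
import math

def bits_per_second(symbol_rate_hz, levels):
    # An M-level (M-ary) scheme carries log2(M) bits per symbol.
    return symbol_rate_hz * math.log2(levels)

ook = bits_per_second(3e6, 2)    # on-off keying: 1 bit per symbol
m16 = bits_per_second(3e6, 16)   # 16-level scheme: 4 bits per symbol
print(ook, m16)  # 3000000.0 12000000.0
```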
Abstract:
Modern object-oriented languages like C# and Java enable developers to build complex applications in less time. These languages favor heap-allocated, pass-by-reference objects for user-defined data structures. This simplifies programming by automatically managing memory allocation and deallocation in conjunction with automated garbage collection, but the simplification comes at the cost of performance: using pass-by-reference objects instead of lighter-weight pass-by-value structs can have a significant memory impact in some cases. These costs can be critical when applications run in limited-resource environments such as mobile devices and cloud computing systems. We explore the problem using a simple and uniform memory model. In this work we address it by providing an automated and sound static conversion analysis which identifies whether a by-reference type can be safely converted to a by-value type when the conversion may result in performance improvements. The work focuses on C# programs. Our approach is based on a combination of syntactic and semantic checks to identify classes that are safe to convert. We evaluate the effectiveness of our analysis in identifying convertible types and the impact of the transformation. The results show that transforming reference types to value types can have a substantial performance impact in practice: in our case study we optimize the Barnes-Hut program, whose total memory allocation decreased by 93% and whose execution time was reduced by 15%.
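A highly simplified sketch of what a "safe to convert" decision might evaluate. The predicates and the size threshold below are invented for illustration; the paper's actual analysis works syntactically and semantically over C# code rather than over pre-computed flags:

```python
# Hypothetical convertibility check: a reference type is treated as
# convertible to a value type only if it is small, its fields are never
# mutated after construction, and its reference identity is never observed.
def convertible_to_value_type(cls_info, max_size_bytes=32):
    return (cls_info["size_bytes"] <= max_size_bytes
            and not cls_info["mutated_after_construction"]
            and not cls_info["identity_compared"]
            and not cls_info["stored_in_shared_alias"])

point = {"size_bytes": 16, "mutated_after_construction": False,
         "identity_compared": False, "stored_in_shared_alias": False}
node = {"size_bytes": 48, "mutated_after_construction": True,
        "identity_compared": True, "stored_in_shared_alias": True}
print(convertible_to_value_type(point))  # True
print(convertible_to_value_type(node))   # False
```

The point of such checks is soundness: a type is converted only when value semantics provably cannot change the program's observable behavior.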
Abstract:
There is an increasing interest in the intersection of human-computer interaction and public policy. This day-long workshop will examine successes and challenges related to public policy and human-computer interaction, in order to provide a forum to create a baseline of examples and to start the process of writing a white paper on the topic.
Abstract:
Thermal loads produced by environmental actions generate appreciable stresses in massive statically indeterminate structures, such as arch dams. Some investigations point to external temperature variation as the second leading cause of repairs in concrete dams in service, and thermal loads cause cracking in an appreciable number of cases. Dams are unique infrastructures given their dimensions, service life, impact on the territory, and the risk their presence involves. Assessing that risk requires, among other tools, mathematical models that predict structural behavior, and those models must reproduce reality as faithfully as possible. Moreover, in a possible climate-change scenario in which mean temperatures are expected to rise, society needs to know how sensitive infrastructures will behave under future climate scenarios. However, few studies have addressed the determination of the temperature field in concrete dams. In this research, existing thermal calculation models have been improved by incorporating new physical heat-transfer phenomena between the structure and its environment, and new, more efficient methodologies have been proposed to quantify other heat-transfer mechanisms. The methodology has been applied to a case study for which an extensive record of concrete temperatures was available. The quality of the predictions made by the various thermal models has been checked in this pilot case, the results of the models have been compared with one another, and the consequences of the temperature predictions of some of the thermal models on the structural response of the case study have been determined.
The thermal models have also been used to characterize arch dams thermally, studying the effect of certain atmospheric variables and geometric aspects of the dams on their thermal response. In addition, a methodology has been proposed to evaluate the thermal and structural response of infrastructures to the meteorological changes that climate change may induce. The methodology has been applied to a case study, an arch dam, obtaining its future thermal and structural response under several climate scenarios. In view of this possible change in meteorological variables, several adaptation measures are detailed, and a modification of the Spanish dam-design regulations is proposed regarding the calculation of the design temperature distribution. Finally, conclusions are drawn and possible future research lines are suggested: to broaden the knowledge of the temperature distribution inside dams and its consequences on their structural response, to develop new procedures for defining design thermal loads, and to study possible adaptation measures against climate change.
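The core ingredient of such thermal models is the discretized heat-conduction equation. A minimal 1D explicit finite-difference sketch (the material value, grid, and boundary temperatures below are illustrative, not the thesis' calibrated parameters):

```python
def step_temperatures(T, alpha, dx, dt):
    # Explicit FTCS update of the interior nodes; the two boundary nodes
    # are held fixed (Dirichlet) at the air-face and water-face temperatures.
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit"
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i-1] - 2*T[i] + T[i+1])
    return new

T = [30.0] + [15.0] * 9 + [10.0]   # air face at 30 C, water face at 10 C
alpha = 1e-6                        # concrete thermal diffusivity, m^2/s
dx, dt = 0.1, 3600.0                # 0.1 m grid, 1 h time step
for _ in range(24):                 # simulate one day
    T = step_temperatures(T, alpha, dx, dt)
```

After a day the node next to the warm air face has heated above its initial 15 C while the node next to the water face has cooled, reproducing the ingress of the environmental thermal wave into the dam body.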
Abstract:
Una Red de Procesadores Evolutivos o NEP (por sus siglas en ingles), es un modelo computacional inspirado por el modelo evolutivo de las celulas, específicamente por las reglas de multiplicación de las mismas. Esta inspiración hace que el modelo sea una abstracción sintactica de la manipulation de information de las celulas. En particu¬lar, una NEP define una maquina de cómputo teorica capaz de resolver problemas NP completos de manera eficiente en tóerminos de tiempo. En la praóctica, se espera que las NEP simuladas en móaquinas computacionales convencionales puedan resolver prob¬lemas reales complejos (que requieran ser altamente escalables) a cambio de una alta complejidad espacial. En el modelo NEP, las cóelulas estóan representadas por palabras que codifican sus secuencias de ADN. Informalmente, en cualquier momento de cómputo del sistema, su estado evolutivo se describe como un coleccion de palabras, donde cada una de ellas representa una celula. Estos momentos fijos de evolucion se denominan configuraciones. De manera similar al modelo biologico, las palabras (celulas) mutan y se dividen en base a bio-operaciones sencillas, pero solo aquellas palabras aptas (como ocurre de forma parecida en proceso de selection natural) seran conservadas para la siguiente configuracióon. Una NEP como herramienta de computation, define una arquitectura paralela y distribuida de procesamiento simbolico, en otras palabras, una red de procesadores de lenguajes. Desde el momento en que el modelo fue propuesto a la comunidad científica en el año 2001, múltiples variantes se han desarrollado y sus propiedades respecto a la completitud computacional, eficiencia y universalidad han sido ampliamente estudiadas y demostradas. En la actualidad, por tanto, podemos considerar que el modelo teórico NEP se encuentra en el estadio de la madurez. 
La motivación principal de este Proyecto de Fin de Grado, es proponer una aproxi-mación práctica que permita dar un salto del modelo teórico NEP a una implantación real que permita su ejecucion en plataformas computacionales de alto rendimiento, con el fin de solucionar problemas complejos que demanda la sociedad actual. Hasta el momento, las herramientas desarrolladas para la simulation del modelo NEP, si bien correctas y con resultados satisfactorios, normalmente estón atadas a su entorno de ejecucion, ya sea el uso de hardware específico o implementaciones particulares de un problema. En este contexto, el propósito fundamental de este trabajo es el desarrollo de Nepfix, una herramienta generica y extensible para la ejecucion de cualquier algo¬ritmo de un modelo NEP (o alguna de sus variantes), ya sea de forma local, como una aplicación tradicional, o distribuida utilizando los servicios de la nube. Nepfix es una aplicacion software desarrollada durante 7 meses y que actualmente se encuentra en su segunda iteration, una vez abandonada la fase de prototipo. Nepfix ha sido disenada como una aplicacion modular escrita en Java 8 y autocontenida, es decir, no requiere de un entorno de ejecucion específico (cualquier maquina virtual de Java es un contenedor vólido). Nepfix contiene dos componentes o móodulos. El primer móodulo corresponde a la ejecución de una NEP y es por lo tanto, el simulador. Para su desarrollo, se ha tenido en cuenta el estado actual del modelo, es decir, las definiciones de los procesadores y filtros mas comunes que conforman la familia del modelo NEP. Adicionalmente, este componente ofrece flexibilidad en la ejecucion, pudiendo ampliar las capacidades del simulador sin modificar Nepfix, usando para ello un lenguaje de scripting. 
Dentro del desarrollo de este componente, tambióen se ha definido un estóandar de representacióon del modelo NEP basado en el formato JSON y se propone una forma de representation y codificación de las palabras, necesaria para la comunicación entre servidores. Adicional-mente, una característica importante de este componente, es que se puede considerar una aplicacion aislada y por tanto, la estrategia de distribution y ejecución son total-mente independientes. El segundo moódulo, corresponde a la distribucióon de Nepfix en la nube. Este de-sarrollo es el resultado de un proceso de i+D, que tiene una componente científica considerable. Vale la pena resaltar el desarrollo de este modulo no solo por los resul-tados prócticos esperados, sino por el proceso de investigation que se se debe abordar con esta nueva perspectiva para la ejecución de sistemas de computación natural. La principal característica de las aplicaciones que se ejecutan en la nube es que son gestionadas por la plataforma y normalmente se encapsulan en un contenedor. En el caso de Nepfix, este contenedor es una aplicacion Spring que utiliza el protocolo HTTP o AMQP para comunicarse con el resto de instancias. Como valor añadido, Nepfix aborda dos perspectivas de implementation distintas (que han sido desarrolladas en dos iteraciones diferentes) del modelo de distribution y ejecucion, que tienen un impacto muy significativo en las capacidades y restricciones del simulador. En concreto, la primera iteration utiliza un modelo de ejecucion asincrono. En esta perspectiva asincrona, los componentes de la red NEP (procesadores y filtros) son considerados como elementos reactivos a la necesidad de procesar una palabra. 
Esta implementation es una optimization de una topologia comun en el modelo NEP que permite utilizar herramientas de la nube para lograr un escalado transparente (en lo ref¬erente al balance de carga entre procesadores) pero produce efectos no deseados como indeterminacion en el orden de los resultados o imposibilidad de distribuir eficiente-mente redes fuertemente interconectadas. Por otro lado, la segunda iteration corresponde al modelo de ejecucion sincrono. Los elementos de una red NEP siguen un ciclo inicio-computo-sincronizacion hasta que el problema se ha resuelto. Esta perspectiva sincrona representa fielmente al modelo teórico NEP pero el proceso de sincronizacion es costoso y requiere de infraestructura adicional. En concreto, se requiere un servidor de colas de mensajes RabbitMQ. Sin embargo, en esta perspectiva los beneficios para problemas suficientemente grandes superan a los inconvenientes, ya que la distribuciín es inmediata (no hay restricciones), aunque el proceso de escalado no es trivial. En definitiva, el concepto de Nepfix como marco computacional se puede considerar satisfactorio: la tecnología es viable y los primeros resultados confirman que las carac-terísticas que se buscaban originalmente se han conseguido. Muchos frentes quedan abiertos para futuras investigaciones. En este documento se proponen algunas aproxi-maciones a la solucion de los problemas identificados como la recuperacion de errores y la division dinamica de una NEP en diferentes subdominios. Por otra parte, otros prob-lemas, lejos del alcance de este proyecto, quedan abiertos a un futuro desarrollo como por ejemplo, la estandarización de la representación de las palabras y optimizaciones en la ejecucion del modelo síncrono. 
Finalmente, algunos resultados preliminares de este Proyecto de Fin de Grado han sido presentados recientemente en formato de artículo científico en la "International Work-Conference on Artificial Neural Networks (IWANN)-2015" y publicados en "Ad-vances in Computational Intelligence" volumen 9094 de "Lecture Notes in Computer Science" de Springer International Publishing. Lo anterior, es una confirmation de que este trabajo mas que un Proyecto de Fin de Grado, es solo el inicio de un trabajo que puede tener mayor repercusion en la comunidad científica. Abstract Network of Evolutionary Processors -NEP is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactical level. NEP defines theoretical computing devices able to solve NP complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species and their community evolves according to mutations and division which are defined by operations on words. Only those cells are accepted as surviving (correct) ones which are represented by a word in a given set of words, called the genotype space of the species. This feature is analogous with the natural process of evolution. Formally, NEP is based on an architecture for parallel and distributed processing, in other words, a network of language processors. Since the date when NEP was pro¬posed, several extensions and variants have appeared engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP. Specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity. 
The main motivation for this End of Grade project (EOG project in short) is to propose a practical approximation that closes the gap between the theoretical NEP model and a practical implementation on high-performance computing platforms, in order to solve some of the high-complexity problems society faces today. Up until now, the tools developed to simulate NEPs, while correct and successful, are usually tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants either locally, like a traditional application, or in a distributed cloud environment. Nepfix was developed over a seven-month cycle and, with the prototype period behind it, is undergoing its second iteration. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required and it does not rely on a specific execution environment: any JVM is a valid container. Nepfix is made of two components or modules. The first module corresponds to NEP execution and therefore simulation. During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided through Python as a scripting language for custom logic. Along with the simulator, a JSON-based definition language for NEPs has been defined, as well as mechanisms to represent words and their possible manipulations. The NEP simulator is isolated from distribution and, as mentioned before, different applications can include it as a dependency; the distribution of NEPs is an example of this. The second module corresponds to executing Nepfix in the cloud.
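One way to picture how such a simulator moves words between processors is a filter-gated communication step: a word leaves a node only if it passes that node's output filter, and enters a neighbour only if it passes the neighbour's input filter. The sketch below is a hedged illustration; the class and field names are invented and do not reflect Nepfix's actual API or its JSON definition language.

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.function.Predicate;

// Hypothetical sketch of the communication step between two language
// processors, with filters modelled as plain predicates over words.
public class FilterStepSketch {

    static class Processor {
        final Set<String> words = new LinkedHashSet<>();
        final Predicate<String> inputFilter;
        final Predicate<String> outputFilter;

        Processor(Predicate<String> in, Predicate<String> out) {
            this.inputFilter = in;
            this.outputFilter = out;
        }
    }

    // Move every word that both the sender's output filter and the
    // receiver's input filter allow.
    static void communicate(Processor from, Processor to) {
        for (String w : new LinkedHashSet<>(from.words)) {
            if (from.outputFilter.test(w) && to.inputFilter.test(w)) {
                from.words.remove(w);
                to.words.add(w);
            }
        }
    }

    public static void main(String[] args) {
        Processor a = new Processor(w -> true, w -> w.contains("b"));  // emits words containing 'b'
        Processor b = new Processor(w -> w.endsWith("b"), w -> false); // accepts words ending in 'b'
        a.words.add("ab");
        a.words.add("ba");
        a.words.add("aa");
        communicate(a, b);
        System.out.println(a.words + " " + b.words); // [ba, aa] [ab]
    }
}
```

Keeping the filters as pure predicates over words is what lets a simulator like this stay isolated from distribution concerns: the same step works whether the neighbour lives in the same JVM or behind a network protocol.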
The development carried a heavy R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on the feasibility and discovery of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed to solve different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider Nepfix successful as a computational framework: cloud technology is ready for the challenge and the first results reassure us that the properties the Nepfix project pursued have been met. Many investigation branches are left open for future research. In this EOG, implementation guidelines are proposed for some of them, such as error recovery or dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project, such as word representation standardization or NEP model optimizations, were identified during development. As a confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published in the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015.
Development has not stopped since that point and, while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems, and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.