999 results for scripting language


Relevance: 100.00%

Abstract:

This project examines the currently available work on explicit and implicit parallelization of the R scripting language and reports experimental findings toward a model that predicts, from input data size and function complexity, the points at which automatic parallelization becomes effective. After finding or creating a series of custom benchmarks, we identified an interval of data size and time complexity in which replacement by a parallel implementation becomes viable; specifically, between O(N) and O(N³), exclusive. As data size increases, the benefits of parallel processing become more apparent, and a point is reached where those benefits outweigh the cost of memory transfer time. Based on our observations, this point can be predicted with fair accuracy using regression on a sample of approximately ten data sizes spread evenly between a system-determined minimum and maximum size.
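A minimal sketch of the procedure described above is given below, in Python rather than R, and with `serial_f` and `parallel_f` as hypothetical stand-ins for the benchmarked functions: time serial and parallel variants at roughly ten evenly spaced data sizes, then fit a regression to the timing gap to estimate the break-even size.

```python
# Sketch of the crossover-prediction idea (Python stand-in for the R workflow;
# serial_f / parallel_f are hypothetical benchmark functions).
import time
import numpy as np

def measure(fn, data):
    t0 = time.perf_counter()
    fn(data)
    return time.perf_counter() - t0

def predict_crossover(serial_f, parallel_f, min_n, max_n, samples=10):
    """Fit a regression to timing differences over ~10 evenly spaced
    data sizes and return the size where parallel becomes faster."""
    sizes = np.linspace(min_n, max_n, samples, dtype=int)
    gap = []
    for n in sizes:
        data = np.random.rand(n)
        # positive gap => parallel is cheaper (including transfer overhead)
        gap.append(measure(serial_f, data) - measure(parallel_f, data))
    # quadratic fit of gap(n); its root estimates the break-even size
    coeffs = np.polyfit(sizes, gap, 2)
    roots = np.roots(coeffs)
    real = roots[np.isreal(roots)].real
    candidates = real[(real >= min_n) & (real <= max_n)]
    return int(candidates.min()) if candidates.size else None
```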

Relevance: 70.00%

Abstract:

We present a type system, StaXML, which employs a stacked type syntax to represent essential aspects of the potential roles of XML fragments in the structure of complete XML documents. The simplest application of this system is to enforce well-formedness during the construction of XML documents without requiring the use of templates or balanced "gap plugging" operators; this allows it to be applied to programs written according to common imperative web-scripting idioms, particularly the echoing of unbalanced XML fragments to an output buffer. The system can be extended to verify particular XML applications such as XHTML and to identify individual XML tags constructed from their lexical components. We also present StaXML for PHP, a prototype precompiler for the PHP4 scripting language which infers StaXML types for expressions without assistance from the programmer.
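The property the type system enforces statically, that individually unbalanced echoed fragments must compose into a balanced document, can be illustrated with a small dynamic check. This is only an illustrative Python sketch, not StaXML's type inference, and the tag matching is simplified (no attributes or self-closing tags).

```python
import re

# Illustrative stack-based check of the property StaXML verifies statically:
# each fragment may be unbalanced, but the echoed sequence must nest properly.
TAG = re.compile(r"<(/?)([A-Za-z][\w-]*)[^>]*>")

def echo_fragments(fragments):
    stack, output = [], []
    for frag in fragments:
        for close, name in TAG.findall(frag):
            if not close:
                stack.append(name)            # opening tag pushes
            elif stack and stack[-1] == name:
                stack.pop()                   # matching close pops
            else:
                raise ValueError(f"misnested </{name}>")
        output.append(frag)
    if stack:
        raise ValueError(f"unclosed tags: {stack}")
    return "".join(output)

# Unbalanced pieces that together form a well-formed document.
print(echo_fragments(["<html><body>", "<p>hello</p>", "</body></html>"]))
```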

Relevance: 70.00%

Abstract:

Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) has been developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, meaning that applying any operation to a random structure results in an output isomorphic to one or more random structures; this property is key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language built on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The notion of a labelled partial order (LPO) is the basic data type of the language, and the programmer uses built-in MOQA operations together with restricted control-flow statements to design MOQA programs. This MOQA language is formally specified, both syntactically and semantically, in this thesis, and a practical language interpreter implementation is provided and discussed. By analysing new algorithms and data-restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of domains beyond average-case analysis, showing strong connections between MOQA and parallel computing, reversible computing and data-entropy analysis.

Relevance: 60.00%

Abstract:

VMSCRIPT is a scripting language designed to allow small programs to be compiled for a range of generated tiny virtual machines suitable for sensor-network devices. The VMSCRIPT compiler is an optimising compiler designed for quick re-targeting, based on a template-driven code-rewriting model: a compiler backend can be specified at the same time as a virtual machine, with the compiler reading the specification and using it as a code generator.
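As a rough, purely illustrative sketch of the template-driven retargeting idea (the abstract does not show VMSCRIPT's specification format; the machine description and IR below are invented), a backend can be thought of as a table of instruction templates, so retargeting means swapping the table rather than rewriting the compiler.

```python
# Hypothetical sketch: a backend is a table of instruction templates.
TINY_VM = {
    "add": "ADD r{dst}, r{a}, r{b}",
    "load_const": "LDI r{dst}, {value}",
}

def emit(ir, templates):
    """Rewrite a tiny three-address IR into VM code via templates."""
    return [templates[op].format(**args) for op, args in ir]

ir = [
    ("load_const", {"dst": 0, "value": 2}),
    ("load_const", {"dst": 1, "value": 3}),
    ("add", {"dst": 2, "a": 0, "b": 1}),
]
print("\n".join(emit(ir, TINY_VM)))
```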

Relevance: 60.00%

Abstract:

Interactive documents for use with the World Wide Web have been developed for viewing multi-dimensional radiographic and visual images of human anatomy derived from the Visible Human Project. Emphasis has been placed on user-controlled features and selections. The purpose was to develop an interface, independent of the host operating system and browser software, that would allow information to be viewed by multiple users. The interfaces were implemented using HyperText Markup Language (HTML) forms, the C programming language and the Perl scripting language. Images were pre-processed using ANALYZE and stored on a Web server in CompuServe GIF format. Viewing options, such as interactive thresholding and choice of two-dimensional slice direction, were included in the document design. The interface is an example of what may be achieved using the World Wide Web. Key applications envisaged for such software include education, research, access to information in internal databases, and the simultaneous sharing of images between remote computers by health personnel for diagnostic purposes.

Relevance: 60.00%

Abstract:

In previous papers, we have presented a logic-based framework based on fusion rules for merging structured news reports. Structured news reports are XML documents in which the text entries are restricted to individual words or simple phrases (such as names and domain-specific terminology), numbers and units; we assume structured news reports do not require natural language processing. Fusion rules are a form of scripting language that defines how structured news reports should be merged. The antecedent of a fusion rule is a call to investigate the information in the structured news reports and the background knowledge, and the consequent of a fusion rule is a formula specifying an action to be undertaken to form a merged report. It is expected that a set of fusion rules is defined for any given application. In this paper we extend the approach to handle probability values, degrees of belief, or necessity measures associated with text entries in the news reports. We present the formal definition for each of these types of uncertainty and explain how they can be handled using fusion rules. We also discuss methods for detecting inconsistencies among sources.
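A minimal sketch of the antecedent/consequent structure of a fusion rule is shown below, using Python dictionaries in place of XML structured news reports; the rule, report fields and background knowledge are invented for illustration and are not from the papers cited.

```python
# Toy fusion rule: the antecedent inspects the reports and background
# knowledge, the consequent says how to build the merged entry.
reports = [
    {"source": "A", "city": "London", "temperature": 18},
    {"source": "B", "city": "London", "temperature": 20},
]
background = {"trusted_sources": {"A", "B"}}

def antecedent(reports, background):
    # Fire if all reports agree on the city and come from trusted sources.
    return (len({r["city"] for r in reports}) == 1
            and all(r["source"] in background["trusted_sources"] for r in reports))

def consequent(reports):
    # Action: keep the shared city, average the conflicting numeric entry.
    temps = [r["temperature"] for r in reports]
    return {"city": reports[0]["city"], "temperature": sum(temps) / len(temps)}

merged = consequent(reports) if antecedent(reports, background) else None
print(merged)   # {'city': 'London', 'temperature': 19.0}
```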

Relevance: 60.00%

Abstract:

Report on supervised teaching practice, Master's degree in Teaching of Informatics, Universidade de Lisboa, 2014.

Relevance: 60.00%

Abstract:

Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land-use patterns. An essential methodology for studying and quantifying such interactions is the use of land-use models, which make it possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of driving forces. Modeling land use and land-use change has a long tradition; on the regional scale in particular, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other by new methods in software development, e.g. object- and component-oriented architectures.

In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation, which enable efficient development, testing and use of integrated land-use models. On the system side, SITE provides generic data structures (grids, grid cells, attributes etc.) and takes over responsibility for their administration. By means of a scripting language (Python), extended with language features specific to land-use modeling, these data structures can be accessed and manipulated by modeling applications; the scripting-language interpreter is embedded in SITE. Sub-models can be integrated via the scripting language or through a generic interface provided by SITE. Furthermore, functionality important for land-use modeling, such as model calibration, model testing and the analysis of simulation results, has been integrated into the generic framework. During the implementation of SITE, specific emphasis was placed on expandability, maintainability and usability.

Along with the modeling framework, a land-use model for analyzing the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, the socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics over the historical period 1981 to 2002, and, analogously, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace-gas emissions, the DAYCENT agro-ecosystem model was integrated. This case study showed that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest; for this reason, the situation had to be interpreted as unsustainable, even though increased agricultural use implied economic improvements and higher farmers' incomes.

Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map-comparison algorithm capable of comparing a simulation result to a reference map; several map-optimization and map-comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map-comparison measure as the objective function. The calibration period ranged from 1981 to 2002, and the corresponding reference land-use maps were compiled. It could be shown that efficient automated model calibration with SITE is possible; nevertheless, the selection of the calibration parameters required detailed knowledge of the underlying land-use model and cannot be automated.

In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously, resulting in a reduction of coffee yields of up to 18% and a loss of net revenue per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
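A condensed sketch of the calibration loop described above: a genetic algorithm searches the model parameters, and a figure-of-merit comparison between the simulated and reference land-use maps serves as the objective function. All names (`run_model`, the maps, the parameter bounds) are placeholders, not the SITE API, and the GA is reduced to selection plus mutation for brevity.

```python
# Placeholder sketch of GA-based calibration against a figure-of-merit
# objective; run_model(), the maps and the parameter bounds are hypothetical.
import numpy as np
rng = np.random.default_rng(0)

def figure_of_merit(initial, reference, simulated):
    """hits / (hits + misses + false alarms), computed on changed cells."""
    obs_change = reference != initial
    sim_change = simulated != initial
    hits = np.sum(obs_change & sim_change & (reference == simulated))
    misses = np.sum(obs_change & ~sim_change)
    false_alarms = np.sum(sim_change & ~obs_change)
    return hits / max(hits + misses + false_alarms, 1)

def calibrate(run_model, initial, reference, bounds, pop=20, generations=50):
    """Tiny genetic algorithm (selection + mutation) over real parameters."""
    lo, hi = np.array(bounds).T
    population = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(generations):
        scores = np.array([figure_of_merit(initial, reference,
                                           run_model(p, initial))
                           for p in population])
        parents = population[np.argsort(scores)[-pop // 2:]]        # selection
        children = parents[rng.integers(0, len(parents), pop - len(parents))]
        children = children + rng.normal(0, 0.05, children.shape) * (hi - lo)
        population = np.clip(np.vstack([parents, children]), lo, hi)
    return max(population, key=lambda p: figure_of_merit(
        initial, reference, run_model(p, initial)))
```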

Relevance: 60.00%

Abstract:

It is problematic to use standard ontology tools when describing vague domains. Standard ontologies are designed to formally define one view of a domain, and although it is possible to define disagreeing statements, doing so is not advisable, as the resulting inferences could be incorrect. Two different solutions to this problem, in two different vague domains, have been developed and are presented. The first domain is the knowledge base of conversational agents (chatbots). An ontological scripting language has been designed to access ontology data from within chatbot code. The solution developed is based on reification of user statements; it enables a new layer of logic based on the different views of the users, allowing the body of knowledge to grow automatically. The second domain is competencies and competency frameworks. An ontological framework has been developed to model different competencies using the emerging standards. It enables the comparison of competencies using a mix of linguistic logics and description logics. The comparison results are non-binary rather than simple yes/no answers, reflecting the vague nature of the comparisons. The solution has been developed with small ontologies which can be extended and modified so that the competency user can build a complete picture that fits their purpose. Finally, these two approaches are considered in light of how they could aid future work in vague domains; further work is described both in these two domains and in others, such as the Semantic Web. This demonstrates two different approaches to achieving inferences with standard ontology tools in vague domains.

Relevance: 60.00%

Abstract:

The Tcl/Tk scripting language has become the de facto standard for EDA tools. This paper explains how to start working with Tcl/Tk using simple examples, and two complete applications are presented to show the capabilities of the language in more detail. One script automates the measurement of the average power consumption of a digital system; a second script creates a virtual display driven by the simulation of a graphics card.
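The paper's scripts are written in Tcl inside the EDA tools; as a language-neutral illustration of the first task only (deriving an average-power figure from simulation samples), a short Python sketch is shown below. The two-column trace format and file name are assumptions, not the paper's setup.

```python
# Hypothetical sketch: average power from a two-column (time, power) trace
# dumped by a simulator; the real flow in the paper uses a Tcl script.
def average_power(path):
    total_energy, t_prev, p_prev, t0 = 0.0, None, None, None
    with open(path) as f:
        for line in f:
            if not line.strip() or line.startswith("#"):
                continue
            t, p = map(float, line.split()[:2])
            if t_prev is not None:
                total_energy += 0.5 * (p + p_prev) * (t - t_prev)  # trapezoid
            else:
                t0 = t
            t_prev, p_prev = t, p
    return total_energy / (t_prev - t0)   # average power = energy / time

# print(f"{average_power('power_trace.txt'):.3e} W")
```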

Relevance: 60.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 60.00%

Abstract:

This work focuses on building and configuring a measurement test bench for nonlinear high-power amplifiers, more precisely those based on Envelope Elimination and Restoration (EER). The test bench is composed of several arbitrary waveform generators, an oscilloscope, a vector signal generator and a spectrum analyzer, all of them controlled remotely. Because the test bench runs automatically, several software control programs have been developed to control all of this equipment. The control programs have been written in the Matlab/Octave scripting language and, where necessary, in a lower-level language such as C. The signal-processing algorithms, of which time alignment is the most important, have also been developed in Matlab/Octave. An improvement of 10 dB in ACPR (Adjacent Channel Power Ratio) has been obtained simply by applying the time-alignment algorithm developed in this work.
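The abstract does not spell out the time-alignment algorithm; a common approach in EER test setups is to estimate the delay between the envelope and phase paths by cross-correlation and shift one signal accordingly. The numpy sketch below illustrates that idea under the assumption that a cross-correlation-based estimate is adequate (the original code is Matlab/Octave, not Python).

```python
import numpy as np

def align_by_xcorr(reference, delayed):
    """Estimate the integer-sample lag of `delayed` w.r.t. `reference`
    by cross-correlation and return the delay-compensated signal."""
    n = len(reference)
    corr = np.correlate(delayed, reference, mode="full")
    lag = np.argmax(np.abs(corr)) - (n - 1)     # > 0: `delayed` lags behind
    return lag, np.roll(delayed, -lag)

# Example: a 25-sample artificial delay is recovered exactly.
t = np.arange(4096)
env = np.abs(np.sin(2 * np.pi * t / 512))       # stand-in envelope signal
lag, fixed = align_by_xcorr(env, np.roll(env, 25))
print(lag)   # 25
```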

Relevance: 60.00%

Abstract:

The project arises from the need to develop improved teaching methodologies in the field of the mechanics of continuous media. The objective is to offer the student a learning process for acquiring the necessary theoretical knowledge, cognitive skills, and the responsibility and autonomy required for professional development in this area. Traditionally, the concepts of these subjects were taught through lectures and laboratory practice; during these lessons the students' attitude was usually passive, and their effectiveness was therefore poor. The proposed methodology has already been successfully employed at universities such as the University of Bochum, Germany, and the University of South Australia, and aims to improve the effectiveness of knowledge acquisition through the student's use of a virtual laboratory. This laboratory makes it possible to adapt the curricula and learning techniques to the European Higher Education Area and to improve current learning processes in the University School of Public Works Engineers (EUITOP) of the Technical University of Madrid (UPM), since there are no laboratories for this specialization. The virtual space is created using a software platform built on OpenSim, which manages 3D virtual worlds, and the Linden Scripting Language (LSL), which gives objects specific behaviors. The student or user can access this virtual world through an avatar (the user's character in the virtual world) and can carry out practical exercises within the space created for this purpose, at any time, needing only a computer with Internet access and a viewer. The virtual laboratory has three partitions: the virtual meeting rooms, where the avatar can interact with peers, solve problems and exchange the documentation available in the virtual library; the interactive game room, where the avatar has to resolve a number of problems within a time limit; and the video room, where students can watch instructional videos and receive group lessons. Each interactive audiovisual element is accompanied by explanations framing it within the area of knowledge and enables students to begin acquiring the vocabulary and practice of the profession for which they are being trained. Plane-elasticity concepts are introduced through tension and compression tests on steel and concrete specimens. The behavior of lattice and articulated structures is reinforced through interactive games, and the concepts of tension, compression, and local and global buckling are illustrated by tests that load articulated structures to failure. Concepts of pure bending and of simple and combined torsion are studied by observing a flexible specimen, and earthquake-resistant design of buildings is examined through a laboratory test video.

Relevance: 60.00%

Abstract:

Background: Gray-scale images make up the bulk of the data in biomedical image analysis, and hence the main focus of many image-processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image-processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks; specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is developing new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and is also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image-processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only a few tools provide this kind of processing interface, they are usually quite task-specific, and they do not offer a clear path when one wants to shape a new command line tool from a prototype shell script.

Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image-processing tasks interactively in a command shell and to prototype using the corresponding shell-scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design, based on atomic plug-ins and single-task command line tools, makes it easy to extend MIA, usually without the need to touch or recompile existing code.

Conclusion: In this article, we describe the general design of MIA, a general-purpose framework for gray-scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of the high-resolution image data that arise in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
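A small, purely illustrative sketch of how string-based descriptions can select and parameterize filter plug-ins is given below. It is written in Python with an invented descriptor format and a hypothetical plug-in registry; it is not MIA's actual syntax or CLI.

```python
# Illustrative (not MIA's syntax): string descriptors selecting and
# parameterizing filter plug-ins, as a pipeline step might on a command line.
def parse_descriptor(text):
    """Parse 'name:key=value,key=value' into (name, params)."""
    name, _, rest = text.partition(":")
    params = {}
    for pair in filter(None, rest.split(",")):
        key, _, value = pair.partition("=")
        try:
            params[key] = float(value)
        except ValueError:
            params[key] = value
    return name, params

FILTERS = {   # hypothetical plug-in registry
    "gauss":  lambda img, sigma=1.0: f"gauss(sigma={sigma}) applied to {img}",
    "median": lambda img, w=3:       f"median(w={w}) applied to {img}",
}

def run_pipeline(image, descriptors):
    for d in descriptors:
        name, params = parse_descriptor(d)
        image = FILTERS[name](image, **params)
    return image

print(run_pipeline("input.png", ["gauss:sigma=2.0", "median:w=5"]))
```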

Relevance: 60.00%

Abstract:

A Network of Evolutionary Processors, or NEP, is a computational model inspired by the evolution of cells, specifically by their multiplication rules. This inspiration makes the model a syntactic abstraction of the way cells manipulate information. In particular, a NEP defines a theoretical computing machine capable of solving NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computers are expected to solve complex, highly scalable real-world problems at the cost of high spatial complexity. In the NEP model, cells are represented by words that encode their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, each of which represents a cell. These fixed moments of evolution are called configurations. As in the biological model, words (cells) mutate and divide according to simple bio-operations, but only the fit words (much as in natural selection) are kept for the next configuration. As a computing tool, a NEP defines a parallel and distributed architecture for symbolic processing, in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their properties regarding computational completeness, efficiency and universality have been widely studied and proved. Today, therefore, the theoretical NEP model can be considered mature. The main motivation of this final-year project is to propose a practical approach that bridges the gap between the theoretical NEP model and a real implementation that can run on high-performance computing platforms, in order to solve the complex problems demanded by today's society. Until now, the tools developed for simulating the NEP model, while correct and yielding satisfactory results, have normally been tied to their execution environment, whether through the use of specific hardware or through problem-specific implementations. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of a NEP model (or one of its variants), either locally, as a traditional application, or distributed using cloud services. Nepfix is a software application developed over 7 months and currently in its second iteration, the prototype phase having been completed. Nepfix has been designed as a modular, self-contained application written in Java 8; that is, it does not require a specific execution environment (any Java virtual machine is a valid container). Nepfix contains two components or modules. The first module corresponds to the execution of a NEP and is therefore the simulator. Its development took into account the current state of the model, that is, the definitions of the most common processors and filters that make up the NEP family of models.
In addition, this component offers flexibility in execution: the capabilities of the simulator can be extended without modifying Nepfix by using a scripting language. As part of the development of this component, a standard representation of the NEP model based on the JSON format has also been defined, and a way of representing and encoding words, needed for communication between servers, is proposed. Furthermore, an important characteristic of this component is that it can be considered an isolated application, so the distribution and execution strategies are completely independent of it. The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component. The development of this module is worth highlighting not only for the expected practical results but also for the research process demanded by this new perspective on the execution of natural computing systems. The main characteristic of applications that run in the cloud is that they are managed by the platform and are normally encapsulated in a container. In the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocol to communicate with the other instances. As added value, Nepfix addresses two distinct implementation perspectives (developed in two different iterations) of the distribution and execution model, which have a very significant impact on the capabilities and restrictions of the simulator. Specifically, the first iteration uses an asynchronous execution model. In this asynchronous perspective, the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word. This implementation is an optimization of a common topology in the NEP model that makes it possible to use cloud tools to achieve transparent scaling (with respect to load balancing between processors), but it produces undesired effects such as non-determinism in the order of the results or the impossibility of efficiently distributing strongly interconnected networks. The second iteration, on the other hand, corresponds to the synchronous execution model. The elements of a NEP network follow a start-compute-synchronize cycle until the problem has been solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is costly and requires additional infrastructure; specifically, a RabbitMQ message-queue server is required. Nevertheless, in this perspective the benefits for sufficiently large problems outweigh the drawbacks, since distribution is immediate (there are no restrictions), although the scaling process is not trivial. In short, the concept of Nepfix as a computational framework can be considered satisfactory: the technology is viable, and the first results confirm that the characteristics originally sought have been achieved. Many avenues remain open for future research. This document proposes some approaches to the problems identified, such as error recovery and the dynamic splitting of a NEP into different subdomains.
Other problems beyond the scope of this project remain open for future development, such as the standardization of the representation of words and optimizations in the execution of the synchronous model. Finally, some preliminary results of this final-year project were recently presented as a scientific paper at the International Work-Conference on Artificial Neural Networks (IWANN) 2015 and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that, more than a final-year project, this work is only the beginning of a line of research that may have a broader impact on the scientific community.

Abstract: A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactical level. NEPs define theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and divisions which are defined by operations on words. Only those cells are accepted as surviving (correct) which are represented by a word in a given set of words, called the genotype space of the species; this feature is analogous to the natural process of evolution. Formally, NEP is based on an architecture for parallel and distributed processing, in other words, a network of language processors. Since NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP; specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity. The main motivation for this End of Grade project (EOG project, in short) is to propose a practical approach that closes the gap between the theoretical NEP model and a practical implementation on high-performance computing platforms, in order to solve some of the high-complexity problems society requires today. Up until now, the tools developed to simulate NEPs, while correct and successful, are usually tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants either locally, like a traditional application, or in a distributed cloud environment. Nepfix as an application was developed over a 7-month cycle and is undergoing its second iteration now that the prototype period has ended. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required and it does not rely on a specific execution environment, since any JVM is a valid container. Nepfix is made of two components or modules. The first module corresponds to the NEP execution and is therefore the simulator.
During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided by the use of Python as a scripting language to run custom logic. Along with the simulation, a definition language for NEPs based on JSON has been defined, as well as a mechanism to represent words and their possible manipulations. The NEP simulator is isolated from distribution and, as mentioned before, different applications that include it as a dependency are possible; the distribution of NEPs is one example. The second module corresponds to executing Nepfix in the cloud. Its development involved a heavy R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on the feasibility and exploration of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider Nepfix as a computational framework successful: cloud technology is ready for the challenge, and the first results confirm that the properties the Nepfix project pursued were met. Many research branches are left open for future investigation. In this EOG project, implementation guidelines are proposed for some of them, such as error recovery and dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project were identified during development, such as word representation standardization and NEP model optimizations. As confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published in the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015. Development has not stopped since then, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
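To make the NEP notion concrete, a minimal, self-contained Python sketch of one synchronous evolution/communication cycle is given below. It is purely illustrative: Nepfix itself is written in Java 8, and the rules, filters and two-node network here are invented.

```python
# Minimal illustrative NEP-style simulator (not the Nepfix API): processors
# hold words, apply substitution rules, and exchange words through filters
# in synchronized evolution/communication steps.
from dataclasses import dataclass, field

@dataclass
class Processor:
    rules: list                 # substitution rules, e.g. ("a", "b")
    out_filter: set             # symbols a word must contain to leave
    in_filter: set              # symbols a word must contain to enter
    words: set = field(default_factory=set)

    def evolve(self):
        new = set()
        for w in self.words:
            for old, rep in self.rules:
                if old in w:
                    new.add(w.replace(old, rep, 1))   # one substitution
            new.add(w)                                # keep the original too
        self.words = new

def communicate(network, edges):
    for src, dst in edges:
        leaving = {w for w in network[src].words
                   if network[src].out_filter <= set(w)
                   and network[dst].in_filter <= set(w)}
        network[dst].words |= leaving

# Two-node network: node 0 rewrites a->b, node 1 rewrites b->c.
net = {0: Processor([("a", "b")], out_filter={"b"}, in_filter=set(), words={"aa"}),
       1: Processor([("b", "c")], out_filter=set(), in_filter={"b"})}
for _ in range(3):                                    # synchronous cycles
    for p in net.values():
        p.evolve()
    communicate(net, [(0, 1), (1, 0)])
print(net[1].words)
```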