27 results for distributed computing


Relevance:

60.00%

Publisher:

Abstract:

Distributed computing has been around for many years, but it is now gaining unprecedented importance. In recent years the cloud computing model, a branch of distributed computing, has become very popular, as the number of products on the market attests. Every computing system needs to be controlled through monitoring systems that report its state, so that it can be managed easily. Most existing monitoring products, however, constrain how the architecture of the monitored system can be visualized: the view they provide is shaped by the design of the visualization tool itself, which hides the architectural layers and the relationships between them and makes the administrators' work harder. If administrators do not know the architecture perfectly, controlling the system through the view that such a monitoring tool provides becomes extremely difficult. This work presents a monitoring system for distributed or Cloud systems that addresses this problem by not restricting the representation of the monitored architecture. The system consists of agents, which collect the metrics of the monitored system; a server, to which the agents send the metrics so they can be stored in a database; and a web application, through which all the information is visualized. The system has been tested successfully by monitoring CumuloNimbo, a platform as a service (PaaS) that offers an SQL interface and highly scalable transactional processing on top of key-value stores. This document describes the architecture of the monitoring system and, in particular, the development of its main contribution, the web application.
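
The agent, server and web-application pipeline described above can be made concrete with a small sketch. This is only an illustrative Python sketch under assumed names: the server URL, the payload fields and the collection interval are invented for the example and do not correspond to the actual monitoring system or to CumuloNimbo.

```python
# Minimal sketch of the agent -> server metric flow described above.
# SERVER_URL, the payload fields and the interval are illustrative assumptions.
import json
import os
import time
import urllib.request

SERVER_URL = "http://monitoring-server.example:8080/metrics"  # hypothetical endpoint


def collect_metrics(host: str) -> dict:
    """Gather a few example metrics for one monitored node."""
    load1, load5, load15 = os.getloadavg()  # Unix-only; just an example metric
    return {
        "host": host,
        "timestamp": time.time(),
        "load_1m": load1,
        "load_5m": load5,
        "load_15m": load15,
    }


def send_metrics(metrics: dict) -> None:
    """POST the metrics as JSON to the central server, which persists them."""
    body = json.dumps(metrics).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=5)


if __name__ == "__main__":
    for _ in range(3):  # a real agent would loop indefinitely
        send_metrics(collect_metrics(host="node-01"))
        time.sleep(10)  # assumed collection interval
```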

Relevance:

60.00%

Publisher:

Abstract:

This work provides the theoretical basis (syntax and semantics) and a practical implementation of a framework for encoding reasoning over the fuzzy representation of the world (as human beings understand it). The interest in this work comes from two sources: removing the complexity involved when implementing such reasoning in a general-purpose programming language (one that offers no special constructions for representing fuzzy information), and providing a tool intelligent enough to answer fuzzy queries constructively over conventional data. The framework, RFuzzy, allows rules and queries to be encoded in a syntax very close to the natural language humans use to express their thoughts, but it is much more than that. It can represent very useful concepts such as fuzzifications (functions that turn crisp concepts into fuzzy ones), default values (used to return results that are less precise but still valid when the information needed to compute better ones is missing), similarity between attributes (used to search for individuals whose characteristics are similar to the one we are looking for), and synonyms or antonyms, and it allows the set of connectives and modifiers (including negation modifiers) usable in rules and queries to be extended. Another facility included is the personalization of fuzzy concept definitions, which is very useful for dealing with the subjective character of fuzziness, where calling someone "tall" depends on the height of the person making the judgement. In addition, RFuzzy implements the multi-adjoint semantics. Its interest is that, besides obtaining the degree of satisfaction of a rule over the universe modelled in the program, it makes it possible to obtain the credibility of a rule from a given rule and a set of data, so the credibility of a rule for a particular situation can be determined automatically. Although the theoretical contribution is interesting in itself, especially the inclusion of the negation modifier, its practical uses are equally important. Among the different applications of the framework, we highlight emotion recognition, robot control, granularity control in parallel/distributed computing, and flexible (fuzzy) searches in databases.
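
As an illustration of the fuzzification and personalization ideas mentioned above, the following Python sketch shows a membership function for "tall" and a fuzzy query that ranks individuals by degree instead of returning a crisp answer. RFuzzy itself is a logic-programming framework, so this is only an analogue, and the thresholds and data are invented for the example.

```python
# Python analogue of fuzzification and personalized fuzzy concepts.
def tall(height_cm: float, low: float = 160.0, high: float = 190.0) -> float:
    """Piecewise-linear membership degree for 'tall'.
    The low/high thresholds are assumptions and can be personalized,
    mirroring the subjective character of fuzzy concepts."""
    if height_cm <= low:
        return 0.0
    if height_cm >= high:
        return 1.0
    return (height_cm - low) / (high - low)


people = {"ana": 158, "bob": 175, "carla": 193}  # made-up data

# A 'fuzzy query': rank individuals by their degree of tallness instead of
# returning a crisp yes/no answer.
ranking = sorted(people.items(), key=lambda kv: tall(kv[1]), reverse=True)
for name, h in ranking:
    print(f"{name}: tall to degree {tall(h):.2f}")
```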

Relevance:

60.00%

Publisher:

Abstract:

The results presented in this doctoral thesis belong to membrane computing, a research branch within natural computing created by Gh. Paun in 1998, whose models are therefore usually called P systems. This distributed computing model is inspired by the structure and functioning of the living cell. The aim of the thesis is to analyze the computational power and efficiency of these cellular computing systems. Two classes of P systems are analyzed: spiking neural P systems on the one hand, and P systems with proteins on membranes on the other. For the first class, the results show that these systems remain universal even when many of their features are restricted or removed. For the second class, computational efficiency is analyzed and it is shown that they can solve problems in the complexity class PSPACE in polynomial time. Computational power analysis: spiking neural P systems (SN P systems for short) are inspired by the way neurons operate, sending spikes through synaptic networks. These bio-inspired systems possess a large range of features that make them universal and therefore equivalent in computational power to a Turing machine. Such systems are computationally powerful, but as defined they incorporate many features, perhaps too many. In (Ibarra et al. 2007) it was shown that their functionality can be limited without compromising universality. The results presented here continue that line of work and provide new normal forms, that is, new simplified variants of SN P systems with a minimal set of features that nevertheless keep universal computational power. Computational efficiency analysis: the thesis studies the computational efficiency of the so-called P systems with proteins on membranes. It is shown that this computational model is equivalent to parallel random access machines (PRAM) or alternating Turing machines, since a P system with proteins can solve a PSPACE-complete problem, QSAT (the quantified propositional satisfiability problem), in polynomial time. This variant of P systems with proteins is very efficient thanks to the power of proteins to catalyze inter-cellular communication processes.
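
To make the spiking mechanism concrete, here is a deliberately simplified Python sketch of one synchronous step of an SN P-like system. It assumes plain threshold firing rules with no delays, forgetting rules or regular-expression conditions, so it only conveys the flavor of the model and is not one of the variants analysed in the thesis.

```python
# Simplified, illustrative sketch of a spiking neural P (SN P) system step.
from dataclasses import dataclass, field


@dataclass
class Neuron:
    spikes: int                                    # spikes currently stored
    threshold: int                                 # rule fires when spikes >= threshold
    consumed: int                                  # spikes consumed on firing
    targets: list = field(default_factory=list)    # indices of synaptic targets


def step(neurons: list) -> None:
    """One synchronous step: decide firings, consume spikes, then deliver them."""
    fired = [n.spikes >= n.threshold for n in neurons]
    for n, f in zip(neurons, fired):
        if f:
            n.spikes -= n.consumed
    for n, f in zip(neurons, fired):
        if f:
            for t in n.targets:
                neurons[t].spikes += 1             # one spike per synapse


# Tiny made-up example: neuron 0 feeds neurons 1 and 2; neuron 2 feeds back to 0.
net = [Neuron(2, 1, 1, [1, 2]), Neuron(0, 2, 2, []), Neuron(0, 1, 1, [0])]
for _ in range(3):
    step(net)
    print([n.spikes for n in net])
```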

Relevance:

40.00%

Publisher:

Abstract:

Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to different receiving nodes in a high-bandwidth network. On the distributing side, a fundamental problem is grouping and scheduling such tasks so that each one involves sufficient computational cost compared with the task creation and communication costs and other practical overheads. On the receiving side, an important issue is to have some assurance of the correctness and characteristics of the code received, and also of the kind of load the particular task is going to pose, which can be specified by means of certificates. In this paper we present, in a tutorial way, a number of general solutions to these problems, and illustrate them through their implementation in the Ciao multi-paradigm language and program development environment. This system includes facilities for parallel and distributed execution, an assertion language for specifying complex program properties (including safety and resource-related properties), and compile-time and run-time tools for performing automated parallelization and resource control, as well as certification of programs with resource consumption assurances and efficient checking of such certificates.
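
The granularity-control decision mentioned above (only distributing a task when its computational cost outweighs the creation and communication overheads) can be illustrated with a small sketch. The cost model, the overhead constants and the helper names below are assumptions made for the example; they are not the output of Ciao's cost analysis.

```python
# Illustrative granularity control: ship a task to another worker only when
# its estimated cost exceeds the spawn and communication overheads.
from concurrent.futures import ProcessPoolExecutor

SPAWN_OVERHEAD = 5_000        # assumed fixed cost of creating/sending a task
COMM_COST_PER_ITEM = 2        # assumed per-element communication cost


def estimated_cost(data: list) -> int:
    """Upper-bound cost estimate as a function of input size (here: quadratic)."""
    n = len(data)
    return n * n


def run_task(data: list) -> int:
    return sum(x * x for x in data)


def maybe_parallel(chunks: list, pool: ProcessPoolExecutor) -> list:
    """Execute each chunk remotely only if it is 'large enough'."""
    results = []
    for chunk in chunks:
        if estimated_cost(chunk) > SPAWN_OVERHEAD + COMM_COST_PER_ITEM * len(chunk):
            # .result() is called immediately only to keep the sketch simple.
            results.append(pool.submit(run_task, chunk).result())
        else:
            results.append(run_task(chunk))   # too small: run locally
    return results


if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        print(maybe_parallel([[1] * 10, [2] * 500], pool))
```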

Relevance:

40.00%

Publisher:

Abstract:

Abstract. Receptive fields of retinal and other sensory neurons show a large variety of spatiotemporal, linear and nonlinear types of responses to local stimuli. In visual neurons, these responses present either asymmetric sensitive zones or center-surround organization. In most cases, the nature of the responses suggests the existence of a kind of distributed computation prior to the integration by the final cell, which is evidently supported by the anatomy. We describe a new kind of discrete and continuous filters to model the computations taking place in the receptive fields of retinal cells. To show their performance in the analysis of different non-trivial neuron-like structures, we use a computer tool specifically programmed by the authors to that effect. This tool is also extended to study the effect of lesions on the overall performance of our model nets.
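
A classic way to model a center-surround receptive field is a difference of Gaussians. The sketch below, with assumed sigmas and grid size, is only meant to make that organization concrete; it does not reproduce the specific discrete and continuous filters proposed in the paper.

```python
# Center-surround receptive field modelled as a difference of Gaussians (DoG).
import numpy as np


def dog_kernel(size: int = 9, sigma_c: float = 1.0, sigma_s: float = 3.0) -> np.ndarray:
    """Excitatory center minus inhibitory surround on a size x size grid.
    Each lobe is normalized to unit sum, so a uniform stimulus yields zero response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2))
    surround = np.exp(-r2 / (2 * sigma_s ** 2))
    return center / center.sum() - surround / surround.sum()


def respond(patch: np.ndarray, kernel: np.ndarray) -> float:
    """Linear response of one receptive field to a local image patch."""
    return float(np.sum(patch * kernel))


kernel = dog_kernel()
uniform = np.ones((9, 9))                   # flat stimulus: center and surround cancel
spot = np.zeros((9, 9)); spot[4, 4] = 1.0   # bright point on the center: positive response
print(round(respond(uniform, kernel), 6), round(respond(spot, kernel), 4))
```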

Relevance:

30.00%

Publisher:

Abstract:

The problem of fairly distributing the capacity of a network among a set of sessions has been widely studied. In this problem, each session connects a source and a destination via a single path, and its goal is to maximize its assigned transmission rate (i.e., its throughput). Since the links of the network have limited bandwidths, some criterion has to be defined to fairly distribute their capacity among the sessions. A popular criterion is max-min fairness which, in short, guarantees that each session i gets a rate λi such that no session s can increase λs without causing another session s' to end up with a rate λs' < λs. Many max-min fair algorithms have been proposed, both centralized and distributed. However, to our knowledge, all proposed distributed algorithms require control data to be transmitted continuously in order to recompute the max-min fair rates when needed (because none of them has mechanisms to detect convergence to the max-min fair rates). In this paper we propose B-Neck, a distributed max-min fair algorithm that is also quiescent. This means that, in the absence of changes (i.e., session arrivals or departures), once the max-min rates have been computed, B-Neck stops generating network traffic. Quiescence is a key design concept of B-Neck, because B-Neck routers are capable of detecting and notifying changes in the convergence conditions of the max-min fair rates. As far as we know, B-Neck is the first distributed max-min fair algorithm that does not require a continuous injection of control traffic to compute the rates. The correctness of B-Neck is formally proved, and extensive simulations are conducted, showing that B-Neck converges relatively fast and behaves well in the presence of sessions arriving and departing.
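
To make the max-min fairness criterion concrete, the sketch below computes max-min fair rates with the standard centralized progressive-filling procedure on a made-up topology. B-Neck computes the same rates in a distributed and quiescent way; this is not its algorithm.

```python
# Centralized 'progressive filling' computation of max-min fair rates.
def max_min_fair(capacity: dict, sessions: dict) -> dict:
    """capacity: link -> capacity; sessions: session -> list of links it traverses."""
    rate = {s: 0.0 for s in sessions}
    frozen = set()            # sessions whose rate can no longer grow
    spare = dict(capacity)    # remaining capacity per link
    while len(frozen) < len(sessions):
        # Spare capacity of each link divided among the unfrozen sessions crossing it.
        share = {}
        for link, cap in spare.items():
            users = [s for s in sessions if s not in frozen and link in sessions[s]]
            if users:
                share[link] = cap / len(users)
        if not share:
            break
        inc = min(share.values())                 # most constrained link sets the step
        for s in sessions:
            if s not in frozen:
                rate[s] += inc
                for link in sessions[s]:
                    spare[link] -= inc
        for s in sessions:                        # freeze sessions on saturated links
            if s not in frozen and any(spare[link] <= 1e-9 for link in sessions[s]):
                frozen.add(s)
    return rate


links = {"A": 10.0, "B": 4.0}
flows = {"s1": ["A", "B"], "s2": ["A"], "s3": ["B"]}
print(max_min_fair(links, flows))   # s1 and s3 share link B (2 each); s2 gets the rest of A (8)
```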

Relevance:

30.00%

Publisher:

Abstract:

Managing large medical image collections is an increasingly demanding and important issue in many hospitals and other medical settings. A huge amount of this information is generated daily, which requires robust and agile systems. In this paper we present a distributed multi-agent system capable of managing very large medical image datasets. In this approach, agents extract low-level information from images and store it in a data structure implemented in a relational database. The data structure can also store semantic information related to images and particular regions. A distinctive aspect of our work is that a single image can be divided so that the resulting sub-images can be stored and managed separately by different agents, improving performance in data access and processing. The system also offers the possibility of applying region-based operations and filters to images, facilitating image classification. These operations can be performed directly on the data structures in the database.
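
As a rough illustration of the kind of relational structure described above, the following sketch stores sub-images, their low-level features and optional semantic labels as separate rows, each tagged with the agent responsible for it. The table and column names are invented for the example and do not reflect the system's actual schema.

```python
# Illustrative relational layout for images split into agent-managed sub-images.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE image (
    image_id   INTEGER PRIMARY KEY,
    patient_id TEXT,
    modality   TEXT
);
CREATE TABLE sub_image (
    sub_id         INTEGER PRIMARY KEY,
    image_id       INTEGER REFERENCES image(image_id),
    agent_id       TEXT,          -- agent responsible for this region
    x0 INTEGER, y0 INTEGER, x1 INTEGER, y1 INTEGER,
    mean_intensity REAL,          -- example low-level feature
    semantic_label TEXT           -- optional semantic annotation for the region
);
""")
conn.execute("INSERT INTO image VALUES (1, 'patient-42', 'MRI')")
conn.execute(
    "INSERT INTO sub_image VALUES (1, 1, 'agent-7', 0, 0, 255, 255, 0.34, 'lesion?')"
)

# A region-based query: regions handled by one agent, ordered by intensity.
for row in conn.execute(
    "SELECT sub_id, mean_intensity FROM sub_image "
    "WHERE agent_id = 'agent-7' ORDER BY mean_intensity DESC"
):
    print(row)
```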

Relevance:

30.00%

Publisher:

Abstract:

In the near future, wireless sensor networks (WSNs) will undergo broad, large-scale deployment (millions of nodes at the national scale) with multiple information sources per node and very specific signal-processing requirements. In parallel, this broad deployment of WSNs enables the definition and execution of ambitious studies, with large input data sets and high computational complexity. These computing resources, very often heterogeneous and driven on demand, can only be provided by high-performance Data Centers (DCs). The high economic and environmental impact of energy consumption in DCs requires aggressive energy optimization policies; the need for such policies has already been identified, but they have not yet been successfully proposed. In this context, this paper presents the following ongoing research lines and the results obtained. In the field of WSNs: energy optimization of the processing nodes at different abstraction levels, including reconfigurable application-specific architectures, efficient customization of the memory hierarchy, energy-aware management of the wireless interface, and design automation for signal processing applications. In the field of DCs: energy-optimal workload assignment policies for heterogeneous DCs, energy-conscious resource management policies, and efficient cooling mechanisms, all of which cooperate to minimize the electricity bill of the DCs that process the data provided by the WSNs.

Relevance:

30.00%

Publisher:

Abstract:

Service-Oriented Computing (SOC) is a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform the more complex, higher-level, often cross-organizational tasks using atomic services or other service compositions. In such systems, Quality of Service (QoS) properties such as performance, cost, availability or security are critical for the usability of services and their compositions in concrete applications. The analysis of these properties can become more precise and richer in information if it employs program analysis techniques, such as complexity and sharing analyses, which are able to take into account simultaneously the control and data structures, dependencies, and operations in a composition. Computational cost analysis for service compositions can support predictive monitoring and proactive adaptation by automatically inferring computational cost as upper- and lower-bound functions of the value or size of the input messages. These cost functions can be used for adaptation by selecting, among the candidate services, those that minimize the total cost of the composition, based on the actual data passed to them. The cost functions can also be combined with empirically collected infrastructure parameters to produce QoS bound functions of the input data, which can be used to predict, at invocation time, potential or imminent Service Level Agreement (SLA) violations. In mission-critical compositions, effective and accurate continuous QoS prediction can be based on constraint modeling of the composition QoS from its structure, the empirical data known at run time and, when available, the results of complexity analysis. This approach can be applied both to service orchestrations with centralized flow control and to choreographies with multiple participants engaged in complex stateful interactions. Sharing analysis can support adaptation actions such as parallelization, fragmentation and component selection, which are based on the functional dependencies and the information content of the composition messages, internal data and activities, in the presence of complex control constructs such as loops, branches and sub-workflows. Both the functional dependencies and the information content (described through user-defined attributes) can be expressed using a first-order logic (Horn clause) representation, and the analysis results can be interpreted as lattice-based conceptual models.
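
The use of cost bound functions for adaptation and SLA prediction can be illustrated with a small sketch: given an upper-bound cost function of the input size for each candidate service, pick the cheapest candidate for the actual input and flag a potential SLA violation when even that bound exceeds the agreed limit. The candidate functions and the SLA limit below are invented for the example.

```python
# Candidate selection driven by upper-bound cost functions of the input size.
candidates = {
    "svc_fast":  lambda n: 0.005 * n + 2.0,    # assumed upper-bound cost (e.g. seconds)
    "svc_cheap": lambda n: 0.010 * n + 0.5,
}
SLA_LIMIT = 10.0   # assumed maximum acceptable cost per invocation


def select_candidate(input_size: int):
    """Return the candidate with the lowest predicted cost for this input."""
    name, bound = min(candidates.items(), key=lambda kv: kv[1](input_size))
    predicted = bound(input_size)
    if predicted > SLA_LIMIT:
        print(f"warning: even {name} may violate the SLA ({predicted:.2f} > {SLA_LIMIT})")
    return name, predicted


print(select_candidate(100))    # small message: the cheaper-per-call service wins
print(select_candidate(5000))   # large message: the fast service wins, but may breach the SLA
```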

Relevance:

30.00%

Publisher:

Abstract:

With the advent of cloud computing, many applications have embraced the ensuing paradigm shift towards modern distributed key-value data stores, like HBase, in order to benefit from the elastic scalability on offer. However, many applications still hesitate to make the leap from the traditional relational database model simply because they cannot compromise on the standard transactional guarantees of atomicity, isolation, and durability. To get the best of both worlds, one option is to integrate an independent transaction management component with a distributed key-value store. In this paper, we discuss the implications of this approach for durability. In particular, if the transaction manager provides durability (e.g., through logging), then we can relax durability constraints in the key-value store. However, if a component fails (e.g., a client or a key-value server), then we need a coordinated recovery procedure to ensure that commits are persisted correctly. In our research, we integrate an independent transaction manager with HBase. Our main contribution is a failure recovery middleware for the integrated system, which tracks the progress of each commit as it is flushed down by the client and persisted within HBase, so that we can recover reliably from failures. During recovery, commits that were interrupted by the failure are replayed from the transaction management log. Importantly, the recovery process does not interrupt transaction processing on the available servers. Using a benchmark, we evaluate the impact of component failure, and subsequent recovery, on application performance.
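
A minimal sketch of the replay idea, with in-memory stand-ins for both the transaction manager's log and the key-value store (the real middleware works against HBase and a client-side commit protocol), is shown below.

```python
# Simplified recovery sketch: replay committed-but-not-persisted transactions.
committed_log = [                 # stand-in for the transaction manager's durable log
    {"txn": 1, "writes": {"row1": "a"}},
    {"txn": 2, "writes": {"row2": "b"}},
    {"txn": 3, "writes": {"row3": "c"}},
]
persisted_txns = {1}              # transactions already flushed into the store
store = {"row1": "a"}             # stand-in for the key-value store


def recover() -> None:
    """Re-apply every committed transaction that the store has not persisted."""
    for entry in committed_log:
        if entry["txn"] not in persisted_txns:
            store.update(entry["writes"])      # replay the writes
            persisted_txns.add(entry["txn"])   # mark as persisted


recover()
print(store)   # {'row1': 'a', 'row2': 'b', 'row3': 'c'}
```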

Relevance:

30.00%

Publisher:

Abstract:

Many data streaming applications produce massive amounts of data that must be processed in a distributed fashion due to the resource limitations of a single machine. We propose a distributed data stream clustering protocol. Theoretical analysis shows preliminary results about the quality of the discovered clustering. In addition, we present results about the ability to reduce the time complexity with respect to the centralized approach.
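
One common way to structure such a protocol, shown here only to make the setting concrete and not necessarily the approach proposed in the paper, is for each node to summarize its local stream with a few centroids and for a coordinator to cluster the union of those summaries.

```python
# Illustrative two-level scheme: local clustering per node, then a coordinator
# clusters the union of the local centroids. Data and parameters are made up.
import random


def local_kmeans(points, k, iters=10):
    """Plain 1-D k-means on a node's local portion of the stream."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: abs(p - centroids[j]))].append(p)
        centroids = [sum(g) / len(g) if g else centroids[i] for i, g in enumerate(groups)]
    return centroids


random.seed(0)
node_a = [random.gauss(0, 1) for _ in range(200)]
node_b = [random.gauss(10, 1) for _ in range(200)]

summaries = local_kmeans(node_a, 2) + local_kmeans(node_b, 2)  # each node sends k centroids
merged = local_kmeans(summaries, 2)      # coordinator clusters the 4 summary points
print(sorted(merged))                    # roughly one centroid near 0 and one near 10
```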

Relevance:

30.00%

Publisher:

Abstract:

A first-rate e-Health system saves lives, provides better patient care, allows complex but useful epidemiologic analyses and saves money. However, there may also be concerns about the costs and complexities associated with e-Health implementation, and about the need to solve the issues posed by the energy footprint of the highly demanding computing facilities involved. This paper proposes a novel, evolved computing paradigm that: (i) provides the required computing and sensing resources; (ii) allows population-wide diffusion; (iii) exploits the storage, communication and computing services provided by the Cloud; (iv) tackles energy optimization as a first-class requirement, taking it into account throughout the whole development cycle. The novel computing concept and the multi-layer, top-down energy-optimization methodology obtain promising results in a realistic scenario for cardiovascular tracking and analysis, making Home Assisted Living a reality.