862 results for Deadlock Analysis, Distributed Systems, Concurrent Systems, Formal Languages
Abstract:
Sampling a network with a given probability distribution has been identified as a useful operation. In this paper we propose distributed algorithms for sampling networks, so that nodes are selected by a special node, called the source, with a given probability distribution. All these algorithms are based on a new class of random walks that we call Random Centrifugal Walks (RCW). An RCW is a random walk that starts at the source and always moves away from it. First, an algorithm to sample any connected network using RCWs is proposed. The algorithm assumes that each node has a weight, so that the sampling process must select a node with a probability proportional to its weight. This algorithm requires a preprocessing phase before the sampling of nodes: a minimum diameter spanning tree (MDST) is created in the network, and node weights are then efficiently aggregated using the tree. The good news is that the preprocessing is done only once, regardless of the number of sources and the number of samples taken from the network. After that, every sample is done with an RCW whose length is bounded by the network diameter. Second, RCW algorithms that do not require preprocessing are proposed for grids and for networks with regular concentric connectivity, for the case when the probability of selecting a node is a function of its distance to the source. The key features of the RCW algorithms (unlike previous Markovian approaches) are that (1) they do not need to warm up (stabilize), (2) the sampling always finishes in a number of hops bounded by the network diameter, and (3) nodes are selected with the exact probability distribution.
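A minimal sketch of how aggregated subtree weights could drive a centrifugal walk on the spanning tree follows; the stop-or-descend rule, class names and example weights are assumptions for illustration, not the authors' exact algorithm.

```python
import random

# Hypothetical tree node with a sampling weight; `subtree_weight` is assumed
# to have been aggregated bottom-up during the preprocessing phase.
class Node:
    def __init__(self, weight, children=()):
        self.weight = weight
        self.children = list(children)
        self.subtree_weight = weight + sum(c.subtree_weight for c in children)

def rcw_sample(source):
    """Walk away from `source`, stopping at a node with probability
    proportional to its weight (a sketch of a stop-or-descend rule)."""
    node = source
    while True:
        r = random.uniform(0, node.subtree_weight)
        if r < node.weight or not node.children:
            return node                      # stop here
        r -= node.weight
        for child in node.children:          # otherwise move further from the source
            if r < child.subtree_weight:
                node = child
                break
            r -= child.subtree_weight

# Example: two leaves with weights 1 and 3 under a root (the source) with weight 2.
root = Node(2, [Node(1), Node(3)])
counts = {1: 0, 2: 0, 3: 0}
for _ in range(6000):
    counts[rcw_sample(root).weight] += 1
print(counts)  # counts roughly proportional to the weights 2:1:3
```

The walk never revisits a node, so its length is bounded by the depth of the tree, which matches the abstract's claim that every sample finishes within the network diameter.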
Abstract:
The application of Geographic Information Systems (GIS) to Archaeology or other humanities disciplines is not new. What is new is their evolution towards distributed, interoperable systems and towards structures with policies for the shared and coordinated use of data, all aspects covered by Spatial Data Infrastructures (SDI). INSPIRE is the main European exponent in matters of initiative and legal framework in this area. Archaeological methodology gathers and generates a great amount of data, and among their intrinsic attributes or characteristics are position and time, the aspects traditionally exploited by GIS. Data are catalogued, organised, maintained, shared and published, and potential consumers are beginning to have them at their disposal. All this information, traditionally stored on record cards and later in relational alphanumeric databases, may in many cases be considered «metadata», as it contains information useful to other users in the processes of discovering and exploiting the data. Moreover, these data are often accompanied by information about themselves that describes their specifications, quality, etc. We use metadata every day: the bibliographic record of a book or the specifications of a computer. Metadata can be defined as «descriptive information about the context, quality, condition and characteristics of a resource, datum or object whose purpose is to facilitate its retrieval, identification, evaluation, preservation and/or interoperability». In Spain there is an initiative to standardise the description of metadata for geospatial datasets: the Núcleo Español de Metadatos (NEM, Spanish Metadata Nucleus), which contains elements for describing the particular characteristics of geographic data and includes all the mandatory elements of the ISO 19115 standard and of the Dublin Core metadata set, traditionally used in library science. Aware of the need for metadata to optimise the search and retrieval of data, this work aims to formalise the documentation of archaeological data through the use of the NEM, thereby achieving the interoperability of archaeological information.
Abstract:
The concept of an unreliable failure detector was introduced by Chandra and Toueg as a mechanism that provides information about process failures. This mechanism has been used to solve several agreement problems, such as the consensus problem. In this paper, algorithms that implement failure detectors in partially synchronous systems are presented. First, two simple algorithms of the weakest class that allows solving the consensus problem, namely the Eventually Strong class (⋄S), are presented. While the first algorithm is wait-free, the second algorithm is f-resilient, where f is a known upper bound on the number of faulty processes. Both algorithms guarantee that, eventually, all the correct processes agree permanently on a common correct process, i.e. they also implement a failure detector of the class Omega (Ω). They are also shown to be optimal in terms of the number of communication links used forever. Additionally, a wait-free algorithm that implements a failure detector of the Eventually Perfect class (⋄P) is presented. This algorithm is shown to be optimal in terms of the number of bidirectional links used forever.
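As a rough illustration of the eventual-leader property these algorithms provide, the following is a minimal, hypothetical sketch of an Omega-style detector built on heartbeats and adaptive timeouts in a partially synchronous system; it is not the paper's wait-free or f-resilient construction, and all names are assumptions.

```python
import time

# Sketch of an Omega-style eventual leader detector: trust the smallest-id
# process that is not currently suspected, and enlarge a process's timeout
# whenever it was suspected wrongly (the standard partial-synchrony trick).
class OmegaDetector:
    def __init__(self, my_id, all_ids, initial_timeout=1.0):
        self.my_id = my_id
        self.all_ids = sorted(all_ids)
        self.timeout = {p: initial_timeout for p in self.all_ids}
        self.last_heartbeat = {p: time.monotonic() for p in self.all_ids}
        self.suspected = set()

    def on_heartbeat(self, sender):
        # A heartbeat from a suspected process: stop suspecting it and back off.
        if sender in self.suspected:
            self.suspected.discard(sender)
            self.timeout[sender] *= 2
        self.last_heartbeat[sender] = time.monotonic()

    def check_timeouts(self):
        now = time.monotonic()
        for p in self.all_ids:
            if p != self.my_id and now - self.last_heartbeat[p] > self.timeout[p]:
                self.suspected.add(p)

    def leader(self):
        for p in self.all_ids:
            if p not in self.suspected:
                return p
        return self.my_id

d = OmegaDetector(my_id=2, all_ids=[1, 2, 3])
d.check_timeouts()
print(d.leader())   # 1, as long as process 1 keeps sending heartbeats
```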
Abstract:
The present work covers the first validation efforts of the EVA Tracking System for the assessment of minimally invasive surgery (MIS) psychomotor skills. Instrument movements were recorded for 42 surgeons (4 experts, 22 residents, 16 novice medical students) and analyzed for a box trainer peg transfer task. Construct validation was established for 7/9 motion analysis parameters (MAPs). Concurrent validation was determined for 8/9 MAPs against the TrEndo Tracking System. Finally, automatic determination of surgical proficiency based on the MAPs was sought by 3 different approaches to supervised classification (LDA, SVM, ANFIS), with accuracy results of 61.9%, 83.3% and 80.9%, respectively. Results reflect not only on the validation of EVA for skills assessment, but also on the relevance of motion analysis of instruments in the determination of surgical competence.
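The kind of supervised classification described above can be sketched as follows; the synthetic data, feature count and cross-validation setup are assumptions for illustration and do not reproduce the EVA study or its results.

```python
# Sketch: predicting proficiency level from motion analysis parameters (MAPs)
# with two of the named classifiers (LDA and SVM). Data here are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(42, 9))                   # 42 subjects x 9 MAPs (synthetic)
y = np.array([2] * 4 + [1] * 22 + [0] * 16)    # expert / resident / novice labels

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf"))]:
    acc = cross_val_score(clf, X, y, cv=3).mean()
    print(f"{name}: {acc:.1%} cross-validated accuracy")
```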
Abstract:
The purpose of this project is to develop the courses module of CloudRoom, a Massive Open Online Courses (MOOC) platform. The module is part of a service-oriented architecture (SOA) and of a Cloud Computing infrastructure built on Amazon Web Services (AWS). Our goal is to design a robust Software as a Service (SaaS) product with the qualities expected of this kind of system: high availability, high performance, a great user experience and great extensibility. To achieve this, we integrate the latest technology trends in the development of distributed systems: Neo4j, Node.JS, RESTful services and CoffeeScript, all following a PLAN-DO-CHECK development strategy using Scrum and agile methodology practices.
Abstract:
There is an increasing tendency to turn the current power grid, essentially unaware of variations in electricity demand and of scattered energy sources, into something capable of a degree of intelligence by using tools strongly related to information and communication technologies, thus becoming the so-called Smart Grid. In fact, the Smart Grid can be considered an extensive smart system that spreads throughout any area where power is required, providing significant optimization of energy generation, storage and consumption. However, the information that must be handled to accomplish these tasks is challenging both in terms of complexity (semantic features, distributed systems, suitable hardware) and quantity (consumption data, generation data, forecasting functionalities, service reporting), since the different energy beneficiaries are prone to be heterogeneous, as the nature of their own activities is. This paper presents a proposal on how to deal with these issues by means of a semantic middleware architecture that integrates different components focused on specific tasks, and describes how it is used to handle information at every level and satisfy end-user requests.
Abstract:
The computational and cooling power demands of enterprise servers are increasing at an unsustainable rate. Understanding the relationship between computational power, temperature, leakage, and cooling power is crucial to enable energy-efficient operation at the server and data center levels. This paper develops empirical models to estimate the contributions of static and dynamic power consumption in enterprise servers for a wide range of workloads, and analyzes the interactions between temperature, leakage, and cooling power for various workload allocation policies. We propose a cooling management policy that minimizes the server energy consumption by setting the optimum fan speed during runtime. Our experimental results on a presently shipping enterprise server demonstrate that including leakage awareness in workload and cooling management provides additional energy savings without any impact on performance.
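A minimal sketch of the leakage-aware cooling idea, choosing the fan speed that minimizes fan power plus temperature-dependent leakage, is given below; the power and temperature models and their coefficients are illustrative assumptions, not the paper's fitted empirical models.

```python
# Sketch: pick the fan speed that minimizes total power = fan + leakage,
# where leakage grows with temperature and temperature falls with fan speed.
def fan_power(rpm):
    return 1e-10 * rpm ** 3                        # fan power grows roughly cubically

def cpu_temperature(rpm, dynamic_power_w):
    return 40 + dynamic_power_w * (2000.0 / rpm)   # better cooling at higher rpm

def leakage_power(temp_c):
    return 5.0 * 1.04 ** (temp_c - 40)             # leakage grows ~exponentially with T

def best_fan_speed(dynamic_power_w, speeds=range(2000, 10001, 500)):
    return min(speeds, key=lambda rpm: fan_power(rpm)
               + leakage_power(cpu_temperature(rpm, dynamic_power_w)))

print(best_fan_speed(60))    # with these toy coefficients, a moderate speed wins
print(best_fan_speed(120))   # the optimum shifts upward as dynamic power grows
```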
Abstract:
Current fusion devices consist of multiple diagnostics and hundreds or even thousands of signals. This situation often forces the use of distributed data acquisition systems as the best approach. In this type of distributed system, one of the most important issues is the synchronization between signals, so that the acquired samples of all channels can be correlated in time as accurately as possible. In recent decades, many fusion devices have used different types of video cameras to provide inside views of the vessel during operation and to monitor plasma behavior. The synchronization between each video frame and the rest of the signals acquired from the other diagnostics is essential in order to know the plasma evolution correctly, since all the information can then be analyzed jointly with accurate knowledge of its temporal correlation. The system described in this paper allows timestamping of image frames in a real-time acquisition and processing system using IEEE 1588 clock distribution. The system has been implemented using FPGA-based devices together with an IEEE 1588 synchronized timing card (see Fig. 1). The solution is based on a previous system [1] that allows image acquisition and real-time image processing based on PXIe technology. This architecture is fully compatible with the ITER Fast Controllers [2] and offers integration with EPICS to control and monitor the entire system. However, this set-up is not able to timestamp the acquired frames, since the frame grabber module does not provide any type of timing input (IRIG-B, GPS, PTP). To overcome this limitation, an IEEE 1588 PXI timing device is used to provide an accurate way to synchronize distributed data acquisition systems using the Precision Time Protocol (PTP) IEEE 1588-2008 standard. This local timing device can be connected to a master clock device for global synchronization. The timing device has a timestamp buffer for each PXI trigger line and requires that a software application assign each frame the corresponding timestamp. This step is critical and cannot be achieved if the frame rate is high. To solve this problem, a solution has been designed that distributes the clock from the IEEE 1588 timing card to all FlexRIO devices [3]. This solution uses two PXI trigger lines, which make it possible to assign timestamps to every acquired frame and to register events by hardware in a deterministic way. The system thus provides a solution for timestamping frames so that they can be synchronized with the rest of the signals.
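A minimal, hypothetical sketch of the frame-timestamping idea follows: a hardware trigger latches an IEEE 1588 timestamp for every frame, and software pairs frames with timestamps by arrival order. The names are illustrative, and in the real system this pairing is done deterministically in the FPGA, not in Python.

```python
from collections import deque

timestamp_fifo = deque()   # filled by the 1588 timing card on each trigger
frame_fifo = deque()       # filled by the frame grabber / FlexRIO device

def on_trigger(ptp_timestamp_ns):
    # Hardware latches a PTP timestamp whenever a frame trigger fires.
    timestamp_fifo.append(ptp_timestamp_ns)

def on_frame(frame):
    frame_fifo.append(frame)

def timestamped_frames():
    # Pair frames and timestamps strictly in arrival order; if the two FIFOs
    # ever diverge, a frame or trigger was lost and the stream must be resynced.
    while frame_fifo and timestamp_fifo:
        yield timestamp_fifo.popleft(), frame_fifo.popleft()
```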
Abstract:
There are many distributed applications that require a reliable multicast service, among them distributed databases, distributed operating systems, distributed interactive simulation systems, and applications for the distribution of software, publications or news. Although the application domain of such distributed systems was originally confined to a single subnetwork (for example, a Local Area Network), it later became necessary to extend their applicability to internetworks. The traditional approach to reliable multicast in internetworks has been based mainly on two points: (1) providing many service guarantees in a single protocol (for example reliability, atomicity and ordering), and some of them in different degrees, without taking into account that many multicast applications that require reliability do not need other guarantees; and (2) extending to the multicast environment the solutions already adopted in the unicast environment, without considering their distinctive characteristics. Hence, the multicast reliability problem has been attacked with end-to-end protocols (transport protocols) and with error recovery schemes that are centralized (retransmissions are made from a single point, normally the source) and global (the requested packets are resent to the whole group). In general, these approaches have resulted in protocols that are inefficient in execution time, have scalability problems, do not make optimal use of network resources and are not suitable for delay-sensitive applications. This thesis investigates the reliable multicast problem in internetworks operating in datagram mode and presents a novel way of approaching it: it is better to solve the multicast reliability problem at the network level and to separate reliability from other service guarantees, which can be provided by a higher-level protocol or by the application itself. Following this new approach, a reliable multicast protocol operating at the network level (called RMNP) has been designed. The most representative characteristics of the RMNP are the following: (1) it follows a sender-oriented approach, which makes it possible to achieve a very high degree of reliability; (2) it uses an error recovery scheme that is distributed (retransmissions are made from certain intermediate routers that are always closer to the members than the source itself) and of restricted scope (the reach of the retransmissions is limited to a certain number of members), which makes it possible to optimize the mean distribution delay and to reduce the overhead introduced by retransmissions; (3) it incorporates in certain routers functions for the aggregation and filtering of control packets, which avoid implosion problems and reduce the traffic flowing towards the source. In order to evaluate the behaviour of the protocol, simulation tests have been carried out. The main conclusions are that the RMNP scales correctly with group size, makes optimal use of network resources and is suitable for delay-sensitive applications.
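The router-level ideas highlighted above (local, scoped repair from intermediate routers and aggregation of control packets to avoid implosion) can be sketched as follows; this is an illustrative reconstruction under assumed names and a repair-request style of control packet, not the RMNP specification.

```python
# Sketch of an intermediate router that (a) repairs locally from a cache so
# retransmissions stay close to the members that need them, and (b) aggregates
# repair requests so at most one control packet flows towards the source.
class IntermediateRouter:
    def __init__(self, downstream_links):
        self.downstream_links = downstream_links
        self.cache = {}            # seq -> packet seen on the way down
        self.pending = {}          # seq -> set of downstream links that asked

    def on_data(self, seq, packet):
        self.cache[seq] = packet
        for link in self.downstream_links:
            link.send(("DATA", seq, packet))

    def on_repair_request(self, seq, from_link):
        if seq in self.cache:
            from_link.send(("DATA", seq, self.cache[seq]))   # local, scoped repair
            return None                                       # nothing goes upstream
        first = seq not in self.pending
        self.pending.setdefault(seq, set()).add(from_link)
        return ("REQUEST", seq) if first else None            # aggregated request

    def on_repair(self, seq, packet):
        self.cache[seq] = packet
        for link in self.pending.pop(seq, ()):                # answer only those who asked
            link.send(("DATA", seq, packet))
```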
Abstract:
The complex event processing (CEP) paradigm addresses the challenge of analysing large amounts of data in real time, for example monitoring stock prices or the state of road traffic. In this paradigm the incoming events must be processed on the fly, without being stored, because the data volume is too high and low latency is required. For this purpose, distributed systems with high scalability, high throughput and low latency are used. Such systems are usually complex and require a long learning time, and many of them lack a declarative query language in which to express the computation to be performed over the incoming events. In this work, an SQL-like declarative query language and a compiler have been developed; the compiler translates this language into the native language of the massive event processing system. Since the language is similar to SQL, with which a great number of developers are already familiar, learning it requires little effort. Its use thus reduces execution failures in the queries deployed on the distributed system, while abstracting the programmer from the details of that system.
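A toy sketch of the compilation idea, parsing a tiny SQL-like continuous query and turning it into a chain of streaming operators (filter, then project), is shown below; the grammar, operator names and target representation are assumptions and do not correspond to the thesis' actual language or compiler.

```python
import re

def compile_query(query):
    # Parse a minimal "SELECT ... FROM ... WHERE attr op value" continuous query.
    m = re.match(r"SELECT (.+) FROM (\w+) WHERE (\w+) ([<>]=?) (\S+)", query)
    fields, stream, attr, op, value = m.groups()
    fields = [f.strip() for f in fields.split(",")]
    value = float(value)
    compare = {"<": lambda a: a < value, ">": lambda a: a > value,
               "<=": lambda a: a <= value, ">=": lambda a: a >= value}[op]

    def pipeline(events):
        for e in events:                         # events arriving on stream `stream`
            if compare(e[attr]):                 # WHERE  -> filter operator
                yield {f: e[f] for f in fields}  # SELECT -> project operator
    return pipeline

run = compile_query("SELECT symbol, price FROM quotes WHERE price > 100")
ticks = [{"symbol": "ACME", "price": 101.5}, {"symbol": "FOO", "price": 99.0}]
print(list(run(ticks)))   # [{'symbol': 'ACME', 'price': 101.5}]
```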
Abstract:
The cis-regulatory systems that control developmental expression of two sea urchin genes have been subjected to detailed functional analysis. Both systems are modular in organization: specific, separable fragments of the cis-regulatory DNA, each containing multiple transcription factor target sites, execute particular regulatory subfunctions when associated with reporter genes and introduced into the embryo. The studies summarized here were carried out on the CyIIIa gene, expressed in the embryonic aboral ectoderm, and on the Endo16 gene, expressed in the embryonic vegetal plate, archenteron, and then midgut. The regulatory systems of both genes include modules that control particular aspects of temporal and spatial expression, and in both the territorial boundaries of expression depend on a combination of negative and positive functions. In both genes different regulatory modules control early and late embryonic expression. Modular cis-regulatory organization is widespread in developmentally regulated genes, and we present a tabular summary that includes many examples from mouse and Drosophila. We regard cis-regulatory modules as units of developmental transcription control, and also of evolution, in the assembly of transcription control systems.
Abstract:
Smart grids represent the new generation of electric power systems, combining advances in computing, communication systems, distributed processing and artificial intelligence to provide new functionalities, such as real-time monitoring of electricity demand and consumption and large-scale management of distributed generators, among others, on top of a distributed control system over the electric grid. This structure profoundly changes the way power systems, especially distribution systems, are planned and operated today, and the pursuit of these functionalities opens up interesting possibilities for research and development. With this scenario in view, the present work uses a multi-agent-systems approach to simulate this kind of electric power distribution system, considering distinct control options. The use of multi-agent system technology for the simulation is based on the characterization of smart grids as distributed systems, which is also carried out in this work. To validate the proposal, three functionalities expected of such grids were simulated: classification of non-linear loads; voltage profile management; and topological reconfiguration aimed at reducing electrical losses. All the modelling and development of these studies are reported here. Finally, the work identifies multi-agent systems as a technology to be employed both for research on and for the implementation of these electric grids.
Abstract:
In the global, digital market, companies are challenged to find innovative ways to cope with increasing competitive pressure. Competition is one of the forms of interaction between organizations, alongside collaboration and cooperation. Cooperation and collaboration are ways of producing jointly, increasing the companies' capacity to meet demand. The challenges most often found in the market are reducing costs, always ensuring quality, and customizing products and services. A common business phenomenon today is the outsourcing of manufacturing and logistics to domestic and foreign suppliers and service providers. This outsourcing intrinsically leads to a geographical spreading of production to new centres that offer advantages in energy resources, raw materials and knowledge production. Outsourcing can also take the form of collaboration and cooperation, for which companies need to establish some form of trust among themselves. In the virtual enterprise concept, trust is widely discussed as a prerequisite for collaboration and/or cooperation between companies. The objective of this work is to propose and model a tool that meets companies' needs for collaboration and/or cooperation, taking their trust requirements into account. Companies are viewed here as production systems, with their business management layers, according to the ANSI/ISA 95 standard. In addition, an interpretation of Petri nets called the productive Petri net is introduced as a tool to describe, in workflow form, the production process carried out by the companies. The modelling of this production system architecture uses distributed systems techniques such as service-oriented architecture. One of the focuses is the requirements for new product development, which involves the challenge of customization. Tests were carried out to evaluate the workflow proposal with people having different levels of knowledge about the processes, whether in manufacturing or in other areas. The proposed architecture was submitted to an analytical study of the hypotheses raised in the collaborative environment.
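A minimal sketch of a place/transition Petri net used as a workflow is shown below; the productive Petri net interpretation mentioned above adds domain-specific semantics on top of this basic model, and the step names in the example are illustrative assumptions.

```python
# Sketch of a place/transition Petri net: the marking records where the work
# is, and a transition fires only when all of its input places hold tokens.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)        # place -> token count
        self.transitions = {}               # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A two-step workflow: plan an order, then manufacture once materials are there.
net = PetriNet({"order_received": 1, "materials_available": 1})
net.add_transition("plan", ["order_received"], ["plan_ready"])
net.add_transition("manufacture", ["plan_ready", "materials_available"], ["product_done"])
net.fire("plan")
net.fire("manufacture")
print(net.marking)   # one token ends up in 'product_done'
```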
Abstract:
Despite the gargantuan stakes that mergers and acquisitions represent, global companies' success rate at integrating organizations has been dismal, incurring billions of dollars in lost shareholder value. International human resources' handling of the cultural integration process is the principal differentiator between success and failure. This Capstone project proposes a process for developing cultural integration mechanisms, known as glue technology, and provides a step-by-step process map for execution through four phases. During planning, the need for glue technology is defined. Through analysis, rewards systems are assessed, and a strategy is chosen. In implementation planning, unanimous executive commitment must be secured. Last is measurement based on the integration plan's objectives. Enabling mechanisms such as removing negative influencers, speed, and communication are discussed.
Abstract:
Integration is currently a key factor in intelligent transportation systems (ITS), especially because of the ever-increasing service demands originating from the ITS industry and ITS users. The current ITS landscape is made up of multiple tightly coupled technologies whose interoperability is extremely low, which limits the generation of ITS services. Given this fact, novel information technologies (IT) based on the service-oriented architecture (SOA) paradigm have begun to introduce new ways to address this problem. The SOA paradigm allows the construction of loosely coupled distributed systems that can help to integrate the heterogeneous systems that are part of ITS. In this paper, we focus on developing an SOA-based model for integrating information technologies into ITS to achieve ITS service delivery. To develop our model, the ITS technologies and services involved were identified, catalogued, and decoupled. In doing so, we applied our SOA-based model to integrate all of the ITS technologies and services, ranging from the lowest-level technical components, such as roadside unit as a service (RS S), to the most abstract ITS services that will be offered to ITS users (value-added services). To validate our model, a functionality case study that included all of the components of our model was designed.