860 results for Distributed Systems
Abstract:
Years have passed since the introduction of the Dynamic Network Optimization (DNO) concept, yet DNO development is still in its infancy, largely because no breakthrough has been made in reducing the lengthy optimization runtime. Our previous work, a distributed parallel solution, achieved a significant speed gain. To cope with the increased optimization complexity driven by the uptake of smartphones and tablets, however, this paper examines the remaining areas for improvement and presents a novel asynchronous distributed parallel design that minimizes inter-process communication. The new approach is implemented and applied to real-life projects, whose results demonstrate a speedup of 7.5 times on a 16-core distributed system, compared with 6.1 times for our previous solution, with no degradation in the optimization outcome. This is a solid step towards the realization of DNO.
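As a rough, hypothetical illustration of the asynchronous idea (not the paper's DNO optimizer), the Python sketch below contrasts a blocking exchange with a worker that polls its peer's channel and carries on with stale data when nothing new has arrived, which is what keeps inter-process communication off the critical path:

```python
# Toy sketch of an asynchronous worker loop (illustrative only): instead of
# blocking on a neighbour's update every iteration, the worker polls its pipe
# and proceeds with stale data when nothing has arrived, communicating sparsely.
from multiprocessing import Process, Pipe

def worker(conn, iterations=100):
    local_state = 0.0
    for step in range(iterations):
        local_state += 1.0                    # placeholder for local optimization work
        if conn.poll():                       # non-blocking check for a peer update
            local_state = 0.5 * (local_state + conn.recv())
        if step % 10 == 0:                    # exchange state only occasionally
            conn.send(local_state)
    print("final state:", local_state)

if __name__ == "__main__":
    a, b = Pipe()
    workers = [Process(target=worker, args=(c,)) for c in (a, b)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```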
Abstract:
The evolution of commodity computing led to the possibility of efficiently using interconnected machines to solve computationally intensive tasks that were previously solvable only on expensive supercomputers. This, however, required new methods for process scheduling and distribution that take into account network latency, communication cost, heterogeneous environments, and distributed-computing constraints. Efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Knowledge and prediction of application behavior are therefore essential for effective scheduling. In this paper, we review the evolution of scheduling approaches, focusing on distributed environments. We also evaluate current approaches for process-behavior extraction and prediction, aiming to select an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application-behavior prediction that considers the chaotic properties of such behavior and automatically detects critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The results demonstrate that predicting process behavior is essential for efficient scheduling in large-scale, heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves efficient for online prediction due to its low computational cost and good precision.
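A minimal sketch of how a predicted runtime can drive placement decisions (illustrative only; the paper's chaotic-behavior model is far richer). The `predict` callback, job names, and host speeds below are hypothetical:

```python
# Toy predictive scheduler: place each process on the host that, according to
# a runtime prediction, finishes it earliest, instead of a blind round-robin.
import heapq

def predictive_schedule(jobs, hosts, predict):
    """jobs: list of job names; hosts: list of host names;
    predict(job, host) -> estimated runtime in seconds."""
    ready = [(0.0, host) for host in hosts]      # (time the host becomes free, host)
    heapq.heapify(ready)
    placement = {}
    for job in jobs:
        free_at, host = heapq.heappop(ready)     # earliest-available host
        finish = free_at + predict(job, host)
        placement[job] = (host, finish)
        heapq.heappush(ready, (finish, host))
    return placement

# Example with a made-up prediction: host "fast" runs every job twice as quickly.
runtimes = {"render": 8.0, "index": 3.0, "train": 20.0}
speed = {"fast": 2.0, "slow": 1.0}
plan = predictive_schedule(list(runtimes), ["fast", "slow"],
                           lambda j, h: runtimes[j] / speed[h])
print(plan)
```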
Abstract:
Carbon nanotubes rank among the potential candidates for a new family of nanoscopic devices, in particular for sensing applications. Although defects in carbon nanotubes act as binding sites for foreign species, our current level of control over the fabrication process does not allow one to choose where these binding sites will actually be positioned. In this work we present a theoretical framework for accurately calculating the electronic and transport properties of long, disordered carbon nanotubes containing a large number of binding sites randomly distributed along the sample. The method combines the accuracy and functionality of ab initio density functional theory, used to determine the electronic structure, with a recursive Green's function method. We apply this methodology to nitrogen-rich carbon nanotubes, first considering different types of defects and then demonstrating how our simulations can aid sensor design by allowing one to compute the transport properties of realistic nanotube devices containing a large number of randomly distributed binding sites.
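The flavour of such a transport calculation can be conveyed by a much simpler toy: Landauer transmission through a 1D tight-binding chain with randomly placed on-site "binding sites", using lead self-energies and direct inversion rather than the paper's ab initio plus recursive Green's function machinery. All parameters below are arbitrary illustrations:

```python
# Toy Landauer calculation (not the paper's method): transmission through a 1D
# tight-binding chain whose on-site energies are shifted at randomly placed
# defect sites, with semi-infinite 1D leads entering via self-energies.
import numpy as np

def transmission(E, onsite, t=1.0, eta=1e-9):
    n = len(onsite)
    H = np.diag(onsite).astype(complex)
    H += np.diag([t] * (n - 1), 1) + np.diag([t] * (n - 1), -1)
    # Surface Green's function of a semi-infinite 1D lead (inside the |E| < 2t band).
    g_surf = (E - 1j * np.sqrt(4 * t**2 - E**2 + 0j)) / (2 * t**2)
    sigma = t**2 * g_surf
    Sigma_L = np.zeros((n, n), complex); Sigma_L[0, 0] = sigma
    Sigma_R = np.zeros((n, n), complex); Sigma_R[-1, -1] = sigma
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - Sigma_L - Sigma_R)
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real

rng = np.random.default_rng(0)
onsite = np.zeros(200)
defects = rng.choice(200, size=20, replace=False)   # randomly placed binding sites
onsite[defects] = 0.5                               # on-site shift at each defect
print("T(E=0.3) =", transmission(0.3, onsite))
```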
Abstract:
This paper applies the concepts and methods of complex networks to the modeling and simulation of master-slave distributed real-time systems, introducing an upper bound on the allowable delivery time of the packets carrying computation results. Two representative interconnection models are considered, uniformly random and scale-free (Barabási-Albert), including the presence of background packet traffic. The results identify the uniformly random interconnectivity scheme as substantially more efficient than its scale-free counterpart. Moreover, increased latency tolerance of the application provides no help under congestion.
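A crude, hypothetical proxy for the topology comparison (not the paper's real-time simulator): building a uniformly random and a Barabási-Albert graph of similar density and measuring how many hops separate a master node from its slaves:

```python
# Compare master-to-slave hop counts on a uniformly random (Erdos-Renyi) graph
# versus a Barabasi-Albert scale-free graph of the same size and similar density.
import networkx as nx

n, m = 200, 3                                    # nodes; BA attachment parameter
p = 2 * m / (n - 1)                              # ER edge probability for ~same mean degree
random_net = nx.erdos_renyi_graph(n, p, seed=1)
scale_free = nx.barabasi_albert_graph(n, m, seed=1)

def mean_hops_to_master(G, master=0):
    # Nodes unreachable from the master (possible in the random graph) are skipped.
    dist = nx.single_source_shortest_path_length(G, master)
    slaves = [d for node, d in dist.items() if node != master]
    return sum(slaves) / len(slaves)

print("uniformly random:", mean_hops_to_master(random_net))
print("scale-free (BA): ", mean_hops_to_master(scale_free))
```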
Abstract:
A Petri net is usually applied as a modeling tool for RFID systems. This paper, instead, presents a different approach that embeds the Petri net in the RFID system itself. The approach, called elementary Petri net inside an RFID distributed database (PNRD), is a first step towards improved integration of RFID and control systems: it is based on a formal data structure that identifies and updates the product state during real-time process execution, allowing automatic discovery of unexpected events during tag data capture. The approach has two main features: RFID tags store the expected object process and the last identified product state, and Petri net analysis is applied to automatically update the last-product-state registry during reader data capture. In Petri net terms, RFID reader data capture can be viewed as a direct check of whether a specific transition of a specific workflow is enabled at that location; accordingly, RFID readers store the Petri net control vectors associated with each tag ID. This paper presents the cornerstones of PNRD and an implementation example in a software system called DEMIS (Distributed Environment in Manufacturing Information Systems).
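The Petri net bookkeeping that PNRD builds on can be sketched in a few lines (an illustrative toy, not the DEMIS implementation; the places and transitions below are made up):

```python
# A transition is enabled when the current marking covers its input places, and
# firing it yields the next marking m' = m - Pre[:, t] + Post[:, t], i.e. the
# next expected product state observed by a reader.
import numpy as np

Pre = np.array([[1, 0],      # place "received" feeds transition 0
                [0, 1],      # place "assembled" feeds transition 1
                [0, 0]])     # place "shipped" feeds nothing
Post = np.array([[0, 0],
                 [1, 0],     # transition 0 produces a token in "assembled"
                 [0, 1]])    # transition 1 produces a token in "shipped"

def fire(marking, t):
    if np.all(marking >= Pre[:, t]):             # enabled: enough tokens upstream
        return marking - Pre[:, t] + Post[:, t]
    raise ValueError(f"unexpected event: transition {t} not enabled")

m = np.array([1, 0, 0])                          # tag starts in "received"
m = fire(m, 0)                                   # reader observes transition 0
m = fire(m, 1)                                   # reader observes transition 1
print(m)                                         # [0 0 1] -> product "shipped"
```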
Abstract:
Agent-oriented cooperation techniques and standardized electronic healthcare record exchange protocols can be used to combine information regarding different facets of a therapy received by a patient from different healthcare providers at different locations. Provenance is an innovative approach to trace events in complex distributed processes, dependencies between such events, and associated decisions by human actors. We focus on three aspects of provenance in agent-mediated healthcare systems: first, we define the provenance concept and show how it can be applied to agent-mediated healthcare applications; second, we investigate and provide a method for independent and autonomous healthcare agents to document the processes they are involved in without directly interacting with each other; and third, we show that this method solves the privacy issues of provenance in agent-mediated healthcare systems.
Abstract:
As enterprises grow and the need to share information across departments and business areas becomes more critical, companies are turning to integration as a method for interconnecting heterogeneous, distributed, and autonomous systems. Whether the sales application needs to interface with the inventory application, or the procurement application needs to connect to an auction site, it seems that any application can be made better by integrating it with other applications. Integration between applications can face several difficulties, because applications may not have been designed and implemented with integration in mind. With regard to integration, two-tier software systems, composed of a database tier and a “front-end” (interface) tier, have shown limitations. To overcome them, three-tier systems were proposed in the literature. By adding a middle tier (referred to as middleware) between the database tier and the “front-end” tier (or simply the application), three main benefits emerge. First, dividing a software system into three tiers enables increased integration capabilities with other systems. Second, modifications to individual tiers can be carried out without necessarily affecting the other tiers and integrated systems. Third, as a consequence of the first two, less maintenance is required in the software system and in all integrated systems. Concerning software development in three tiers, this dissertation focuses on two emerging technologies, the Semantic Web and Service-Oriented Architecture, combined with middleware. Blending these two technologies with middleware resulted in the Swoat framework (Service and Semantic Web Oriented ArchiTecture), which provides four synergistic advantages: (1) it allows the creation of loosely coupled systems, decoupling the database from the “front-end” tiers and therefore reducing maintenance; (2) the database schema is transparent to the “front-end” tiers, which are aware only of the information model (or domain model) that describes what data is accessible; (3) integration with other heterogeneous systems is enabled through services provided by the middleware; (4) a service request by the “front-end” tier focuses on ‘what’ data is needed rather than on ‘where’ and ‘how’ it is stored, reducing application development time.
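A hedged sketch of the middle-tier idea follows (the names, schema, and use of sqlite3 are hypothetical, not Swoat's actual API): the front-end tier asks a service for a domain-level "what", while the "where" and "how" of the database schema stay inside the middleware:

```python
# The front end requests a domain concept ("customers in a city"); only the
# middle tier knows the table layout, so the schema can change without
# touching the front-end tier.
import sqlite3

class CustomerService:                     # middle tier: owns the schema mapping
    def __init__(self, conn):
        self.conn = conn

    def customers_in_city(self, city):     # the "what" the front end asks for
        rows = self.conn.execute(          # the "where"/"how" stays hidden here
            "SELECT name, email FROM customer WHERE city = ?", (city,))
        return [{"name": n, "email": e} for n, e in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name TEXT, email TEXT, city TEXT)")
conn.execute("INSERT INTO customer VALUES ('Ana', 'ana@example.com', 'Lisboa')")
front_end_view = CustomerService(conn).customers_in_city("Lisboa")
print(front_end_view)                      # the front end never saw SQL or tables
```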
Abstract:
The objective of this study was to elucidate population fluctuations of spider and ant species in forest fragments and in adjacent soybean and corn crops under no-tillage and conventional tillage systems, and their correlations with meteorological factors. From November 2004 to April 2007, these arthropods were sampled at Guaira, São Paulo state, biweekly during the cropping season and monthly between crops. At each experimental site, pitfall traps were distributed along two 200 m transects, with 100 m in the crop and 100 m in the forest fragment. Temperature and rainfall were found to have major impacts on the fluctuations in population density of ants of the genus Pheidole in soybean and corn crops grown under both conventional tillage and no-tillage systems.
Abstract:
The objective of this work was to characterize the spatial variability of soil bulk density (Ds), soil water content (θ), and total porosity (Pt) under two sugarcane harvest management systems, with and without burning, in a Latossolo Vermelho (Oxisol) in the 0-0.20 m layer. The study area is located in the municipality of Rio Brilhante-MS, at Usina Eldorado. The plot in each field consisted of a grid 180 m long and 145.6 m wide, with 90 points arranged in nine columns by ten rows, each point 20 m from its neighbor. Soil samples were collected from the 0-0.20 m layer in the 2007/2008 and 2008/2009 growing seasons. The harvest system with burning showed higher bulk density than the mechanized system in both analysis periods. Soil water content, as well as porosity, increased in proportion to the decrease in bulk density from the burning harvest system to the mechanized one.
Abstract:
This paper presents, as a set of guidelines, how to apply the conservative distributed simulation paradigm (CMB protocol) to develop efficient applications. Using these guidelines, even a user with little experience in distributed simulation and computer architecture can obtain good performance from distributed simulations that use conservative synchronization protocols for parallel processes. The guidelines focus on a specific application domain, the performance evaluation of computer systems, considering models with coarse granularity and few logical processes, running over two platforms: parallel (a high-performance communication environment) and distributed (a low-performance communication environment).
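For readers unfamiliar with the protocol, the conservative (CMB) rule can be sketched as follows (illustrative only, not the guidelines' code): a logical process executes queued events only up to the smallest timestamp announced on its input channels, and null messages carrying clock-plus-lookahead bounds let neighbours advance without deadlock:

```python
# Minimal sketch of a conservative (CMB-style) logical process: events are safe
# to execute only up to the minimum input-channel clock; null messages raise
# that bound so neighbours can keep advancing.
import heapq

class LogicalProcess:
    def __init__(self, name, inputs, lookahead):
        self.name = name
        self.clock = 0.0
        self.lookahead = lookahead
        self.events = []                               # heap of (timestamp, payload)
        self.channel_clock = {ch: 0.0 for ch in inputs}

    def receive(self, channel, timestamp, payload=None):
        self.channel_clock[channel] = timestamp        # null message if payload is None
        if payload is not None:
            heapq.heappush(self.events, (timestamp, payload))

    def safe_time(self):
        return min(self.channel_clock.values())        # nothing earlier can still arrive

    def advance(self):
        executed = []
        while self.events and self.events[0][0] <= self.safe_time():
            ts, payload = heapq.heappop(self.events)
            self.clock = ts
            executed.append(payload)
        return executed, self.clock + self.lookahead    # bound to announce downstream

lp = LogicalProcess("cpu", inputs=["disk", "net"], lookahead=0.5)
lp.receive("disk", 3.0, "io-done")
lp.receive("net", 2.0)                   # null message: nothing will arrive before t=2.0
print(lp.advance())                      # ([], 0.5): the event at t=3.0 is not yet safe
lp.receive("net", 4.0)                   # a later null message raises the bound
print(lp.advance())                      # (['io-done'], 3.5): now safe to execute
```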
Abstract:
To simplify computer management, various administration systems based on wired connections adopt advanced techniques to manage software configuration. Nevertheless, the strong coupling between hardware and software makes such management machine-specific, besides penalizing computational mobility and ubiquity. These issues degrade scalability, flexibility, and the ease of installing and maintaining distributed applications. This article presents an environment for centralized management over wireless networks, named WSE-OS (Wireless Sharing Environment - Operating Systems): a model based on Virtual Desktop Infrastructure (VDI) that combines virtualization techniques and secure remote access to create a distributed architecture serving as the basis for a management system. WSE-OS can replicate operating system images over a wireless network and offers hardware abstraction to its clients, making management more flexible and independent of wired connections. Results indicate that WSE-OS can distribute and execute operating system images on client computers from a single software configuration. WSE-OS can also be used as a tool for managing operating systems in a wireless network.
Abstract:
This paper presents an interior point method for the long-term generation scheduling of large-scale hydrothermal systems. The problem is formulated as a nonlinear program, owing to the nonlinear representation of hydropower production and of the thermal fuel cost functions. Sparsity-exploitation techniques and a heuristic procedure for computing the interior point method's search directions have been developed. Numerical tests on case studies with systems of different dimensions and inflow scenarios were carried out to evaluate the proposed method. Three systems were tested, the largest being the Brazilian hydropower system, with 74 hydro plants distributed over several cascades. The results show that the proposed method is an efficient and robust tool for solving the long-term generation scheduling problem.
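For orientation, one common simplified way to write such a long-term scheduling problem is sketched below; the notation is generic and not necessarily the authors' exact formulation. Here ψ_t is the thermal fuel cost of meeting the residual demand D_t, v is reservoir storage, a inflow, q turbined outflow, s spillage, h_i the nonlinear net head, and Ω_i the set of plants immediately upstream of plant i:

```latex
\begin{align}
\min_{q,s}\;& \sum_{t=1}^{T} \psi_t\!\Big(D_t - \sum_{i=1}^{H} p_{i,t}\Big)
  && \text{(thermal cost of residual demand)}\\
\text{s.t.}\;& v_{i,t+1} = v_{i,t} + a_{i,t} - q_{i,t} - s_{i,t}
  + \sum_{j\in\Omega_i}\big(q_{j,t} + s_{j,t}\big)
  && \text{(water balance per reservoir)}\\
& p_{i,t} = k_i\, h_i(v_{i,t}, q_{i,t})\, q_{i,t}
  && \text{(nonlinear hydropower production)}\\
& \underline{v}_i \le v_{i,t} \le \overline{v}_i,\qquad
  \underline{q}_i \le q_{i,t} \le \overline{q}_i
  && \text{(operational limits)}
\end{align}
```

The nonlinear head term h_i and the nonlinear fuel cost ψ_t are what make the problem a nonlinear program rather than a linear one, which is why an interior point method with sparsity exploitation is attractive at this scale.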