832 results for Centralized and Distributed Multi-Agent Routing Schemas
Abstract:
Staphylococcus aureus is the main agent of infections during peritoneal dialysis (PD). The presence of S. aureus in the nasal cavity has been extensively studied and suggested as a risk factor for dialysis-related infections, whereas coagulase-negative Staphylococcus (CNS) species are frequently considered part of the normal human microbiota. The aim of this study was to identify Staphylococcus in the nasal cavity, pericatheter skin and peritoneal effluent of PD patients, as well as to evaluate the evolution of in vitro antimicrobial activity. Thirty-two chronic PD patients were followed for 12 months and had nasal and pericatheter skin samples collected for culture. When peritonitis was detected, samples of the peritoneal effluent were also collected for culture. The activity of several antimicrobial drugs (penicillin G, oxacillin, cephalothin, ofloxacin, netilmicin and vancomycin) against different Staphylococcus species was measured using the agar diffusion assay (Kirby-Bauer method). Isolates were grouped into S. aureus, S. epidermidis and other CNS species in order to determine the in vitro resistance level. S. epidermidis resistance to oxacillin progressively increased during the study period (p < 0.05). Resistance to ofloxacin was negligible, whereas resistance to netilmicin and vancomycin was not detected. Of the oxacillin-resistant isolates (n = 74), 83% were S. epidermidis, 13% other CNS and 4% S. aureus (p < 0.05). Among multidrug-resistant strains (n = 45), 82% were S. epidermidis, 13% other CNS, and 5% S. aureus (p < 0.05). This study shows the relevance of oxacillin resistance and of CNS multidrug resistance, particularly of S. epidermidis, in PD patients.
Abstract:
STATEMENT OF PROBLEM: Despite careful procedures, master stone dies may be damaged during laboratory work. The dentist routinely adjusts castings because the marginal fit of the casting is not as accurate as it is on the dies. PURPOSE: This study evaluated a technique of internal adjustment of castings using duplicated stone dies and a disclosing agent to reduce marginal fit discrepancy. MATERIAL AND METHODS: Thirty-two nickel-chromium copings were fabricated, simulating standard clinical and laboratory procedures with 2 variables: tooth preparation convergence angles of 6 and 18 degrees, with or without internal relief. Master stone dies and their duplicates were selected for coping construction and internal adjustment, respectively. A specimen positioning device was coupled with a toolmaker's microscope to allow reproducibility of measurements. Each coping was evaluated at 8 locations along its marginal perimeter, before and after internal adjustment. RESULTS: The marginal fit discrepancy of the copings was significantly reduced by the internal adjustment technique (mean > 52%) for all experimental groups. Tooth preparations with greater convergence and internally relieved castings showed better marginal fit. CONCLUSION: The casting internal adjustment technique using duplicated stone dies and a disclosing agent substantially reduced marginal fit discrepancy.
Abstract:
To simplify computer management, many system administrators are adopting advanced techniques to manage the software configuration of enterprise computer networks, but the tight coupling between hardware and software makes every PC an individually managed entity, lowering scalability and increasing the cost of managing hundreds or thousands of PCs. Virtualization is an established technology; however, its use has been focused on server consolidation and virtual desktop infrastructure rather than on managing distributed computers over a network. This paper discusses the feasibility of the Distributed Virtual Machine Environment, a new approach for enterprise computer management that combines virtualization and a distributed system architecture as the basis of the management architecture. © 2008 IEEE.
Abstract:
Supervising and controlling the many processes involved in petroleum production is both dangerous and complex. Herein, we propose a multiagent supervisory and control system for handling continuous processes like those in the chemical and petroleum industries. In its architecture, there are agents responsible for managing data production and analysis, as well as the production equipment. Fuzzy controllers were used as control agents. The application of a fuzzy control system to a submarine separation process in an offshore petroleum production installation is described. © 2008 IEEE.
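As a concrete illustration of the control-agent idea, the sketch below shows a minimal fuzzy controller that maps the level error of a separator vessel to a valve-opening command. The membership functions, rule base and variable names are illustrative assumptions, not the controllers used in the system described above.

```python
# Minimal, assumption-laden sketch of a fuzzy control agent: it maps the
# level error of a separator vessel to a valve opening in [0, 1].

def ramp_down(x, a, b):
    """1 below a, 0 above b, linear in between."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def ramp_up(x, a, b):
    """0 below a, 1 above b, linear in between."""
    return 1.0 - ramp_down(x, a, b)

def tri(x, a, b, c):
    """Triangular membership with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_valve_command(level_error):
    """level_error in [-1, 1] (measured minus setpoint); returns opening in [0, 1]."""
    # Fuzzification into three linguistic terms.
    low = ramp_down(level_error, -0.5, 0.0)   # level well below setpoint
    ok = tri(level_error, -0.5, 0.0, 0.5)     # level near setpoint
    high = ramp_up(level_error, 0.0, 0.5)     # level well above setpoint
    # Rule base: LOW -> nearly closed, OK -> half open, HIGH -> nearly full open.
    # Sugeno-style weighted average of singleton outputs as defuzzification.
    weights, outputs = [low, ok, high], [0.1, 0.5, 0.9]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.5

if __name__ == "__main__":
    for e in (-0.8, -0.2, 0.0, 0.3, 0.9):
        print(f"error={e:+.2f} -> valve opening={fuzzy_valve_command(e):.2f}")
```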
Abstract:
Multi-relational data mining has emerged as an alternative for the analysis of structured data, such as relational databases. Unlike traditional algorithms, multi-relational proposals allow mining multiple tables directly, avoiding costly join operations. This paper presents a comparative study of the traditional Patricia Mine algorithm and its proposed multi-relational counterpart, MR-Radix, in order to evaluate the performance of the two approaches for mining association rules in relational databases. This study presents two original contributions: the proposal of the multi-relational algorithm MR-Radix, which is efficient for relational databases in terms of both execution time and memory usage, and empirical evidence of the multi-relational approach's performance advantage over several tables, since it avoids costly join operations across multiple tables. © 2011 IEEE.
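As a toy illustration of mining across tables without materializing a join, the sketch below computes the support of a cross-table pattern by scanning two related tables separately and linking them on their key. The tables, attributes and support definition are invented for illustration and do not reproduce the MR-Radix or Patricia Mine data structures.

```python
# Toy illustration (invented data) of evaluating a cross-table pattern
# (customer city, purchased item) without materializing the join.
customers = [  # (customer_id, city)
    (1, "Sao Paulo"), (2, "Campinas"), (3, "Sao Paulo"),
]
orders = [     # (customer_id, item)
    (1, "book"), (1, "pen"), (2, "book"), (3, "book"), (3, "lamp"),
]

def support(city, item):
    """Fraction of customers living in `city` that bought `item`."""
    city_ids = {cid for cid, c in customers if c == city}                   # scan table 1
    buyers = {cid for cid, it in orders if it == item and cid in city_ids}  # scan table 2
    return len(buyers) / len(customers)

print(support("Sao Paulo", "book"))  # 2 of 3 customers satisfy the pattern
```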
Abstract:
Multi-relational data mining enables pattern mining from multiple tables. Existing multi-relational association rule mining algorithms are not able to process large volumes of data, because the amount of memory required exceeds the amount available. The proposed algorithm, MR-Radix, presents a framework that optimizes memory usage. It also uses the concept of partitioning to handle large volumes of data. The original contribution of this proposal is to enable superior performance compared to other related algorithms and, moreover, to successfully complete the task of mining association rules in large databases, bypassing the problem of limited available memory. One of the tests showed that MR-Radix uses fourteen times less memory than GFP-growth. © 2011 IEEE.
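The partitioning idea referred to above can be sketched as follows: mine each memory-sized partition of the data on its own, keep the union of locally frequent itemsets as candidates, and re-count those candidates in a single second pass over the full data. This follows the classic partition scheme for association rules and is only an assumed illustration, not MR-Radix itself.

```python
# Partition-based association rule mining sketch: only candidates that are
# frequent in some partition need a global re-count, so each partition can
# be mined entirely in memory.
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_size=2):
    """Itemsets (up to max_size items) meeting min_support within `transactions`."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        for k in range(1, max_size + 1):
            for combo in combinations(sorted(t), k):
                counts[combo] += 1
    return {itemset for itemset, c in counts.items() if c / n >= min_support}

def partitioned_mining(transactions, min_support, n_partitions=4):
    size = max(1, len(transactions) // n_partitions)
    candidates = set()
    # Pass 1: itemsets that are frequent in at least one memory-sized partition.
    for start in range(0, len(transactions), size):
        candidates |= frequent_itemsets(transactions[start:start + size], min_support)
    # Pass 2: re-count only the candidates over the whole database.
    n = len(transactions)
    totals = Counter()
    for t in transactions:
        items = set(t)
        for cand in candidates:
            if set(cand) <= items:
                totals[cand] += 1
    return {c: totals[c] / n for c in candidates if totals[c] / n >= min_support}

data = [["a", "b"], ["a", "c"], ["a", "b", "c"], ["b", "c"], ["a", "b"], ["a"]]
print(partitioned_mining(data, min_support=0.5, n_partitions=2))
```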
Abstract:
In this paper, a cross-layer solution for packet size optimization in wireless sensor networks (WSN) is introduced such that the effects of multi-hop routing, the broadcast nature of the physical wireless channel, and the effects of error control techniques are captured. A key result of this paper is that, contrary to conventional wireless networks, in wireless sensor networks longer packets reduce the collision probability. Consequently, an optimization solution is formalized using three different objective functions, i.e., packet throughput, energy consumption, and resource utilization. Furthermore, the effects of end-to-end latency and reliability constraints that may be required by a particular application are investigated. As a result, a generic, cross-layer optimization framework is developed to determine the optimal packet size in WSN. This framework is further extended to determine the optimal packet size in underwater and underground sensor networks. From this framework, the optimal packet sizes under various network parameters are determined.
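The underlying trade-off such a framework optimizes can be illustrated with a deliberately simplified, single-hop model: longer payloads amortize the fixed header but increase the packet error probability, so an intermediate size maximizes efficiency. The bit error rate, header length and absence of retransmissions below are assumptions chosen only to show the shape of the optimization, not the paper's channel model.

```python
# Simplified packet-size trade-off: useful bits delivered per bit sent,
# for a single hop with a fixed header and independent bit errors.
HEADER_BITS = 64
BIT_ERROR_RATE = 1e-4

def efficiency(payload_bits):
    """Expected useful bits delivered per bit sent (single hop, no retransmissions)."""
    total = payload_bits + HEADER_BITS
    p_success = (1.0 - BIT_ERROR_RATE) ** total   # whole packet must arrive intact
    return (payload_bits / total) * p_success

# Sweep candidate payload sizes and pick the one maximizing efficiency.
best = max(range(64, 8192, 16), key=efficiency)
print(f"optimal payload ~ {best} bits, efficiency = {efficiency(best):.3f}")
```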
Abstract:
Current scientific applications have been producing large amounts of data. The processing, handling and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. In order to achieve this goal, distributed storage systems have been considering techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish such a goal, this approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on those properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that this new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
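A minimal sketch of the prediction step, under assumed thresholds and toy access traces: the series of byte offsets is classified by a simple property (near-constant stride versus irregular access) and a predictor is selected accordingly. This is not the OptorSim-based implementation evaluated in the paper.

```python
# Classify an access time series, then pick a predictor: extrapolate the
# stride for near-sequential traces, fall back to the mean otherwise.
from statistics import mean, pstdev

def predict_next_offset(offsets):
    """Predict the next byte offset an application will read."""
    diffs = [b - a for a, b in zip(offsets, offsets[1:])]
    if diffs and pstdev(diffs) < 0.1 * abs(mean(diffs) or 1):
        return offsets[-1] + mean(diffs)   # near-constant stride: extrapolate it
    return mean(offsets)                   # irregular: no useful prefetch hint

sequential = [0, 4096, 8192, 12288, 16384]
random_like = [0, 90000, 1200, 55000, 300]
print(predict_next_offset(sequential))   # ~20480: prefetch the next block
print(predict_next_offset(random_like))  # mean offset: predictor is uninformative
```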
Abstract:
Agent Communication Languages (ACLs) have been developed to provide a way for agents to communicate with each other, supporting cooperation in Multi-Agent Systems. In the past few years many ACLs have been proposed for Multi-Agent Systems, such as KQML and FIPA-ACL. The goal of these languages is to support high-level, human-like communication among agents, exploiting Knowledge Level features rather than symbol-level ones. Adopting these ACLs, and mainly the FIPA-ACL specifications, many agent platforms and prototypes have been developed. Despite these efforts, an important issue in the research on ACLs is still open: how these languages should deal (at the Knowledge Level) with possible failures of agents. Indeed, the notion of Knowledge Level cannot be straightforwardly extended to a distributed framework such as MASs, because problems concerning communication and concurrency may arise when several Knowledge Level agents interact (for example, deadlock or starvation). The main contribution of this Thesis is the design and implementation of NOWHERE, a platform to support Knowledge Level agents on the Web. NOWHERE exploits an advanced Agent Communication Language, FT-ACL, which provides high-level fault-tolerant communication primitives and satisfies a set of well-defined Knowledge Level programming requirements. NOWHERE is well integrated with current technologies, for example providing full integration with Web services. Since it supports different middleware for sending messages, it can be adapted to various scenarios. In this Thesis we present the design and implementation of the architecture, together with a discussion of the most interesting details and a comparison with other emerging agent platforms. We also present several case studies in which we discuss the benefits of programming agents using the NOWHERE architecture, comparing the results with other solutions. Finally, the complete source code of the basic examples can be found in the appendix.
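The flavour of a fault-tolerant ask primitive can be sketched as follows: a query is sent to every agent known to hold the competence, crashed or silent agents are skipped, and the first answer wins. The agent callables and the timeout below are assumptions; FT-ACL's actual primitives and semantics are those defined in the thesis.

```python
# Sketch of a fault-tolerant "ask" over a set of possibly failing agents.
from concurrent.futures import ThreadPoolExecutor, TimeoutError, as_completed

def fault_tolerant_ask(query, agents, timeout=1.0):
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [pool.submit(agent, query) for agent in agents]
        try:
            for future in as_completed(futures, timeout=timeout):
                try:
                    return future.result()   # first successful answer wins
                except Exception:
                    continue                 # this agent failed: ignore it
        except TimeoutError:
            pass                             # remaining agents stayed silent
    raise RuntimeError("no live agent answered the query")

def healthy_agent(query):
    return f"answer to {query!r}"

def crashed_agent(query):
    raise ConnectionError("agent unreachable")

print(fault_tolerant_ask("current separator pressure?", [crashed_agent, healthy_agent]))
```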
Abstract:
Many research fields are pushing the engineering of large-scale, mobile, and open systems towards the adoption of techniques inspired by self-organisation: pervasive computing, but also distributed artificial intelligence, multi-agent systems, social networks, peer-to-peer and grid architectures exploit adaptive techniques to make global system properties emerge in spite of the unpredictability of interactions and behaviour. Such a trend is also visible in coordination models and languages, whenever a coordination infrastructure needs to cope with managing interactions in highly dynamic and unpredictable environments. As a consequence, self-organisation can be regarded as a feasible metaphor for defining a radically new conceptual coordination framework. The resulting framework defines a novel coordination paradigm, called self-organising coordination, based on the idea of spreading coordination media over the network and charging them with services that manage interactions based on local criteria, resulting in the emergence of desired and fruitful global coordination properties of the system. Features like topology, locality, time-reactiveness, and stochastic behaviour play a key role both in the definition of such a conceptual framework and in the consequent development of self-organising coordination services. According to this framework, the thesis presents several self-organising coordination techniques developed during the PhD course, mainly concerning data distribution in tuple-space-based coordination systems. Some of these techniques have also been implemented in ReSpecT, a coordination language for tuple spaces based on logic tuples and reactions to events occurring in a tuple space. In addition, the key role played by simulation and formal verification has been investigated, leading to an analysis of how automatic verification techniques like probabilistic model checking can be exploited to formally prove the emergence of desired behaviours in coordination approaches based on self-organisation. To this end, a concrete case study is presented and discussed.
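A toy sketch (in Python rather than ReSpecT logic tuples) of self-organising data distribution: each tuple space reacts locally to an out operation by probabilistically spreading a copy of the tuple to neighbouring spaces, so tuples diffuse over the network without a central coordinator. The topology, spreading probability and hop bound are illustrative assumptions.

```python
# Toy tuple spaces with a local "spread on out" reaction; not ReSpecT itself.
import random

class TupleSpace:
    def __init__(self, name, spread_probability=0.5):
        self.name = name
        self.tuples = []
        self.neighbours = []
        self.spread_probability = spread_probability

    def out(self, tup, hops=2):
        """Insert a tuple and, with bounded hops, maybe forward copies locally."""
        self.tuples.append(tup)
        if hops > 0:
            for n in self.neighbours:
                if random.random() < self.spread_probability:
                    n.out(tup, hops - 1)

    def rd(self, predicate):
        """Non-destructive read of the first tuple matching the predicate, if any."""
        return next((t for t in self.tuples if predicate(t)), None)

# Ring of four spaces; a tuple emitted at one node diffuses by local decisions.
spaces = [TupleSpace(f"ts{i}") for i in range(4)]
for i, ts in enumerate(spaces):
    ts.neighbours = [spaces[(i + 1) % 4], spaces[(i - 1) % 4]]

spaces[0].out(("temperature", 21.5))
print([ts.name for ts in spaces if ts.rd(lambda t: t[0] == "temperature")])
```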
Abstract:
Current trends in software development are pushing the need to face a multiplicity of diverse activities and interaction styles characterizing complex and distributed application domains, in such a way that the resulting dynamics exhibits some degree of order, i.e. in terms of evolution of the system and desired equilibrium. Autonomous agents and multiagent systems are argued in the literature to be one of the most immediate approaches for describing this kind of challenge. Indeed, agent research seems to converge towards the definition of renewed abstraction tools aimed at better capturing the new demands of open systems. Besides agents, which are assumed to be autonomous entities pursuing a series of design objectives, multiagent systems account for new notions as first-class entities, aimed above all at modeling institutional/organizational entities, introduced for normative regulation, interaction and teamwork management, as well as environmental entities, introduced as resources to further support and regulate agent work. The starting point of this thesis is the recognition that both organizations and environments can be rooted in a unifying perspective. Whereas recent research in agent systems seems to adopt a set of diverse approaches, each specifically addressing at least one of the aspects mentioned above, this work aims at proposing a unifying approach in which both agents and their organizations can be straightforwardly situated in properly designed working environments. Along this line, this work pursues the reconciliation of environments with sociality, social interaction with environment-based interaction, and environmental resources with organizational functionalities, with the aim of smoothly integrating the various aspects of complex and situated organizations in a coherent programming approach. Rooted in the Agents and Artifacts (A&A) meta-model, which has recently been introduced in the context of both agent-oriented software engineering and programming, the thesis promotes the notion of Embodied Organizations, characterized by computational infrastructures attaining a seamless integration between agents, organizations and environmental entities.