937 results for Locality-aware
Abstract:
In Peer-to-Peer (P2P) networks, it is often desirable to assign node IDs which preserve locality relationships in the underlying topology. Node locality can be embedded into node IDs by applying a one-dimensional Hilbert space-filling curve mapping to a vector of network distances from each node to a subset of reference landmark nodes within the network. However, this approach is fundamentally limited: while robustness and accuracy might be expected to improve with the number of landmarks, the effectiveness of the one-dimensional Hilbert curve mapping degrades due to the curse of dimensionality. This work proposes an approach that addresses this issue by using Landmark Multidimensional Scaling (LMDS) to reduce a large set of landmarks to a smaller set of virtual landmarks. This smaller set is postulated to capture the intrinsic dimensionality of the network space, so a space-filling curve applied to these virtual landmarks is expected to produce a better mapping of the node ID space. The proposed approach, the Virtual Landmarks Hilbert Curve (VLHC), is particularly suitable for decentralised systems such as P2P networks. In the experimental simulations, the effectiveness of the methods is measured by the locality preservation derived from node IDs, in terms of latency to nearest neighbours. A variety of realistic network topologies are simulated, and this work provides strong evidence to suggest that VLHC performs better than either Hilbert curves or LMDS used independently of each other.
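To make the landmark-to-ID step concrete, the sketch below is a minimal illustration (not taken from the abstract) assuming just two virtual landmarks, i.e., a 2-D coordinate space: it quantises a node's landmark-distance vector onto a grid and maps it to a one-dimensional Hilbert curve index usable as a locality-preserving node ID. In VLHC, the low-dimensional coordinates would first be derived via LMDS; the grid order and latency cap here are hypothetical parameters.

```python
def hilbert_index(order, x, y):
    """Map grid point (x, y) on a 2**order x 2**order grid to its 1-D
    Hilbert curve index, so nearby points tend to get nearby indices."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        x &= s - 1          # keep only the bits below s
        y &= s - 1
        if ry == 0:         # rotate/reflect the sub-quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def node_id(distances, max_latency_ms=500.0, order=8):
    """Quantise a 2-D vector of (virtual) landmark latencies onto the
    grid and return a Hilbert-curve node ID (hypothetical parameters)."""
    side = 2 ** order
    x, y = (min(int(d / max_latency_ms * (side - 1)), side - 1)
            for d in distances)
    return hilbert_index(order, x, y)

# Nodes close in latency space tend to receive nearby IDs:
print(node_id([40.0, 95.0]), node_id([42.0, 90.0]))
```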
Abstract:
Internet applications such as media streaming, collaborative computing and massively multiplayer games are on the rise. This leads to the need for multicast communication, but unfortunately group communication support based on IP multicast has not been widely adopted, due to a combination of technical and non-technical problems. Therefore, a number of different application-layer multicast schemes have been proposed in the recent literature to overcome these drawbacks. In addition, these applications often behave as both providers and clients of services, being called peer-to-peer applications, and their participants come and go very dynamically. Thus, server-centric architectures for membership management have well-known problems related to scalability and fault tolerance, and even traditional peer-to-peer solutions need some mechanism that takes members' volatility into account. The idea of location awareness is to distribute the participants in the overlay network according to their proximity in the underlying network, allowing better performance. Given this context, this thesis proposes an application-layer multicast protocol, called LAALM, which takes the actual network topology into account in the assembly process of the overlay network. The membership algorithm uses a new metric, IPXY, to provide location awareness through the processing of local information, and it was implemented using a distributed, shared, bi-directional tree. The algorithm also has a sub-optimal heuristic to minimize the cost of the membership process. The protocol has been evaluated in two ways. First, through a simulator developed in this work, which evaluated the quality of the distribution tree using metrics such as outdegree and path length. Second, real-life scenarios were built in the ns-3 network simulator, where the network protocol performance was evaluated using metrics such as stress, stretch, time to first packet and group reconfiguration time.
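As a hedged sketch of the kind of location-aware join heuristic such a protocol needs (not LAALM itself; the abstract does not define the IPXY metric, so a generic proximity value stands in for it, and all names are illustrative), a new member can attach to the nearest candidate parent that still has spare outdegree:

```python
import random

def choose_parent(candidates, max_outdegree=4):
    """Location-aware join heuristic: among candidate parents with
    spare outdegree, attach to the one closest in the underlying
    network. `proximity` stands in for the thesis's IPXY metric."""
    eligible = [c for c in candidates if len(c["children"]) < max_outdegree]
    if not eligible:
        raise RuntimeError("no parent with spare outdegree")
    return min(eligible, key=lambda c: c["proximity"])

# Hypothetical candidates: proximity could be an RTT sample in ms.
candidates = [{"addr": f"10.0.0.{i}",
               "proximity": random.uniform(5, 200),
               "children": []} for i in range(1, 6)]
parent = choose_parent(candidates)
parent["children"].append("new-member")
print("joined under", parent["addr"])
```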
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Twisted quantum field theories on the Groenewold-Moyal plane are known to be nonlocal. Despite this nonlocality, it is possible to define a generalized notion of causality. We show that interacting quantum field theories that involve only couplings between matter fields, or between matter fields and minimally coupled U(1) gauge fields, are causal in this sense. On the other hand, interactions between matter fields and non-Abelian gauge fields violate this generalized causality. We derive the modified Feynman rules emergent from these features. They imply that interactions of matter with non-Abelian gauge fields are not Lorentz- and CPT-invariant.
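For readers unfamiliar with the setting: the Groenewold-Moyal plane is the noncommutative spacetime generated by coordinates obeying the commutation relation below, and its function algebra carries the Moyal star product (a standard definition, not stated in the abstract), whose infinite series of derivatives is the source of the nonlocality the abstract mentions:

```latex
[\hat{x}^{\mu}, \hat{x}^{\nu}] = i\,\theta^{\mu\nu}, \qquad
(f \star g)(x) = f(x)\,
\exp\!\Big(\tfrac{i}{2}\,\overleftarrow{\partial}_{\mu}\,
\theta^{\mu\nu}\,\overrightarrow{\partial}_{\nu}\Big)\, g(x),
```

where θ^{μν} is a constant antisymmetric matrix.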
Abstract:
The continuous growth of peer-to-peer networks has made them responsible for a considerable portion of current Internet traffic. For this reason, improvements in the usage of P2P network resources are of central importance. One effective approach for addressing this issue is the deployment of locality algorithms, which allow the system to optimize the peer selection policy for different network situations and thus maximize performance. To date, several locality algorithms have been proposed for use in P2P networks. However, they usually adopt heterogeneous criteria for measuring the proximity between peers, which hinders a coherent comparison between the different solutions. In this paper, we develop a thorough review of popular locality algorithms, based on three main characteristics: the adopted network architecture, the distance metric, and the resulting peer selection algorithm. As a result of this study, we propose a novel and generic taxonomy for locality algorithms in peer-to-peer networks, aiming to enable a better and more coherent evaluation of any individual locality algorithm.
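As an illustration of how such a taxonomy could be made operational (a sketch with assumed attribute values; the paper's actual category sets are not given in the abstract), each algorithm can be described along the three axes the authors identify, which then allows like-for-like grouping:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LocalityAlgorithm:
    """One entry in a locality-algorithm taxonomy: the three axes
    named in the abstract, with illustrative value sets."""
    name: str
    architecture: str      # e.g. "centralized", "decentralized", "hybrid"
    distance_metric: str   # e.g. "RTT", "hop count", "AS distance"
    selection_policy: str  # e.g. "nearest-k", "biased random", "ISP-first"

catalogue = [
    LocalityAlgorithm("example-A", "hybrid", "RTT", "nearest-k"),
    LocalityAlgorithm("example-B", "decentralized", "AS distance", "ISP-first"),
]

# Group by distance metric so only comparable algorithms are compared:
by_metric = {}
for alg in catalogue:
    by_metric.setdefault(alg.distance_metric, []).append(alg.name)
print(by_metric)
```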
Abstract:
Historically, business process design has been driven by business objectives, specifically process improvement. However, this cannot come at the price of control objectives, which stem from various legislative, standards and business partnership sources. Ensuring compliance with regulations and industrial standards is an increasingly important issue in the design of business processes. In this paper, we advocate that control objectives should be addressed at an early stage, i.e., at design time, so as to minimize the problems of runtime compliance checking and the consequent violations and penalties. To this end, we propose supporting mechanisms for business process designers. This paper specifically presents a support method which allows the process designer to quantitatively measure the compliance degree of a given process model against a set of control objectives. This will allow process designers to comparatively assess the compliance degree of their designs as well as be better informed about the cost of non-compliance.
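The abstract does not define the measure itself; as a minimal sketch of what a quantitative compliance degree could look like, the snippet below scores a process model as the weighted fraction of control objectives it satisfies (the objective names and the weighting scheme are assumptions, not the paper's method):

```python
def compliance_degree(objectives, satisfies):
    """Weighted fraction of control objectives met by a process model.
    `objectives` maps objective id -> importance weight; `satisfies`
    is a predicate deciding whether the model meets an objective."""
    total = sum(objectives.values())
    met = sum(w for obj, w in objectives.items() if satisfies(obj))
    return met / total if total else 1.0

# Hypothetical control objectives with importance weights:
objectives = {"four-eyes-approval": 3.0,
              "retain-records-7y": 2.0,
              "encrypt-customer-data": 5.0}
model_meets = {"four-eyes-approval", "encrypt-customer-data"}
print(f"compliance degree: "
      f"{compliance_degree(objectives, model_meets.__contains__):.0%}")
```

A degree like this also makes the cost of non-compliance visible at design time: the unmet weight (here, 20%) can be compared across alternative process designs.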
Abstract:
From September 2008 to March 2010, 397 ticks (315 larvae, 33 nymphs, 23 females, and 26 males) were collected from captive and free-living wildlife species in northeastern Brazil. Six tick species were identified, including Amblyomma auricularium (Conil) on Tamandua tetradactyla (L.), Amblyomma dubitatum Neumann on Hydrochaeris hydrochaeris (L.), Nectomys rattus (Pelzeln) and T. tetradactyla, Amblyomma parvum Aragão on T. tetradactyla, Amblyomma rotundatum Koch on Boa constrictor L., Chelonoidis carbonaria (Spix), Kinosternon scorpioides (L.) and Rhinella jimi (Stevaux), Amblyomma varium Koch on Bradypus variegatus Schinz, and Rhipicephalus sanguineus (Latreille) on Lycalopex vetulus (Lund). Nectomys rattus and T. tetradactyla are new hosts for A. dubitatum. This study extends the known distribution of A. dubitatum in South America and provides evidence that its geographical range has been underestimated because of the lack of research. Four (A. dubitatum, A. parvum, A. rotundatum, and R. sanguineus) of the six tick species identified in this study have previously been found on humans in South America, some of them being potentially involved in the transmission of pathogens of zoonotic concern.
Abstract:
Object-oriented programming languages are presently the dominant paradigm of application development (e.g., Java, .NET). Lately, increasingly more Java applications have long (or very long) execution times and manipulate large amounts of data/information, gaining relevance in fields related to e-Science (with Grid and Cloud computing). Significant examples include Chemistry, Computational Biology and Bio-informatics, with many available Java-based APIs (e.g., Neobio). Often, when the execution of such an application is terminated abruptly because of a failure (regardless of the cause being a hardware or software fault, lack of available resources, etc.), all of the work it has already performed is simply lost, and when the application is later re-initiated, it has to restart all its work from scratch, wasting resources and time, while also remaining prone to another failure that may delay its completion, with no deadline guarantees. Our proposed solution to address these issues is to incorporate mechanisms for checkpointing and migration in a JVM. These make applications more robust and flexible, being able to move to other nodes without any intervention from the programmer. This article provides a solution for Java applications with long execution times, by extending a JVM (the Jikes research virtual machine) with such mechanisms. Copyright (C) 2011 John Wiley & Sons, Ltd.
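The article's mechanisms live inside the JVM itself and are transparent to the programmer; purely to illustrate the checkpoint/restore idea they automate, here is a minimal application-level sketch (the file name, step counter, and checkpoint interval are all hypothetical):

```python
import os
import pickle

CKPT = "job.ckpt"  # hypothetical checkpoint file

def run(total_steps=1_000_000, every=10_000):
    """Resume a long-running job from its last checkpoint if one
    exists; otherwise start from scratch."""
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            step, state = pickle.load(f)
    else:
        step, state = 0, 0
    while step < total_steps:
        state += step          # stand-in for real long-running work
        step += 1
        if step % every == 0:  # periodically persist progress
            with open(CKPT, "wb") as f:
                pickle.dump((step, state), f)
    return state
```

After a crash, re-running `run()` loses at most `every` steps of work instead of everything, which is the property the JVM-level mechanism provides for unmodified applications.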
Abstract:
Human-Computer Interaction (HCI) concerns the interaction between computers and people, and context awareness (CA) is an important component of it. In particular, when there are sequential or continuous tasks between users and devices, among users, or among devices, it is important to decide the next action using the right context. To make such decisions reliably, all context information has to be gathered into one structure, which we define in this article as the Context-Aware Matrix (CAM). However, making exact decisions is hard because of problems such as low accuracy, overhead, and bad context injected by attackers, and many researchers have been studying how to solve these problems. Moreover, HCI still has weak points with respect to safe use. In this article, we propose a way of building the CAM that includes selecting the best server in each area. As a result, mobile users can be served in the best way.
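The abstract does not specify how the best server per area is chosen; as a heavily hedged sketch of the idea, one could score each candidate on the context attributes the article mentions (accuracy and overhead) and keep the top scorer per area:

```python
def best_server_per_area(servers):
    """Pick, for each area, the candidate with the best context score.
    The scoring below (accuracy minus normalized overhead) is an
    assumption for illustration; the article's criteria are not given."""
    best = {}
    for s in servers:
        score = s["accuracy"] - s["overhead_ms"] / 1000.0
        if s["area"] not in best or score > best[s["area"]][0]:
            best[s["area"]] = (score, s["name"])
    return {area: name for area, (_, name) in best.items()}

servers = [
    {"name": "srv-a1", "area": "A", "accuracy": 0.95, "overhead_ms": 120},
    {"name": "srv-a2", "area": "A", "accuracy": 0.90, "overhead_ms": 40},
    {"name": "srv-b1", "area": "B", "accuracy": 0.85, "overhead_ms": 60},
]
print(best_server_per_area(servers))
```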
Abstract:
Involving groups in important management processes such as decision making has several advantages. By discussing and combining ideas, counter-ideas, critical opinions, identified constraints, and alternatives, a group of individuals can test potentially better solutions, sometimes in the form of new products, services, and plans. In the past few decades, operations research, AI, and computer science have had tremendous success creating software systems that can achieve optimal solutions, even for complex problems. The only drawback is that people don’t always agree with these solutions. Sometimes this dissatisfaction is due to an incorrect parameterization of the problem. Nevertheless, the reasons people don’t like a solution might not be quantifiable, because those reasons are often based on aspects such as emotion, mood, and personality. At the same time, monolithic individual decision-support systems centered on optimizing solutions are being replaced by collaborative systems and group decision-support systems (GDSSs) that focus more on establishing connections between people in organizations. These systems follow a kind of social paradigm. Combining both optimization- and social-centered approaches is a topic of current research. However, even if such a hybrid approach can be developed, it will still miss an essential point: the emotional nature of group participants in decision-making tasks. We’ve developed a context-aware, emotion-based model to design intelligent agents for group decision-making processes. To evaluate this model, we’ve incorporated it in an agent-based simulator called ABS4GD (Agent-Based Simulation for Group Decision), which we developed. This multiagent simulator considers emotion- and argument-based factors while supporting group decision-making processes. Experiments show that agents endowed with emotional awareness achieve agreements more quickly than those without such awareness. Hence, participant agents that integrate emotional factors in their judgments can be more successful because, in exchanging arguments with other agents, they consider the emotional nature of group decision making.
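As a toy sketch of how emotional state might enter an agent's judgment (entirely illustrative; ABS4GD's actual emotion model is not described in the abstract, and the linear mood effect below is an assumption), an agent's willingness to accept a proposal can be modulated by its current mood:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    utility_threshold: float  # minimum utility to accept a proposal
    mood: float               # valence in [-1, 1]; positive = good mood

    def accepts(self, proposal_utility, mood_weight=0.1):
        """A good mood relaxes the acceptance threshold slightly,
        a bad mood tightens it (assumed linear effect)."""
        return proposal_utility >= self.utility_threshold - mood_weight * self.mood

group = [Agent("a", 0.6, 0.8), Agent("b", 0.6, -0.5), Agent("c", 0.55, 0.0)]
proposal = 0.57
print({ag.name: ag.accepts(proposal) for ag in group})
```

Under such a model, rounds of argument exchange that improve participants' moods widen the zone of agreement, which is one plausible way emotional awareness could speed up consensus.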
Abstract:
The exhibition of information does not always take into account the preferences and characteristics of the users, nor the context surrounding the user. With the aim of overcoming this gap, we propose an emotional context-aware model for adapting information contents to users and groups. The proposed model is based on the OCC and Big Five models to handle emotion and personality, respectively. The idea is to adapt the representation of the information in order to maximize the positive emotional valences and minimize the negative ones. To evaluate the proposed model, a prototype was developed for adapting RSS news to users and groups of users.
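As a hedged sketch of the adaptation step (the abstract gives no equations; the valence predictor below is a placeholder, where the real system would derive it from the OCC and Big Five models), one can choose, for each news item, the presentation variant whose predicted emotional valence for the target user is highest:

```python
def pick_variant(variants, predict_valence):
    """Return the presentation variant with the highest predicted
    valence for the user; `predict_valence` would come from the
    OCC/Big Five-based model in the actual system."""
    return max(variants, key=predict_valence)

# Placeholder predictor: prefer shorter, more positive headlines
# (purely illustrative scoring, not the paper's model).
def predict_valence(variant):
    return variant["positivity"] - 0.01 * len(variant["headline"])

variants = [
    {"headline": "Markets plunge amid fears", "positivity": -0.7},
    {"headline": "Markets dip, analysts expect rebound", "positivity": 0.2},
]
print(pick_variant(variants, predict_valence)["headline"])
```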
Abstract:
Cloud computing is increasingly being adopted in different scenarios, like social networking, business applications, scientific experiments, etc. Relying on virtualization technology, the construction of these computing environments targets improvements in the infrastructure, such as power efficiency and the fulfillment of users’ SLA specifications. The methodology usually applied is to pack all the virtual machines onto the proper physical servers. However, failure occurrences in these networked computing systems can have a substantial negative impact on system performance, deviating the system from its initial objectives. In this work, we propose adapted algorithms to dynamically map virtual machines to physical hosts, in order to improve cloud infrastructure power efficiency with low impact on users’ required performance. Our decision-making algorithms leverage proactive fault-tolerance techniques to deal with system failures, allied with virtual machine technology to share node resources in an accurate and controlled manner. The results indicate that our algorithms perform better in terms of power efficiency and SLA fulfillment in the face of cloud infrastructure failures.
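As a minimal sketch of the kind of mapping such algorithms perform (first-fit-decreasing consolidation that avoids hosts with high predicted failure risk; the risk cutoff, field names, and values are assumptions, not the paper's algorithm), VMs are packed onto as few low-risk hosts as possible so idle hosts can be powered down:

```python
def place_vms(vms, hosts, risk_cutoff=0.2):
    """Consolidate VMs onto as few low-risk hosts as possible.
    vms: {name: cpu demand}; hosts: dicts with a cpu capacity and a
    predicted failure probability (proactive fault tolerance)."""
    safe = sorted((h for h in hosts if h["p_fail"] < risk_cutoff),
                  key=lambda h: h["p_fail"])
    mapping, used = {}, {h["name"]: 0.0 for h in safe}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for h in safe:  # first fit over risk-sorted hosts
            if used[h["name"]] + demand <= h["cpu"]:
                used[h["name"]] += demand
                mapping[vm] = h["name"]
                break
        else:
            raise RuntimeError(f"no feasible host for {vm}")
    return mapping

hosts = [{"name": "h1", "cpu": 8.0, "p_fail": 0.05},
         {"name": "h2", "cpu": 8.0, "p_fail": 0.30},
         {"name": "h3", "cpu": 8.0, "p_fail": 0.10}]
print(place_vms({"vm1": 4.0, "vm2": 3.5, "vm3": 2.0}, hosts))
```

Here the risky host h2 receives no VMs and can be excluded proactively, while the remaining load is consolidated for power efficiency without overcommitting any host's capacity.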
Abstract:
Nowadays, smartphones and other mobile devices are being endowed with ever greater computational power and are capable of running a wide range of applications, from simple note-taking programs to sophisticated navigation software. However, even with the evolution of their hardware, current mobile devices still do not have the same capabilities as desktop or laptop computers. One possible solution to this problem is to distribute the application, executing parts of it on the local device and the rest on other devices connected to the network. Additionally, some types of applications, such as multimedia applications, electronic games or immersive-environment applications, have Quality of Service requirements, particularly real-time ones. This thesis proposes a remote code execution system for distributed systems with real-time constraints. The proposed architecture suits systems that need to periodically execute, in parallel, the same set of functions with real-time guarantees, even when the execution times of those functions are unknown. The proposed platform was developed for mobile systems capable of running the Android operating system.
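As a hedged sketch of the offloading decision such a platform must make (all names and the estimation scheme are assumptions; the thesis targets Android, but the idea is shown here in the same language as the other sketches), a dispatcher can run a function locally or remotely depending on which estimated completion time fits the deadline, refining its estimates from observed runs since the true execution times are unknown:

```python
import time

def run_with_deadline(func, args, deadline_s, estimates, remote_call=None):
    """Pick local or remote execution based on which estimated
    completion time fits the deadline.  `estimates` maps
    {"local": seconds, "remote": seconds}; since execution times are
    assumed unknown, they are learned online from past runs."""
    use_remote = (remote_call is not None
                  and estimates["remote"] < estimates["local"])
    if min(estimates.values()) > deadline_s:
        raise RuntimeError("no option can meet the deadline")
    start = time.monotonic()
    result = remote_call(func, args) if use_remote else func(*args)
    # Update the chosen option's estimate with the observed time.
    estimates["remote" if use_remote else "local"] = time.monotonic() - start
    return result
```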