102 results for 1145


Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who the elected leader is. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously and that D and n were not known).

We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D)-time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be achieved simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting.) We answer this question partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages against time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
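
As a concrete illustration of the implicit variant (this sketch is ours, not the paper's O(m)-message algorithm; the network, the random ranks and the round structure are all assumptions), each node can draw a random rank and flood the maximum for D synchronous rounds, after which only a node holding the global maximum declares itself leader:

```python
import random

def implicit_leader_election(adj, diameter, id_bits=64):
    """Toy synchronous simulation of randomized implicit leader election:
    every node draws a random rank, the largest rank seen is flooded for
    `diameter` rounds, and a node elects itself only if its own rank is the
    maximum it has observed. Note this flooding costs O(D*m) messages,
    far more than the O(m)-message algorithm of the abstract."""
    rank = {v: random.getrandbits(id_bits) for v in adj}
    best = dict(rank)                          # best rank each node has seen
    for _ in range(diameter):                  # one synchronous round
        nxt = dict(best)
        for v in adj:
            for w in adj[v]:                   # v tells w the best rank it knows
                nxt[w] = max(nxt[w], best[v])
        best = nxt
    # implicit: only the winner learns the outcome; ties are negligible
    # with 64-bit ranks
    return [v for v in adj if rank[v] == best[v]]

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # 4-cycle, diameter 2
print(implicit_leader_election(ring, diameter=2))      # e.g. [3]
```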

Abstract:

It is often suggested that stereotypic behaviour represents a coping response to suboptimal environmental conditions. However, individuals of many species show different coping styles depending on their personality type. Personality is therefore an important consideration when investigating why only certain individuals become stereotypic under suboptimal conditions. The aim of this review is thus to explore the possibility that personality, in particular coping style, may explain why certain individuals are predisposed to stereotypy. We review behavioural and physiological similarities between proactive and stereotypic individuals and suggest that they may in fact be the same phenotype. We also explore how these characteristics might predispose proactive individuals to stereotypy and how this is triggered by the environment. We conclude that personality factors relating to proactivity may mediate whether an animal expresses stereotypic behaviour, and that the alternative strategy in such conditions is depression and emotional blunting. We close by outlining the animal welfare implications if this hypothesis is correct. © 2013 The Association for the Study of Animal Behaviour.

Abstract:

Body Area Networks are unique in that the large-scale mobility of users allows the network itself to travel across a diverse range of operating domains or even to enter new and unknown environments. This network mobility is unlike node mobility in that sensed changes in inter-network interference level may be used to identify opportunities for intelligent inter-networking, for example by merging with or splitting from other networks, thus providing an extra degree of freedom. This paper introduces the concept of context-aware bodynets for interactive environments using inter-network interference sensing. New ideas are explored at both the physical and link layers with an investigation based on a 'smart' office environment. A series of carefully controlled measurements of the mesh interconnectivity both within and between an ambulatory body area network and a stationary desk-based network were performed using 2.45 GHz nodes. Received signal strength and carrier-to-interference ratio time series for selected node-to-node links are presented. The results provide an insight into the potential interference between the mobile and static networks and highlight the possibility of automatic identification of network merging and splitting opportunities. © 2010 ACM.
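
To make the link-layer idea concrete, a node might flag a merge opportunity when the measured carrier-to-interference ratio towards the other network stays low (i.e., the "interference" is strong, so the networks are physically close) for a sustained window. The sketch below is only a guess at such a trigger; the threshold, hold time and dB bookkeeping are illustrative, not taken from the paper's measurements:

```python
def merge_opportunities(carrier_dbm, interferer_dbm, threshold_db=10.0, hold=5):
    """Flag sample indices where the carrier-to-interference ratio (C/I)
    stays below `threshold_db` for `hold` consecutive samples, suggesting
    the interfering network is close enough to consider merging with.
    All constants are placeholders."""
    ci = [c - i for c, i in zip(carrier_dbm, interferer_dbm)]  # dB: subtract
    run, flags = 0, []
    for t, v in enumerate(ci):
        run = run + 1 if v < threshold_db else 0
        if run == hold:                      # window just completed
            flags.append(t - hold + 1)       # record where it began
            run = 0                          # restart for the next window
    return flags
```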

Abstract:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that the following process continues for up to n rounds, where n is the total number of nodes initially in the network: the adversary deletes an arbitrary node from the network, and the network responds by quickly adding a small number of new edges.

We present a distributed data structure that ensures two key properties. First, the diameter of the network is never more than O(log Δ) times its original diameter, where Δ is the maximum degree of the network initially. We note that for many peer-to-peer systems, Δ is polylogarithmic, so the diameter increase would be an O(log log n) multiplicative factor. Second, the degree of any node never increases by more than 3 over its original degree. Our data structure is fully distributed, has O(1) latency per round and requires each node to send and receive O(1) messages per round. The data structure requires an initial setup phase that has latency equal to the diameter of the original network and requires, with high probability, each node v to send O(log n) messages along every edge incident to v. Our approach is orthogonal and complementary to traditional topology-based approaches to defending against attack.
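
A minimal sketch of the kind of repair that achieves such bounds (our illustration, using the well-known reconstruction-tree idea rather than the paper's actual data structure): when a node is deleted, its former neighbours are wired into a balanced binary tree, so each of them gains at most 3 edges and any two of them remain O(log Δ) hops apart.

```python
def heal_by_binary_tree(adj, v):
    """Replace deleted node v by a balanced binary tree over its neighbours
    (heap-style indexing: the parent of slot i is slot (i-1)//2). Each
    neighbour acquires at most 3 new edges: one parent and two children.
    `adj` is a dict mapping each node to a set of neighbours."""
    nbrs = sorted(adj.pop(v))                # v's neighbours, in a fixed order
    for u in nbrs:
        adj[u].discard(v)                    # drop edges to the deleted node
    for i in range(1, len(nbrs)):
        p, c = nbrs[(i - 1) // 2], nbrs[i]   # parent/child in the tree
        adj[p].add(c)
        adj[c].add(p)
    return adj

# Deleting the hub of a star graph leaves its leaves connected in a tree.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(heal_by_binary_tree(star, 0))  # {1: {2, 3}, 2: {1, 4}, 3: {1}, 4: {2}}
```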

Abstract:

Processor architecture has taken a turn towards many-core processors, which integrate multiple processing cores on a single chip to increase overall performance, and there are no signs that this trend will stop in the near future. Many-core processors are harder to program than multi-core and single-core processors because of the need to write parallel or concurrent programs with high degrees of parallelism. Moreover, many-cores have to operate in a strong-scaling regime because of memory bandwidth constraints: increasingly fine-grained parallelism must be extracted to keep all processing cores busy.

Task dataflow programming models have a high potential to simplify parallel programming because they relieve the programmer of the need to identify precisely all inter-task dependences when writing programs. Instead, the task dataflow runtime system detects and enforces inter-task dependences during execution, based on a description of the memory each task accesses. The runtime constructs a task dataflow graph that captures all tasks and their dependences. Tasks are scheduled to execute in parallel, taking into account the dependences specified in the task graph.

Several papers report significant overheads in task dataflow systems, which severely limit their scalability and usability. In this paper we study efficient schemes to manage task graphs and analyze their scalability. We assume a programming model that supports input, output and in/out annotations on task arguments, as well as commutative in/out annotations and reductions. We analyze the structure of task graphs and identify versions and generations as key concepts for their efficient management. We then present three schemes to manage task graphs, building on graph representations, hypergraphs and lists. We also consider a fourth, edge-less scheme that synchronizes tasks using integers, as sketched below. Analysis using micro-benchmarks shows that the graph representation is not always scalable and that the edge-less scheme introduces the least overhead in nearly all situations.
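
One way such an edge-less scheme can work (our reconstruction from the description above, not the paper's implementation) is for each argument to carry issue/complete counters for reads and writes; a task records the counter values it must wait for at spawn time, and no graph edges are ever materialized:

```python
class Obj:
    """Shared datum with the counters an 'edge-less' scheme needs: how many
    reads/writes have been issued against it and how many have completed."""
    def __init__(self, value=None):
        self.value = value
        self.writes_issued = self.writes_done = 0
        self.reads_issued = self.reads_done = 0

def spawn(pending, fn, ins=(), outs=()):
    """Record, per argument, the counter values (tickets) this task waits on:
    a reader waits for all earlier writes; a writer also waits for all
    earlier reads. Then bump the 'issued' counters for later tasks."""
    ins, outs = list(ins), list(outs)
    waits = ([(o, 'r', o.writes_issued) for o in ins] +
             [(o, 'w', (o.writes_issued, o.reads_issued)) for o in outs])
    for o in ins:
        o.reads_issued += 1
    for o in outs:
        o.writes_issued += 1
    pending.append((fn, ins, outs, waits))

def ready(task):
    _, _, _, waits = task
    return all((kind == 'r' and o.writes_done == t) or
               (kind == 'w' and (o.writes_done, o.reads_done) == t)
               for o, kind, t in waits)

def run_all(pending):
    """Trivial sequential 'scheduler': repeatedly run any ready task.
    A real runtime would execute all ready tasks in parallel."""
    while pending:
        task = next(t for t in pending if ready(t))
        pending.remove(task)
        fn, ins, outs, _ = task
        fn(*(ins + outs))
        for o in ins:
            o.reads_done += 1
        for o in outs:
            o.writes_done += 1

pending, a = [], Obj(0)
spawn(pending, lambda o: setattr(o, 'value', o.value + 1), outs=(a,))
spawn(pending, lambda o: print(o.value), ins=(a,))  # prints 1, after the write
run_all(pending)
```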

Abstract:

In distributed networks, some groups of nodes may have more inter-connections, perhaps due to their larger bandwidth availability or communication requirements. In many scenarios, it may be useful for the nodes to know if they form part of a dense subgraph; e.g., such a dense subgraph could form a high-bandwidth backbone for the network. In this work, we address the problem of self-awareness of nodes in a dynamic network with regard to graph density, i.e., we give distributed algorithms for maintaining dense subgraphs (subgraphs that the member nodes are aware of). The only knowledge that the nodes need is that of the dynamic diameter D, i.e., the maximum number of rounds it takes for a message to traverse the dynamic network. For our work, we consider a model where the number of nodes is fixed, but a powerful adversary can add or remove a limited number of edges from the network at each time step. The communication is by broadcast only and follows the CONGEST model in the sense that only messages of O(log n) size are permitted, where n is the number of nodes in the network. Our algorithms are continuously executed on the network, and at any time (after some initialization) each node will be aware if it is part (or not) of a particular dense subgraph. We give algorithms that approximate both the densest subgraph, i.e., the subgraph of the highest density in the network, and the at-least-k-densest subgraph (for a given parameter k), i.e., the densest subgraph of size at least k. We give a (2 + ε)-approximation algorithm for the densest subgraph problem. The at-least-k-densest subgraph problem is known to be NP-hard in the centralized setting, and the best known algorithm gives a 2-approximation. We present an algorithm that maintains a (3 + ε)-approximation in our distributed, dynamic setting. Our algorithms run in O(D log n) time. © 2012 Authors.
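
For intuition, here is the centralized peeling procedure that (2 + ε)-approximations of the densest subgraph are typically built on (a sketch assuming the standard Bahmani et al. style peeling, which the paper's distributed algorithm is not shown to be; density is |E|/|V|): repeatedly discard every node of degree below 2(1 + ε) times the current density.

```python
def densest_subgraph(adj, eps=0.1):
    """Peel low-degree nodes in phases; return the densest intermediate
    vertex set. Each phase removes at least one node (some node always has
    degree <= average = 2*density), so there are O(log n / eps) phases."""
    nodes = set(adj)
    edges = {frozenset((u, v)) for u in adj for v in adj[u] if u != v}
    best, best_rho = set(nodes), 0.0
    while nodes and edges:
        rho = len(edges) / len(nodes)          # current density |E|/|V|
        if rho > best_rho:
            best, best_rho = set(nodes), rho   # remember densest snapshot
        deg = {v: 0 for v in nodes}
        for e in edges:
            for v in e:
                deg[v] += 1
        nodes -= {v for v in nodes if deg[v] < 2 * (1 + eps) * rho}
        edges = {e for e in edges if e <= nodes}
    return best, best_rho
```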

Abstract:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick "repairs," which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been ℓ in the graph formed by considering only the adversarial insertions (not the adversarial deletions) will be at distance at most ℓ log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network. © Springer-Verlag 2012.
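
The two guarantees can be stated as a checkable invariant. A small test harness (ours, purely for illustration; it assumes log base 2 and dict-of-sets graphs) that compares a healed graph against the hypothetical insertions-only graph might look like:

```python
from collections import deque
from math import log2

def bfs_dist(adj, src):
    """Hop distances from src via breadth-first search."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def check_invariants(actual, inserts_only, n):
    """Verify, for every surviving node pair, the two properties quoted
    above: distance stretch at most log n and degree at most tripled,
    both relative to the insertions-only graph."""
    for v in actual:
        assert len(actual[v]) <= 3 * len(inserts_only[v]), f"degree at {v}"
        ideal, real = bfs_dist(inserts_only, v), bfs_dist(actual, v)
        for w in actual:
            if w != v and w in ideal and w in real:
                assert real[w] <= ideal[w] * log2(n), f"stretch at ({v},{w})"
```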

Abstract:

Technical market indicators are tools used by technical analysts to understand trends in trading markets. Technical (market) indicators are often calculated in real time, as trading progresses. This paper presents a mathematically founded framework for calculating technical indicators. Our framework consists of a domain-specific language for the unambiguous specification of technical indicators, and a runtime system based on Click, for computing the indicators. We argue that our solution enhances the ease of programming by aligning our domain-specific language with the mathematical description of technical indicators, and that it enables executing programs in kernel space for decreased latency, without exposing the system to users' programming errors.
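
The flavour of such a specification can be imitated in a few lines (an illustration of the general idea only; the paper's DSL syntax and its Click-based kernel runtime are not shown here). Indicators are streaming functions that compose like their mathematical definitions:

```python
class Indicator:
    """A streaming indicator: feed one price, get the current value.
    Arithmetic on indicators mirrors arithmetic on their definitions."""
    def __init__(self, step):
        self.step = step

    def __call__(self, price):
        return self.step(price)

    def __sub__(self, other):                  # pointwise difference
        return Indicator(lambda p: self.step(p) - other.step(p))

def sma(n):
    """Simple moving average over the last n ticks."""
    window = []
    def step(p):
        window.append(p)
        if len(window) > n:
            window.pop(0)
        return sum(window) / len(window)
    return Indicator(step)

# A MACD-style signal, written the way it is defined mathematically:
macd = sma(12) - sma(26)
for tick in (10.0, 10.5, 11.0, 10.8, 11.4):
    print(round(macd(tick), 4))
```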

Abstract:

Context. The VLT-FLAMES Tarantula Survey has an extensive view of the copious number of massive stars in the 30 Doradus (30 Dor) star forming region of the Large Magellanic Cloud. These stars play a crucial role in our understanding of the stellar feedback in more distant, unresolved star forming regions.

Aims. The first comprehensive census of hot luminous stars in 30 Dor is compiled within a 10 arcmin (150 pc) radius of its central cluster, R136. We investigate the stellar content and spectroscopic completeness of the early type stars. Estimates were made for both the integrated ionising luminosity and the stellar wind luminosity. These values were used to re-assess the star formation rate (SFR) of the region and determine the ionising photon escape fraction.

Methods. Stars were selected photometrically and combined with the latest spectral classifications. Spectral types were estimated for stars lacking spectroscopy and corrections were made for binary systems, where possible. Stellar calibrations were applied to obtain their physical parameters and wind properties. Their integrated properties were then compared to global observations from ultraviolet (UV) to far-infrared (FIR) imaging as well as the population synthesis code, Starburst99.

Results. Our census identified 1145 candidate hot luminous stars within 150 pc of R136, of which >700 were considered to be genuine early type stars that contribute to feedback. We assess the survey to be spectroscopically complete to 85% in the outer regions (>5 pc) but only 35% complete in the region of the R136 cluster, giving a total of 500 hot luminous stars in the census with spectroscopy. Only 31 were found to be Wolf-Rayet (W-R) or Of/WN stars, but their contribution to the integrated ionising luminosity and wind luminosity was ~40% and ~50%, respectively. Similarly, stars with M > 100 M⊙ (mostly H-rich WN stars) also showed high contributions to the global feedback, ~25% in both cases. Such massive stars are not accounted for by the current Starburst99 code, which was found to underestimate the integrated ionising luminosity of R136 by a factor of ~2 and the wind luminosity by a factor of ~9. The census inferred an SFR for 30 Dor of 0.073 ± 0.04 M⊙ yr⁻¹. This was generally higher than that obtained from some popular SFR calibrations, but still showed good consistency with the far-UV luminosity tracer as well as the combined Hα and mid-infrared tracer, but only after correcting for Hα extinction. The global ionising output was also found to exceed that measured from the associated gas and dust, suggesting that ~6% of the ionising photons escape the region.

Conclusions. When studying the most luminous star forming regions, it is essential to include their most massive stars if one is to determine a reliable energy budget. Photon leakage becomes more likely after including their large contributions to the ionising output. If 30 Dor is typical of other massive star forming regions, estimates of the SFR will be underpredicted if this escape fraction is not accounted for.
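
The escape-fraction logic reduces to a one-line budget comparison. The sketch below only restates that arithmetic; the photon rates are placeholders chosen to reproduce the quoted ~6%, not values from the paper:

```python
# Ionising photon budget: whatever the stars emit but the gas and dust do
# not account for is assumed to escape the region.
q_stars = 1.00e52   # photons/s from the stellar census (placeholder value)
q_gas = 0.94e52     # photons/s implied by the gas and dust (placeholder value)

f_esc = 1.0 - q_gas / q_stars
print(f"escape fraction ~ {f_esc:.0%}")   # -> escape fraction ~ 6%
```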

Abstract:

Ubiquitous parallel computing aims to make parallel programming accessible to a wide variety of programming areas using deterministic and scale-free programming models built on a task abstraction. However, it remains hard to reconcile these attributes with pipeline parallelism, where the number of pipeline stages is typically hard-coded in the program and defines the degree of parallelism.

This paper introduces hyperqueues, a programming abstraction that enables the construction of deterministic and scale-free pipeline-parallel programs. Hyperqueues extend the concept of Cilk++ hyperobjects to provide thread-local views on a shared data structure. While hyperobjects are organized around private local views, hyperqueues require shared concurrent views on the underlying data structure. We define the semantics of hyperqueues and describe their implementation in a work-stealing scheduler. We demonstrate scalable performance on pipeline-parallel PARSEC benchmarks and find that hyperqueues provide performance comparable to, or up to 30% better than, POSIX threads and Intel's Threading Building Blocks. The latter are highly tuned to the number of available processing cores, while programs using hyperqueues are scale-free.
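
For contrast with stage-per-thread pipelines, here is what a scale-free formulation looks like with ordinary thread-safe queues (plain Python threading as a stand-in; nothing below is the hyperqueue API): the worker count is a tuning knob, not part of the program's structure.

```python
import queue
import threading

def pipeline(produce, transform, consume, workers=4):
    """Producer -> parallel transform stage -> consumer. Results may arrive
    out of order, so `consume` must be order-insensitive here; ordered,
    deterministic delivery is the guarantee hyperqueues aim to provide."""
    q_in, q_out = queue.Queue(), queue.Queue()

    def worker():
        while True:
            item = q_in.get()
            if item is None:                 # sentinel: shut this worker down
                return
            q_out.put(transform(item))

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    n = 0
    for item in produce():                   # feed the pipeline
        q_in.put(item)
        n += 1
    for _ in threads:
        q_in.put(None)
    results = [q_out.get() for _ in range(n)]
    for t in threads:
        t.join()
    return consume(results)

print(pipeline(lambda: range(8), lambda x: x * x, sum))   # -> 140
```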

Abstract:

In this paper, we have developed a low-complexity algorithm for epileptic seizure detection with a high degree of accuracy. The algorithm has been designed to be feasibly implementable as a battery-powered, low-power implantable epileptic seizure detection system or epilepsy prosthesis. This is achieved by utilizing design optimization techniques at different levels of abstraction. In particular, user-specific critical parameters are identified at the algorithmic level and are explicitly used along with multiplier-less implementations at the architecture level. The system has been tested on neural data obtained from in vivo animal recordings and has been implemented in 90 nm bulk-Si technology. The results show up to 90% savings in power as compared to a prevalent wavelet-based seizure detection technique, while achieving a 97% average detection rate. Copyright 2010 ACM.
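
To illustrate what "multiplier-less" buys at the architecture level (a sketch of the general technique, not the paper's algorithm; the window size, shift constants and 8x alarm rule are made up), energy-like features and thresholds can be kept as shift-and-add operations only:

```python
def detect_seizures(samples, w=8, k=10):
    """Shift-and-add seizure detector: a short-window activity sum is
    compared against 8x a long-term baseline kept by a leaky integrator.
    Only adders, comparators and bit shifts are needed -- no multipliers.
    (Parameters and the alarm rule are illustrative.)"""
    window, buf = 1 << w, []
    s = b = 0                       # short-window sum; baseline accumulator
    alarms = []
    for t, x in enumerate(samples):
        a = x if x >= 0 else -x     # |x| as the energy proxy, no squaring
        buf.append(a)
        s += a
        if len(buf) > window:
            s -= buf.pop(0)         # maintain sum over the last 2**w samples
        b += a - (b >> k)           # leaky integrator: b ~ (average) << k
        if t >= window and (s >> w) > ((b >> k) << 3):
            alarms.append(t)        # short-term average > 8 * baseline
    return alarms
```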