22 results for low degree graph
Abstract:
Healing algorithms play a crucial part in distributed peer-to-peer networks, where failures occur continuously and frequently. Whereas many approaches to robustness rely largely on built-in redundancy, we adopt a responsive approach more akin to that of biological networks, e.g., the brain. The general goal of self-healing distributed graphs is to maintain certain network properties while recovering from failure quickly and making bounded alterations locally. Several self-healing algorithms have been suggested in the recent literature [IPDPS'08, PODC'08, PODC'09, PODC'11]; they heal various network properties while fulfilling competing requirements such as keeping degree increase low while maintaining connectivity, expansion, and low stretch of the network. In this work, we augment the previous algorithms by adding the notion of edge-preserving self-healing, which requires the healing algorithm not to delete any edges originally present or adversarially inserted. This reflects the cost of adding edges, but more importantly, it immediately follows that edge preservation helps maintain any monotonic subgraph-induced property, in particular important properties such as graph and subgraph densities. Density is an important network property, and in certain distributed networks maintaining it preserves high connectivity among certain subgraphs and backbones. We introduce a general model of self-healing, and introduce xheal+, an edge-preserving version of xheal [PODC'11]. © 2012 IEEE.
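Note: the monotonicity argument is mechanical. A minimal Python sketch follows (illustrative only; the clique reconnection below is a naive stand-in for xheal+'s actual bounded-degree repair rule): if the healing step only ever adds edges, then any property that cannot decrease under edge additions, such as density, is automatically maintained relative to the unhealed graph.

```python
# Illustrative edge-preserving self-healing step: edges are never deleted,
# so monotone properties (e.g., density) never drop below the unhealed baseline.
import itertools

def heal_node_deletion(adj, failed):
    """Remove `failed` from adjacency dict `adj` and reconnect its former
    neighbours into a clique; no surviving edge is ever deleted."""
    neighbours = adj.pop(failed)
    for v in adj:
        adj[v].discard(failed)
    for u, v in itertools.combinations(neighbours, 2):
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)

def density(adj):
    """Edge density |E| / C(|V|, 2); monotone under edge additions."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    return 0.0 if n < 2 else m / (n * (n - 1) / 2)
```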
Abstract:
Motivated by the need for designing efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of what nodes join and leave and at what time, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees, with high probability, the maintenance of a constant-degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to $O(n/\mathrm{polylog}(n))$ per round (where $n$ is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only $O(\mathrm{polylog}(n))$ overhead for topology maintenance: only polylogarithmic (in $n$) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. The protocol is a fundamental ingredient for the design of efficient fully-distributed algorithms that solve fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
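Note: the protocol itself is beyond an abstract, but the property it maintains is easy to probe. A hedged sketch (our illustration; the random regular graph merely stands in for the maintained topology): expansion of a bounded-degree graph can be certified via the spectral gap of its normalized adjacency matrix, which lower-bounds conductance by Cheeger's inequality.

```python
# Sketch: checking the maintained property (high expansion at constant degree),
# not the maintenance protocol itself.
import numpy as np
import networkx as nx

def spectral_gap(G):
    """1 - lambda_2 of the symmetrically normalized adjacency matrix."""
    A = nx.to_numpy_array(G)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    eigs = np.linalg.eigvalsh(d_inv_sqrt @ A @ d_inv_sqrt)  # ascending order
    return 1.0 - eigs[-2]

G = nx.random_regular_graph(d=8, n=1024, seed=1)  # stand-in topology
print(f"spectral gap ~ {spectral_gap(G):.3f}")    # bounded away from 0 => expander
```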
Abstract:
The violent merger of two carbon-oxygen white dwarfs has been proposed as a viable progenitor for some Type Ia supernovae. However, it has been argued that the strong ejecta asymmetries produced by this model might be inconsistent with the low degree of polarization typically observed in Type Ia supernova explosions. Here, we test this claim by carrying out a spectropolarimetric analysis for the model proposed by Pakmor et al. for an explosion triggered during the merger of a 1.1 and 0.9 M⊙ carbon-oxygen white dwarf binary system. Owing to the asymmetries of the ejecta, the polarization signal varies significantly with viewing angle. We find that polarization levels for observers in the equatorial plane are modest (≲1 per cent) and show clear evidence for a dominant axis, as a consequence of the ejecta symmetry about the orbital plane. In contrast, orientations out of the plane are associated with higher degrees of polarization and departures from a dominant axis. While the particular model studied here gives a good match to highly polarized events such as SN 2004dt, it has difficulties in reproducing the low polarization levels commonly observed in normal Type Ia supernovae. Specifically, we find that significant asymmetries in the element distribution result in a wealth of strong polarization features that are not observed in the majority of currently available spectropolarimetric data of Type Ia supernovae. Future studies will map out the parameter space of the merger scenario to investigate if alternative models can provide better agreement with observations.
Abstract:
To characterize non-thermal atmospheric-pressure plasmas experimentally, a large variety of methods and techniques is available, each with its own specific possibilities and limitations. A rewarding method for investigating these plasma sources is laser Thomson scattering; however, applying it is challenging. Non-thermal atmospheric-pressure plasmas (gas temperatures close to room temperature and electron temperatures of a few eV) usually have small dimensions (below 1 mm) and a low degree of ionization (below 10⁻⁴). Here an overview is presented of how Thomson scattering can be applied to such plasmas and used to measure the electron density and the electron energy distribution directly, with spatial and temporal resolution. A general description of the scattering of photons and guidelines for an experimental setup of this active diagnostic are provided. Special attention is given to the design concepts required to achieve the maximum signal photon flux with a minimum of unwanted signals. Recent results from the literature are also presented and discussed.
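Note: for readers unfamiliar with the diagnostic, the two standard incoherent Thomson scattering relations behind such measurements are sketched below (textbook forms, not specific to this paper): the electron temperature follows from the Doppler broadening of the scattered spectrum, and the electron density from its integrated intensity.

```latex
% Standard incoherent Thomson scattering relations (lambda_i: laser
% wavelength, theta: scattering angle, Maxwellian electrons assumed).
\begin{align}
  \Delta\lambda_{1/e} &= 2\lambda_i \sin\!\left(\tfrac{\theta}{2}\right)
                         \sqrt{\frac{2 k_B T_e}{m_e c^2}}
      && \text{($1/e$ half-width of the Gaussian profile)} \\
  n_e &\propto \int S(\lambda)\,\mathrm{d}\lambda
      && \text{(absolute calibration, e.g.\ via rotational Raman scattering)}
\end{align}
```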
Abstract:
We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick “repairs,” which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been ℓ in the graph formed by considering only the adversarial insertions (not the adversarial deletions) will be at distance at most ℓ log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network.
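Note: the two guarantees above are concrete enough to check mechanically. A small sketch (our illustration, assuming a centralized view of both graphs; the Forgiving Graph itself is a distributed data structure):

```python
# Check the stated guarantees: deg(v) <= 3d and dist(v, w) <= ell * log n,
# where d and ell refer to the insertions-only graph. Natural log is used;
# the paper's log base is a constant-factor detail.
import math
import networkx as nx

def check_forgiving_invariants(G_healed, G_insert_only):
    stretch = math.log(G_insert_only.number_of_nodes())  # n = vertices seen so far
    # Degree guarantee.
    for v in G_healed.nodes:
        assert G_healed.degree(v) <= 3 * G_insert_only.degree(v)
    # Distance guarantee, over surviving pairs (the repairs are assumed to
    # keep such pairs connected in the healed graph).
    for v, dists in nx.all_pairs_shortest_path_length(G_insert_only):
        for w, ell in dists.items():
            if v != w and G_healed.has_node(v) and G_healed.has_node(w):
                assert nx.shortest_path_length(G_healed, v, w) <= ell * stretch
```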
Abstract:
The 90° problem of cosmic-ray transport theory is revisited in this paper. By using standard forms of the wave spectrum in the solar wind, the pitch-angle Fokker–Planck coefficient and the parallel mean free path are computed for different resonance functions. A critical comparison is made of the strength of 90° scattering due to plasma-wave effects, dynamical-turbulence effects, and nonlinear effects. It is demonstrated that dynamical effects are usually dominant only for low-energy cosmic particles. The novel results presented here are essential for an effective comparison of heliospheric observations of the parallel mean free path with theoretical model results.
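Note: the two computed quantities are linked by a standard quasilinear relation (textbook form, e.g. Earl 1974; presumably the one evaluated in the paper), which also shows why 90° scattering matters:

```latex
% Parallel mean free path from the pitch-angle Fokker-Planck coefficient
% (v: particle speed, mu: pitch-angle cosine).
\begin{equation}
  \lambda_\parallel = \frac{3v}{8} \int_{-1}^{+1}
      \frac{\left(1 - \mu^2\right)^2}{D_{\mu\mu}(\mu)}\,\mathrm{d}\mu
\end{equation}
```

If D_μμ vanishes at μ = 0, as in magnetostatic quasilinear theory, the integral diverges; this is the 90° problem that plasma-wave, dynamical-turbulence, and nonlinear effects are invoked to resolve.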
Abstract:
Neutral gas depletion mechanisms are investigated in a dense low-temperature argon plasma: an inductively coupled magnetic neutral loop (NL) discharge. Gas temperatures are deduced from the Doppler profile of the 772.38 nm line absorbed by argon metastable atoms. Electron density and temperature measurements reveal that at pressures below 0.1 Pa, relatively high degrees of ionization (exceeding 1%) result in electron pressures, p_e = kT_e n_e, exceeding the neutral gas pressure. In this regime, neutral dynamics has to be taken into account, and depletion through comparatively high ionization rates becomes important. This additional depletion mechanism can be spatially separated due to non-uniform electron temperature and density profiles (a non-uniform ionization rate), while the gas temperature is rather uniform within the discharge region. Spatial profiles of the depletion of metastable argon atoms in the NL region are observed by laser-induced fluorescence spectroscopy. In this region, the depletion of ground-state argon atoms is expected to be even more pronounced, since in the investigated high-electron-density regime the ratio of metastable to ground-state argon atom densities is governed by the electron temperature, which peaks in the NL region. This neutral gas depletion is attributed to a high ionization rate in the NL zone and fast ion loss through ambipolar diffusion along the magnetic field lines. This is totally different from what is observed at pressures above 10 Pa, where the degree of ionization is relatively low (…).
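Note: the quoted depletion condition can be written as a simple pressure balance (our rearrangement, assuming a quasi-uniform total pressure p_0 and uniform gas temperature):

```latex
% Neutral depletion estimate: electron pressure displaces neutrals.
\begin{equation}
  p_0 \approx n_g k_B T_g + n_e k_B T_e
  \quad\Longrightarrow\quad
  \frac{n_g}{n_{g,0}} \approx 1 - \frac{n_e k_B T_e}{p_0},
\end{equation}
```

so depletion becomes significant once p_e = n_e k_B T_e approaches n_g k_B T_g, the regime reported below 0.1 Pa.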
Abstract:
In this letter, we demonstrate for the first time that gate misalignment is not a critical limiting factor for low-voltage operation in gate-underlap double-gate (DG) devices. Our results show that the underlap architecture significantly extends the tolerable limit of gate misalignment in 25 nm devices. DG MOSFETs with a high degree of gate misalignment and an optimal gate-underlap design can perform comparably to, or even better than, self-aligned non-underlap devices. Results show that the spacer-to-straggle (s/σ) ratio, a key design parameter for underlap devices, should be within the range of 2.3–3.0 to accommodate back-gate misalignment. These results are very significant, as the stringent process-control requirements for achieving self-alignment in nanoscale planar DG MOSFETs are considerably relaxed.
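Note: the spacer-to-straggle ratio refers to the lateral roll-off of the source/drain extension doping under the spacer, commonly modeled as a Gaussian (a standard modeling assumption in the underlap literature, not taken verbatim from this letter):

```latex
% Lateral SDE doping profile with straggle sigma under a spacer of length s;
% the reported design window for back-gate misalignment tolerance.
\begin{equation}
  N(x) = N_{\mathrm{SDE}} \exp\!\left(-\frac{x^2}{2\sigma^2}\right),
  \qquad 2.3 \le \frac{s}{\sigma} \le 3.0
\end{equation}
```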
Abstract:
In this paper, we analyze the enormous potential of engineering source/drain extension (SDE) regions in FinFETs for ultra-low-voltage (ULV) analog applications. SDE region design can simultaneously improve two key analog figures of merit (FOM), intrinsic dc gain (A_vo) and cutoff frequency (f_T), for 60 and 30 nm FinFETs operated at low drive current (J_ds = 5 μA/μm). The improved A_vo and f_T are nearly twice those of devices with abrupt SDE regions. The influence of the SDE region profile and its impact on the analog FOM is extensively analyzed. Results show that SDE region optimization provides an additional degree of freedom, apart from device parameters (fin width and aspect ratio), for designing future nanoscale analog devices. The results are analyzed in terms of the spacer-to-straggle ratio, a new design parameter for SDE-engineered devices. This paper provides new opportunities for realizing future ULV/low-power analog design with FinFETs.
Abstract:
A new type of direct-current, high-density, low-electron-temperature reflex plasma source, obtained as a hybrid between a modified hollow-cathode discharge and a Penning ionization gauge discharge, is presented. The plasma source was tested in argon, nitrogen, and oxygen over a pressure range of 1.0–10⁻³ mbar, discharge currents of 20–200 mA, and magnetic fields of 0–120 Gauss. Both its external parameters, such as the breakdown potential and the discharge voltage-current characteristic, and its internal parameters, such as the electron energy distribution function, electron and ion densities, and electron temperature, were measured. Due to the hollow-cathode effect enhanced by magnetic trapping of electrons, the density of the bulk plasma is as high as 10¹⁸ m⁻³, and the electron temperature is as low as a few tenths of an electronvolt. The plasma density scales with the dissipated power. Another important feature of this reflex plasma source is its high degree of uniformity, while the discharge bulk region is free of an electric field. © 2004 American Institute of Physics.
Abstract:
In this paper, we present a random iterative graph-based hyper-heuristic to produce a collection of heuristic sequences to construct solutions of different quality. These heuristic sequences can be seen as dynamic hybridisations of different graph colouring heuristics that construct solutions step by step. Based on these sequences, we statistically analyse the way in which graph colouring heuristics are automatically hybridised. This, to our knowledge, represents a new direction in hyper-heuristic research. It is observed that spending the search effort on hybridising Largest Weighted Degree with Saturation Degree at the early stage of solution construction tends to generate high-quality solutions. Based on these observations, an iterative hybrid approach is developed to adaptively hybridise these two graph colouring heuristics at different stages of solution construction. The overall aim here is to automate the heuristic design process, which draws upon an emerging research theme on developing computer methods to design and adapt heuristics automatically. Experimental results on benchmark exam timetabling and graph colouring problems demonstrate the effectiveness and generality of this adaptive hybrid approach compared with previous methods for automatically generating and adapting heuristics. Indeed, we also show that the approach is competitive with state-of-the-art human-designed methods.
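Note: the hybridisation is easy to picture in code. A minimal sketch (our illustration; `weights[u]` stands for a precomputed weighted degree, and the 30% switch point is an arbitrary placeholder, not the paper's adaptive rule):

```python
# Constructive colouring that uses Largest Weighted Degree early and
# Saturation Degree late, echoing the hybridisation described above.
def hybrid_colouring(graph, weights, switch_fraction=0.3):
    """graph: {v: set of neighbours}; weights: {v: weighted degree}."""
    colour, uncoloured, n = {}, set(graph), len(graph)
    while uncoloured:
        if len(colour) < switch_fraction * n:
            v = max(uncoloured, key=lambda u: weights[u])       # LWD stage
        else:
            v = max(uncoloured, key=lambda u: len(              # SD stage
                {colour[w] for w in graph[u] if w in colour}))
        used = {colour[w] for w in graph[v] if w in colour}
        colour[v] = next(c for c in range(n) if c not in used)  # first fit
        uncoloured.remove(v)
    return colour
```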
Abstract:
We present and analyze an algorithm to measure the structural similarity of generalized trees, a new graph class which includes rooted trees. For this, we represent structural properties of graphs as strings and define the similarity of two graphs as an optimal alignment of the corresponding property strings. We prove that the obtained graph similarity measures are so-called backward similarity measures. From this we find that the time complexity of our algorithm is polynomial and, hence, significantly better than the time complexity of classical graph similarity methods based on isomorphic relations. © 2006 Elsevier Inc. All rights reserved.
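Note: the core idea, graphs serialised to property strings and compared by sequence alignment, can be sketched in a few lines (our illustration; the authors' encoding and backward similarity measures are more refined):

```python
# Serialise a rooted tree as its breadth-first sequence of child counts,
# then score two such property strings with a standard alignment ratio.
from collections import deque
from difflib import SequenceMatcher

def property_string(root, children):
    out, queue = [], deque([root])
    while queue:
        v = queue.popleft()
        kids = children.get(v, [])
        out.append(str(len(kids)))
        queue.extend(kids)
    return ",".join(out)

s1 = property_string("r", {"r": ["a", "b"], "a": ["c"]})
s2 = property_string("r", {"r": ["a"], "a": ["b", "c"]})
print(SequenceMatcher(None, s1, s2).ratio())  # similarity in [0, 1]
```

Both serialisation and alignment run in polynomial time, which is the source of the complexity advantage over isomorphism-based methods.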
Abstract:
We introduce a novel graph class we call universal hierarchical graphs (UHG), whose topology can be found in numerous problems representing, e.g., temporal, spatial or general process structures of systems. For this graph class we show that we can naturally assign two probability distributions, for nodes and for edges, which lead us directly to the definition of the entropy and joint entropy and, hence, the mutual information, establishing an information theory for this graph class. Furthermore, we provide some results on the conditions under which these constrained probability distributions maximize the corresponding entropy. Also, we demonstrate that these entropic measures can be computed efficiently, which is a prerequisite for every large-scale practical application, and show some numerical examples. © 2007 Elsevier Inc. All rights reserved.
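Note: one natural instance of such a node distribution (our illustration; the paper's constrained distributions for hierarchical levels may differ) is degree-proportional, giving a Shannon entropy that is cheap to compute:

```python
# Degree-proportional node distribution and its Shannon entropy.
import math

def graph_entropy(adj):
    """adj: {v: set of neighbours} -> entropy (bits) of p(v) = deg(v)/2|E|."""
    total = sum(len(nbrs) for nbrs in adj.values())
    probs = (len(nbrs) / total for nbrs in adj.values() if nbrs)
    return -sum(p * math.log2(p) for p in probs)

adj = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}
print(graph_entropy(adj))  # 1.5 bits
```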
Abstract:
In the present paper we mainly introduce an efficient approach to measure the structural similarity of so-called directed universal hierarchical graphs. We want to underline that directed universal hierarchical graphs can be obtained from the generalized trees that have already been introduced. In order to classify these graphs, we state our novel graph similarity method. As a main result we find that our novel algorithm has low computational complexity. © 2007 Elsevier Inc. All rights reserved.