982 results for Elastically restrained edges


Relevance:

10.00%

Publisher:

Abstract:

This article offers a critical conceptual discussion and refinement of Chomsky’s (2000, 2001, 2007, 2008) phase system, addressing many of the problematic aspects highlighted in the critique of Boeckx & Grohmann (2007) and seeking to resolve these issues, in particular the stipulative and arbitrary properties of phases and phase edges encoded in the (various versions of the) Phase Impenetrability Condition (PIC). Chomsky’s (2000) original conception of phases as lexical subarrays is demonstrated to derive these properties straightforwardly once a single assumption about the pairwise composition of phases is made, and the PIC is reduced to its necessary core under the Strong Minimalist Thesis (SMT)—namely, the provision of an edge. Finally, a comparison is undertaken of the lexical-subarray conception of phases with the feature-inheritance system of Chomsky 2007, 2008, in which phases are simply the locus of uninterpretable features (probes). Both conceptions are argued to conform to the SMT, and both converge on a pairwise composition of phases. However, the two conceptions of phases are argued to be mutually incompatible in numerous fundamental ways, with no current prospect of unification. The lexical-subarray conception of phases is then to be preferred on grounds of greater empirical adequacy.

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick “repairs,” which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been l in the graph formed by considering only the adversarial insertions (not the adversarial deletions), will be at distance at most l log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only, will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network.
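The two guarantees above can be stated as a concrete check. Below is a minimal Python sketch (not the Forgiving Graph data structure itself, which is distributed; all names are illustrative) that verifies the invariants for a pair of adjacency maps: distances stretch by at most a log n factor, and degrees grow by at most a factor of 3:

```python
from collections import deque
from math import log2

def bfs_dist(adj, src):
    """Unweighted single-source shortest-path distances by BFS."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def check_invariants(ideal, actual, n):
    """Check the Forgiving Graph guarantees: every distance l in the
    insertions-only graph `ideal` stretches to at most l * log(n) in the
    healed graph `actual`, and every degree d grows to at most 3d."""
    for u in ideal:
        d_ideal = bfs_dist(ideal, u)
        d_actual = bfs_dist(actual, u)
        for v, l in d_ideal.items():
            if l > 0 and v in d_actual:
                assert d_actual[v] <= l * max(1.0, log2(n))
    for u in ideal:
        assert len(actual.get(u, ())) <= 3 * max(1, len(ideal[u]))
    return True
```

For example, a healed graph identical to the insertions-only graph trivially passes both checks.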

Relevance:

10.00%

Publisher:

Abstract:

The spontaneous oxidation of CO adsorbates on a Pt electrode modified by Ru under open circuit (OC) conditions in perchloric acid solution has been followed, for the first time, using in situ FTIR spectroscopy, and the dynamics of the surface processes taking place have been elucidated. The IR data show that adsorbed CO is present on both the Ru and Pt domains and can be oxidized by the oxygen-containing adlayer on the Ru in a chemical process to produce CO2 under OC conditions. There is free exchange of adsorbed CO between the Ru and Pt sites. Oxidation of CO may take place at the edges of the Ru islands, but transfer of adsorbed CO, at least on the time scale of these experiments, allows the two different populations to maintain equilibrium. Oxidation is limited in this region by the rate of supply of oxygen to the surface of the catalyst. A mechanism is postulated to explain the observed behavior.

Relevance:

10.00%

Publisher:

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader itself must know that it is the elected leader. This paper studies the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously and that D and n not be known.

We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm; an O(D)-time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be achieved simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting.) We answer this question partially by presenting a randomized algorithm that matches both complexities in some cases, which already separates (for those cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms whose bounds trade off messages against time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
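The implicit variant of the problem is easy to illustrate in a toy, centralized form. The Python sketch below is illustrative only (the paper's algorithms are message-passing, and their complexity analysis does not reduce to this): each node draws a random rank from a large range so that, with high probability, a unique maximum exists, and only that node learns it is the leader:

```python
import random

def implicit_leader_election(n, seed=0):
    """Toy simulation of randomized implicit leader election: each of n
    nodes draws a rank uniformly from a range of size n**4, so the top
    rank is unique with high probability; only the winner learns the
    outcome (the election is implicit)."""
    rng = random.Random(seed)
    ranks = [rng.randrange(n ** 4) for _ in range(n)]
    top = max(ranks)
    if ranks.count(top) != 1:
        return None  # rare tie at the top: a real protocol would retry
    return ranks.index(top)  # index of the unique self-aware leader
```

In a real network each node would draw its rank locally and compare ranks only with messages, which is exactly where the message and time bounds discussed above come into play.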

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: There have been few histological or ultrastructural studies of the outer retina and choriocapillaris following panretinal photocoagulation therapy. This investigation examines the long-term morphological effects of panretinal photocoagulation in two patients with type II diabetes who had received laser treatment more than 6 months prior to death.

METHODS: Regions of retina and choroid from each patient were fixed in 2.5% glutaraldehyde, dissected out and examined using light microscopy and scanning and transmission electron microscopy.

RESULTS: After removing the neural retina, scanning electron microscopy of non-photocoagulated areas of the eye cups revealed normal cobblestone-like retinal pigment epithelial (RPE) cells. Regions with laser scars showed little RPE infiltration into the scar area, although large rounded cells often appeared in isolation within these areas. Sections of the retina and choroid in burn regions showed a complete absence of the outer nuclear layer and photoreceptor cells, with the inner retinal layers lying in close apposition to Bruch's membrane. Non-photocoagulated regions of the retina and choroid appeared normal in terms of both cell number and cell distribution. The RPE layer was absent within burn scars but many RPE-like cells appeared markedly hypertrophic at the edges of these regions. Bruch's membrane always remained intact, although the underlying choriocapillaris was clearly disrupted at the point of photocoagulation burns, appearing largely fibrosed and non-perfused. Occasional choroidal capillaries occurring in this region were typically small in profile and had plump non-fenestrated endothelium.

CONCLUSIONS: This study outlines retinal and choroidal cell responses to panretinal photocoagulation in diabetic patients and demonstrates an apparent reduction in the capacity of these tissues to repair laser damage.

Relevance:

10.00%

Publisher:

Abstract:

In distributed networks, it is often useful for the nodes to be aware of dense subgraphs, e.g., such a dense subgraph could reveal dense substructures in otherwise sparse graphs (e.g. the World Wide Web or social networks); these might reveal community clusters or dense regions for possibly maintaining good communication infrastructure. In this work, we address the problem of self-awareness of nodes in a dynamic network with regards to graph density, i.e., we give distributed algorithms for maintaining dense subgraphs that the member nodes are aware of. The only knowledge that the nodes need is that of the dynamic diameter D, i.e., the maximum number of rounds it takes for a message to traverse the dynamic network. For our work, we consider a model where the number of nodes is fixed, but a powerful adversary can add or remove a limited number of edges from the network at each time step. The communication is by broadcast only and follows the CONGEST model. Our algorithms are continuously executed on the network, and at any time (after some initialization) each node will be aware if it is part (or not) of a particular dense subgraph. We give algorithms that (2 + ε)-approximate the densest subgraph and (3 + ε)-approximate the at-least-k-densest subgraph (for a given parameter k). Our algorithms work for a wide range of parameter values and run in O(D log n) time. Further, a special case of our results also gives the first fully decentralized approximation algorithms for densest and at-least-k-densest subgraph problems for static distributed graphs. © 2012 Springer-Verlag.
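The distributed (2 + ε)-approximation has a well-known centralized counterpart: greedy peeling, which repeatedly removes a minimum-degree vertex and keeps the densest intermediate subgraph seen. The Python sketch below shows that sequential baseline (it is not the paper's distributed algorithm), with density defined as |E| / |V|:

```python
def densest_subgraph_peel(adj):
    """Greedy peeling: repeatedly delete a minimum-degree vertex,
    tracking the best density |E|/|V| seen over all intermediate
    subgraphs. This classic procedure 2-approximates the densest
    subgraph in the centralized, static setting."""
    adj = {u: set(vs) for u, vs in adj.items()}   # defensive copy
    m = sum(len(vs) for vs in adj.values()) // 2  # current edge count
    best_density, best_set = 0.0, set(adj)
    current = set(adj)
    while current:
        density = m / len(current)
        if density > best_density:
            best_density, best_set = density, set(current)
        u = min(current, key=lambda x: len(adj[x]))  # min-degree vertex
        m -= len(adj[u])
        for v in adj[u]:
            adj[v].discard(u)
        adj[u] = set()
        current.discard(u)
    return best_density, best_set
```

On a triangle plus a disjoint edge, peeling strips the sparse part first and reports the triangle (density 1.0) as the densest subgraph.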

Relevance:

10.00%

Publisher:

Abstract:

Healing algorithms play a crucial part in distributed peer-to-peer networks where failures occur continuously and frequently. Whereas other approaches to robustness rely largely on built-in redundancy, we adopt a responsive approach more akin to that of biological networks, e.g., the brain. The general goal of self-healing distributed graphs is to maintain certain network properties while recovering from failure quickly and making bounded alterations locally. Several self-healing algorithms have been suggested in the recent literature [IPDPS'08, PODC'08, PODC'09, PODC'11]; they heal various network properties while fulfilling competing requirements such as keeping the degree increase low while maintaining connectivity, expansion, and low stretch of the network. In this work, we augment the previous algorithms by adding the notion of edge-preserving self-healing, which requires the healing algorithm not to delete any edges originally present or adversarially inserted. This reflects the cost of adding additional edges, but, more importantly, it immediately follows that edge preservation helps maintain any monotonic subgraph-induced property, in particular important properties such as graph and subgraph densities. Density is an important network property, and in certain distributed networks maintaining it preserves high connectivity among certain subgraphs and backbones. We introduce a general model of self-healing, and introduce xheal+, an edge-preserving version of xheal [PODC'11]. © 2012 IEEE.

Relevance:

10.00%

Publisher:

Abstract:



We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that the following process continues for up to n rounds, where n is the total number of nodes initially in the network: the adversary deletes an arbitrary node from the network, then the network responds by quickly adding a small number of new edges.

We present a distributed data structure that ensures two key properties. First, the diameter of the network is never more than O(log Delta) times its original diameter, where Delta is the maximum degree of the network initially. We note that for many peer-to-peer systems, Delta is polylogarithmic, so the diameter increase would be an O(log log n) multiplicative factor. Second, the degree of any node never increases by more than 3 over its original degree. Our data structure is fully distributed, has O(1) latency per round and requires each node to send and receive O(1) messages per round. The data structure requires an initial setup phase that has latency equal to the diameter of the original network, and requires, with high probability, each node v to send O(log n) messages along every edge incident to v. Our approach is orthogonal and complementary to traditional topology-based approaches to defending against attack.

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of self-healing in networks that are reconfigurable in the sense that they can change their topology during an attack. Our goal is to maintain connectivity in these networks, even in the presence of repeated adversarial node deletion, by carefully adding edges after each attack. We present a new algorithm, DASH, that provably ensures that: 1) the network stays connected even if an adversary deletes up to all nodes in the network; and 2) no node ever increases its degree by more than 2 log n, where n is the number of nodes initially in the network. DASH is fully distributed; adds new edges only among neighbors of deleted nodes; and has average latency and bandwidth costs that are at most logarithmic in n. DASH has these properties irrespective of the topology of the initial network, and is thus orthogonal and complementary to traditional topology-based approaches to defending against attack. We also prove lower bounds showing that DASH is asymptotically optimal in terms of minimizing maximum degree increase over multiple attacks. Finally, we present empirical results on power-law graphs that show that DASH performs well in practice, and that it significantly outperforms naive algorithms in reducing maximum degree increase.
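The flavor of such a repair can be sketched in a few lines. The toy Python below is illustrative only (DASH's actual bookkeeping, which achieves the 2 log n degree bound over repeated attacks, is more involved): it reconnects the surviving neighbors of a deleted node into a heap-shaped binary tree, preserving connectivity while each repair adds at most two edges to any single neighbor:

```python
def heal_deletion(adj, deleted):
    """Toy DASH-style repair: remove `deleted` from the adjacency map
    `adj` and link its former neighbors as a heap-shaped binary tree
    (child i connects to parent (i - 1) // 2), so the neighborhood
    stays connected and each neighbor gains at most 2 new edges."""
    neighbors = sorted(adj.pop(deleted, set()))
    for v in neighbors:
        adj[v].discard(deleted)
    for i in range(1, len(neighbors)):
        a, b = neighbors[(i - 1) // 2], neighbors[i]
        adj[a].add(b)
        adj[b].add(a)
    return adj
```

Deleting the center of a star, for instance, leaves the leaves connected through the replacement tree instead of disconnecting them.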

Relevance:

10.00%

Publisher:

Abstract:

Real-world graphs or networks tend to exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Much effort has been directed into creating realistic and tractable models for unlabelled graphs, which has yielded insights into graph structure and evolution. Recently, attention has moved to creating models for labelled graphs: many real-world graphs are labelled with both discrete and numeric attributes. In this paper, we present AGWAN (Attribute Graphs: Weighted and Numeric), a generative model for random graphs with discrete labels and weighted edges. The model is easily generalised to edges labelled with an arbitrary number of numeric attributes. We include algorithms for fitting the parameters of the AGWAN model to real-world graphs and for generating random graphs from the model. Using the Enron “who communicates with whom” social graph, we compare our approach to state-of-the-art random labelled graph generators and draw conclusions about the contribution of discrete vertex labels and edge weights to the structure of real-world graphs.
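The generative direction of such a model can be sketched as follows. The parameter names below are hypothetical and the sketch draws each edge weight from a single Gaussian per label pair, where AGWAN itself fits richer mixture models to real graphs:

```python
import random

def sample_agwan_like(n, labels, edge_prob, weight_mu, weight_sigma, seed=1):
    """Hypothetical sketch in the spirit of a labelled weighted-graph
    generator: each vertex draws a discrete label; each vertex pair is
    joined with a probability depending on the (unordered) label pair
    and, if joined, gets a weight from a per-label-pair Gaussian."""
    rng = random.Random(seed)
    vlabel = [rng.choice(labels) for _ in range(n)]
    edges = {}
    for u in range(n):
        for v in range(u + 1, n):
            pair = tuple(sorted((vlabel[u], vlabel[v])))
            if rng.random() < edge_prob[pair]:
                weight = max(0.0, rng.gauss(weight_mu[pair], weight_sigma[pair]))
                edges[(u, v)] = weight
    return vlabel, edges
```

Fitting the model then amounts to estimating `edge_prob` and the weight distributions from an observed labelled graph, which is the direction the AGWAN fitting algorithms take.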

Relevance:

10.00%

Publisher:

Abstract:

Modern networks are large, highly complex, and dynamic, and the agents comprising many of them are mobile. It is difficult or even impossible for such systems to be managed centrally in an efficient manner, so it is imperative that they attain a degree of self-management. Self-healing, i.e. the capability of a system in a good state to recover to another good state in the face of an attack, is desirable for such systems. In this paper, we discuss the self-healing model for dynamic reconfigurable systems. In this model, an omniscient adversary inserts or deletes nodes from a network, and the algorithm responds by adding a limited number of edges in order to maintain invariants of the network. We review some of the results in this model and argue for their applicability and for further extensions of the results and the model. We also revisit some of the techniques used in our earlier work; in particular, we look at the idea of maintaining virtual graphs mapped over the existing network and assert that this may be a useful technique in many problem domains.

Relevance:

10.00%

Publisher:

Abstract:

In distributed networks, some groups of nodes may have more inter-connections, perhaps due to their larger bandwidth availability or communication requirements. In many scenarios, it may be useful for the nodes to know if they form part of a dense subgraph, e.g., such a dense subgraph could form a high bandwidth backbone for the network. In this work, we address the problem of self-awareness of nodes in a dynamic network with regards to graph density, i.e., we give distributed algorithms for maintaining dense subgraphs (subgraphs that the member nodes are aware of). The only knowledge that the nodes need is that of the dynamic diameter D, i.e., the maximum number of rounds it takes for a message to traverse the dynamic network. For our work, we consider a model where the number of nodes is fixed, but a powerful adversary can add or remove a limited number of edges from the network at each time step. The communication is by broadcast only and follows the CONGEST model in the sense that only messages of O(log n) size are permitted, where n is the number of nodes in the network. Our algorithms are continuously executed on the network, and at any time (after some initialization) each node will be aware if it is part (or not) of a particular dense subgraph. We give algorithms that approximate both the densest subgraph, i.e., the subgraph of the highest density in the network, and the at-least-k-densest subgraph (for a given parameter k), i.e., the densest subgraph of size at least k. We give a (2 + ε)-approximation algorithm for the densest subgraph problem. The at-least-k-densest subgraph is known to be NP-hard for the general case in the centralized setting and the best known algorithm gives a 2-approximation. We present an algorithm that maintains a (3 + ε)-approximation in our distributed, dynamic setting. Our algorithms run in O(D log n) time. © 2012 Authors.

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick "repairs," which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been l in the graph formed by considering only the adversarial insertions (not the adversarial deletions), will be at distance at most l log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only, will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network. © Springer-Verlag 2012.

Relevance:

10.00%

Publisher:

Abstract:

This study investigates the drilling of carbon-fiber reinforced plastic (CFRP) composites with a multilayer TiAlN/TiN PVD-coated tungsten carbide drill. The effects of process parameters have been investigated in drilling of Hexcel M21-T700GC. Thrust force and torque were measured online throughout the drilling experiments. Delamination was observed using an optical microscope and analyzed via a developed algorithm based on digital image processing. The surface roughness of each hole was measured using a surface profilometer. In addition, the progression of tool wear on the various surfaces of the drill was observed using a tool microscope and measured using image software. Our results indicate that the thrust force and torque increased with increasing cutting speed and feed rate, while delamination and average surface roughness rose with increasing feed rate but decreased with increasing cutting speed. Feed rate was found to be the predominant factor affecting the drilling outputs. Abrasive wear was observed on both flank and relief surfaces, which created edge wear on the cutting edges. No sign of chipping or plastic deformation was observed on the surfaces of the drills. © 2012 The Author(s).

Relevance:

10.00%

Publisher:

Abstract:

This thesis investigates the hydrodynamics of a small, seabed mounted, bottom hinged, wave energy converter in shallow water. The Oscillating Wave Surge Converter is a pitching flap-type device which is located in 10-15m of water to take advantage of the amplification of horizontal water particle motion in shallow water. A conceptual model of the hydrodynamics of the device has been formulated and shows that, as the motion of the flap is highly constrained, the magnitude of the force applied to the flap by the wave is strongly linked to the power absorption.

An extensive set of experiments has been carried out in the wave tank at Queen’s University at both 40th and 20th scales. The experiments have included testing in realistic sea states to estimate device performance as well as fundamental tests using small amplitude monochromatic waves to determine the force applied to the flap by the waves. The results from the physical modelling programme have been used in conjunction with numerical data from WAMIT to validate the conceptual model.

The work finds that tuning the OWSC to the incident wave periods is problematic and only results in a marginal increase in power capture. It is also found that the addition of larger diameter rounds to the edges of the flap reduces viscous losses and has a greater effect on the performance of the device than tuning. As wave force is the primary driver of device performance it is shown that the flap should fill the water column and should pierce the water surface to reduce losses due to wave overtopping.

With the water depth fixed at approximately 10m it is shown that the width of the flap has the greatest impact on the magnitude of wave force, and thus device performance. An 18m wide flap is shown to have twice the absorption efficiency of a 6m wide flap and to capture 6 times the power. However, the increase in power capture with device width is not limitless, and a 24m wide flap is found to be affected by two-dimensional hydrodynamics which reduces its performance per unit width, especially in sea states with short periods. It is also shown that as the width increases the performance gains associated with the addition of the end effectors reduce. Furthermore, it is shown that as the flap width increases the natural pitching period of the flap increases, thus detuning the flap further from the wave periods of interest for wave energy conversion.

The effect of waves approaching the flap from an oblique angle is also investigated and the power capture is found to decrease with the cosine squared of the encounter angle. The characteristic of the damping applied by the power take off system is found to have a significant effect on the power capture of the device, with constant damping producing between 20% and 30% less power than quadratic damping. Furthermore, it is found that applying a higher level of damping, or a damping bias, to the flap as it pitches towards the beach increases the power capture by 10%.

A further set of experiments has been undertaken in a case study used to predict the power capture of a prototype of the OWSC concept. The device, called the Oyster Demonstrator, has been developed by Aquamarine Power Ltd. and is to be installed at the European Marine Energy Centre, Scotland, in 2009.

The work concludes that the OWSC is a viable wave energy converter; absorption efficiencies of up to 75% have been measured. It is found that to maximise power absorption the flap should be approximately 20m wide with large diameter rounded edges, having its pivot close to the seabed and its top edge piercing the water surface.