860 results for Destinations consolidated


Relevance:

10.00%

Publisher:

Abstract:

In an n-way broadcast application, each one of n overlay nodes wants to push its own distinct large data file to all other n-1 destinations as well as download their respective data files. BitTorrent-like swarming protocols are ideal choices for handling such massive data volume transfers. The original BitTorrent targets one-to-many broadcasts of a single file to a very large number of receivers and thus, by necessity, employs an almost random overlay topology. n-way broadcast applications, on the other hand, owing to their inherent n-squared nature, are realizable only in small- to medium-scale networks. In this paper, we show that we can leverage this scale constraint to construct optimized overlay topologies that take into consideration the end-to-end characteristics of the network and, as a consequence, deliver far superior performance compared to random and myopic (local) approaches. We present the Max-Min and Max-Sum peer-selection policies used by individual nodes to select their neighbors. The first strives to maximize the available bandwidth to the slowest destination, while the second maximizes the aggregate output rate. We design a swarming protocol suitable for n-way broadcast and operate it on top of overlay graphs formed by nodes that employ the Max-Min or Max-Sum policies. Using trace-driven simulation and measurements from a PlanetLab prototype implementation, we demonstrate that the performance of swarming on top of our constructed topologies is far superior to the performance of random and myopic overlays. Moreover, we show how to modify our swarming protocol to accommodate selfish nodes.
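As a rough illustration of the two policies, the sketch below exhaustively scores neighbor sets against a measured bandwidth matrix. The matrix `bw`, the two-hop bottleneck rate estimate, and all function names are assumptions made for this example, not details from the paper.

```python
from itertools import combinations

def rate_to(node, dest, chosen, bw):
    """Estimated rate from node to dest: direct if dest is a chosen
    neighbor, else the best two-hop bottleneck through one of them."""
    if dest in chosen:
        return bw[node][dest]
    return max(min(bw[node][j], bw[j][dest]) for j in chosen)

def best_wiring(node, bw, k, policy):
    """Exhaustively score every k-subset of neighbors for `node`."""
    dests = [d for d in range(len(bw)) if d != node]

    def score(chosen):
        rates = [rate_to(node, d, chosen, bw) for d in dests]
        # Max-Min: bandwidth to the slowest destination;
        # Max-Sum: aggregate output rate.
        return min(rates) if policy == "max-min" else sum(rates)

    return max(combinations(dests, k), key=score)

# Toy symmetric bandwidth matrix for 4 nodes; node 0 picks 2 neighbors.
bw = [[0, 10, 2, 8],
      [10, 0, 6, 1],
      [2, 6, 0, 9],
      [8, 1, 9, 0]]
print(best_wiring(0, bw, 2, "max-min"))
print(best_wiring(0, bw, 2, "max-sum"))
```

Exhaustive search over candidate wirings is viable here precisely because of the small-to-medium scale that the paper exploits.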

Relevance:

10.00%

Publisher:

Abstract:

Overlay networks have been used for adding and enhancing functionality for end-users without requiring modifications to the Internet core mechanisms. They have supported a variety of popular applications, including routing, file sharing, content distribution, and server deployment. Previous work has focused on devising practical neighbor-selection heuristics under the assumption that users conform to a specific wiring protocol. This is not a valid assumption in highly decentralized systems like overlay networks: overlay users may act selfishly and deviate from the default wiring protocols, utilizing knowledge they have about the network when selecting neighbors to improve the performance they receive from the overlay.

This thesis goes against the conventional thinking that overlay users conform to a specific protocol. Its contributions are threefold: it provides a systematic evaluation of the design space of selfish neighbor-selection strategies in real overlays, evaluates the performance of overlay networks whose users select their neighbors selfishly, and examines the implications of selfish neighbor and server selection for overlay protocol design and service provisioning, respectively.

The thesis develops a game-theoretic framework that provides a unified approach to modeling Selfish Neighbor Selection (SNS) wiring procedures on behalf of selfish users. The model is general and takes into consideration costs reflecting network latency and user preference profiles, the inherent directionality in overlay maintenance protocols, and connectivity constraints imposed on the system designer. Within this framework, the notion of a user's "best response" wiring strategy is formalized as a k-median problem on asymmetric distance and is used to obtain overlay structures in which no node can re-wire to improve the performance it receives from the overlay. Evaluation results presented in this thesis indicate that selfish users can reap substantial performance benefits when connecting to overlay networks composed of non-selfish users. In addition, in overlays that are dominated by selfish users, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naïve wiring strategies.

To capitalize on the performance advantages of optimal neighbor-selection strategies and the emergent global wirings that result, this thesis presents EGOIST: an SNS-inspired overlay network creation, maintenance, and routing system. Through an extensive measurement study on the deployed prototype, results presented in this thesis show that EGOIST's neighbor-selection primitives outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, these results demonstrate that EGOIST is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overheads.

This thesis also studies selfish neighbor-selection strategies for swarming applications. The main focus is on n-way broadcast applications, where each of n overlay users wants to push its own distinct file to all other destinations as well as download their respective data files. Results presented in this thesis demonstrate that the performance of our swarming protocol for n-way broadcast on top of overlays of selfish users is far superior to its performance on top of existing overlays.
In the context of service provisioning, this thesis examines the use of distributed approaches that enable a provider to determine the number and location of servers for optimal delivery of content or services to its selfish end-users. To leverage recent advances in virtualization technologies, this thesis develops and evaluates a distributed protocol that migrates servers based on end-user demand and only on local topological knowledge. Results under a range of network topologies and workloads suggest that the performance of the distributed deployment is comparable to that of the optimal but unscalable centralized deployment.
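One plausible reading of such a demand-driven, local-knowledge migration rule is the greedy 1-median-style local search sketched below. The rule and all names are assumptions for illustration, not the thesis protocol; a real deployment would estimate costs from observed requests rather than from a global distance table.

```python
# Toy sketch (assumptions mine): a server repeatedly hops to the
# adjacent node that most reduces the demand-weighted distance to
# its clients, stopping at a local optimum.

def local_cost(node, demand, dist):
    """Demand-weighted access cost if the server sits at `node`."""
    return sum(w * dist[node][u] for u, w in demand.items())

def migrate(server, neighbors, demand, dist):
    """Greedy migration: move while some neighbor lowers the cost."""
    while True:
        here = local_cost(server, demand, dist)
        best = min(neighbors[server], key=lambda v: local_cost(v, demand, dist))
        if local_cost(best, demand, dist) >= here:
            return server          # local optimum: no neighbor improves
        server = best              # migrate the (virtualized) server

# Path graph 0-1-2-3 with most demand at node 3.
dist = [[abs(i - j) for j in range(4)] for i in range(4)]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
demand = {0: 1, 3: 5}
print(migrate(0, neighbors, demand, dist))  # drifts to node 3
```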

Relevance:

10.00%

Publisher:

Abstract:

The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows, using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate the nodes that make up the so-called "backbone" from those that border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources declines significantly after two, from the perspective of interface, node, link, and node-degree discovery. We show that the utility of adding destinations remains constant for interfaces, nodes, links, and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared-link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
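The marginal-utility bookkeeping described here amounts to counting how much previously unseen topology each extra vantage point contributes. The minimal sketch below (illustrative, not the paper's tooling; data and names are invented) makes that concrete.

```python
# Given per-source sets of discovered items (interfaces, nodes, or
# links), report each additional source's marginal contribution.

def marginal_utility(discoveries):
    """discoveries: list of sets, one per measurement source."""
    seen, gains = set(), []
    for per_source in discoveries:
        new = per_source - seen
        gains.append(len(new))      # newly discovered items from this source
        seen |= per_source
    return gains

# Toy data: each set holds link identifiers seen from one source.
src = [{1, 2, 3, 4}, {3, 4, 5}, {4, 5}, {2, 5}]
print(marginal_utility(src))        # [4, 1, 0, 0] -- rapidly diminishing
```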

Relevance:

10.00%

Publisher:

Abstract:

In a typical overlay network for routing or content sharing, each node must select a fixed number of immediate overlay neighbors for routing traffic or content queries. A selfish node entering such a network would select neighbors so as to minimize the weighted sum of expected access costs to all its destinations. Previous work on selfish neighbor selection has built intuition with simple models where edges are undirected, access costs are modeled by hop-counts, and nodes have potentially unbounded degrees. However, in practice, important constraints not captured by these models lead to richer games with substantively and fundamentally different outcomes. Our work models neighbor selection as a game involving directed links, constraints on the number of allowed neighbors, and costs reflecting both network latency and node preference. We express a node's "best response" wiring strategy as a k-median problem on asymmetric distance, and use this formulation to obtain pure Nash equilibria. We experimentally examine the properties of such stable wirings on synthetic topologies, as well as on real topologies and maps constructed from PlanetLab and AS-level Internet measurements. Our results indicate that selfish nodes can reap substantial performance benefits when connecting to overlay networks composed of non-selfish nodes. On the other hand, in overlays that are dominated by selfish nodes, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naive wiring strategies.
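For a small overlay, the best-response computation is easy to state by brute force; the sketch below illustrates the idea only and is not the paper's k-median machinery. The graph encoding, the uniform preference weights, and all names are assumptions.

```python
import heapq
from itertools import combinations

def dijkstra(adj, src):
    """Shortest-path distances from src over a directed weighted graph."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def best_response(i, others_adj, latency, pref, k):
    """Try every k-subset of out-neighbors for node i; keep the wiring
    minimizing the preference-weighted sum of access costs."""
    nodes = [v for v in latency if v != i]
    best, best_cost = None, float("inf")
    for wiring in combinations(nodes, k):
        adj = dict(others_adj)                         # everyone else's links
        adj[i] = [(v, latency[i][v]) for v in wiring]  # i's trial out-links
        dist = dijkstra(adj, i)
        cost = sum(pref[v] * dist.get(v, float("inf")) for v in nodes)
        if cost < best_cost:
            best, best_cost = wiring, cost
    return best, best_cost

# Three nodes; node 0 may keep only k=1 directed out-link.
latency = {0: {1: 1, 2: 5}, 1: {0: 1, 2: 1}, 2: {0: 5, 1: 1}}
others = {1: [(2, 1)], 2: [(1, 1)]}
print(best_response(0, others, latency, {1: 1, 2: 1}, k=1))  # ((1,), 3.0)
```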

Relevance:

10.00%

Publisher:

Abstract:

Within a recently developed low-power ad hoc network system, we present a transport protocol (JTP) whose goal is to reduce power consumption without trading off the delivery requirements of applications. JTP has the following features: it is lightweight, with end-nodes controlling in-network actions by encoding delivery requirements in packet headers; it enables applications to specify a range of reliability requirements, thus allocating the right energy budget to packets; it minimizes feedback control traffic from the destination by varying its frequency based on delivery requirements and the stability of the network; it minimizes energy consumption by implementing in-network caching and increasing the chances that data retransmission requests from destinations "hit" these caches, thus avoiding costly source retransmissions; and it fairly allocates bandwidth among flows by backing off the sending rate of a source to account for in-network retransmissions on its behalf. Analysis and extensive simulations demonstrate the energy gains of JTP over one-size-fits-all transport protocols.
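As a concrete, though hypothetical, illustration of encoding delivery requirements in packet headers, the sketch below packs a reliability budget and a feedback interval into a fixed-size header. The actual JTP field layout is not specified in the abstract; this format is invented for the example.

```python
import struct

# Hypothetical layout: <seq: u32> <reliability: u8, 0-100%> <feedback: u8, pkts>
HEADER = struct.Struct("!IBB")

def make_packet(seq, reliability_pct, feedback_every, payload: bytes):
    # In-network caches along the path can read the reliability budget
    # and decide whether a lost fragment merits a local retransmission.
    return HEADER.pack(seq, reliability_pct, feedback_every) + payload

def parse_header(pkt):
    seq, rel, fb = HEADER.unpack_from(pkt)
    return {"seq": seq, "reliability": rel, "feedback_every": fb}

pkt = make_packet(7, 80, 16, b"sensor-reading")
print(parse_header(pkt))  # {'seq': 7, 'reliability': 80, 'feedback_every': 16}
```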

Relevance:

10.00%

Publisher:

Abstract:

In this work we introduce a new mathematical tool for optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of the data traffic transiting that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations closely resembling Maxwell's equations in electrostatic theory. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient).

In one application of our mathematical model, we offer a scheme for energy-efficient routing. Our routing scheme is based on raising the permittivity coefficient in the places of the network where nodes have high residual energy, and lowering it where the nodes do not have much energy left. Our simulations show that our method yields a significant increase in network lifetime compared to the shortest-path and weighted shortest-path schemes.

Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The complexity of the optimization problem in this case lies in defining the regions of attraction and deciding how much communication load to assign to each destination so as to optimize the performance of the network. We use our vector field model to solve this optimization problem. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that in the optimal assignment of the network's communication load to the destinations, the value of that potential field must be equal at the locations of all the destinations.

Another application of our vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm, to be applied during the design phase of a network, that relocates the destinations to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
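A numerical sketch of the electrostatics analogy might look as follows: solve a variable-permittivity Poisson-type equation on a grid, then forward packets along the resulting field. The discretization, relaxation scheme, grid layout, and all parameters below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

n = 21
rho = np.zeros((n, n))          # "charge": data sources and sinks
rho[3, 3] = 1.0                 # a sensor generating data (positive charge)
rho[17, 17] = -1.0              # the destination (negative charge)

eps = np.ones((n, n))           # permittivity: raised where residual
eps[6:15, 8:13] = 4.0           # energy is high, steering routes there

# Face permittivities for a 5-point discretization of div(eps grad phi) = -rho.
eN = (eps[1:-1, 1:-1] + eps[:-2, 1:-1]) / 2
eS = (eps[1:-1, 1:-1] + eps[2:, 1:-1]) / 2
eW = (eps[1:-1, 1:-1] + eps[1:-1, :-2]) / 2
eE = (eps[1:-1, 1:-1] + eps[1:-1, 2:]) / 2

phi = np.zeros((n, n))          # potential field, phi = 0 on the boundary
for _ in range(5000):           # Jacobi relaxation
    phi[1:-1, 1:-1] = (eN * phi[:-2, 1:-1] + eS * phi[2:, 1:-1] +
                       eW * phi[1:-1, :-2] + eE * phi[1:-1, 2:] +
                       rho[1:-1, 1:-1]) / (eN + eS + eW + eE)

# Routing vector field D = -eps * grad(phi): at any grid location a
# packet is forwarded in the direction of D toward the sink.
gy, gx = np.gradient(phi)
Dy, Dx = -eps * gy, -eps * gx
```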
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to simultaneous testing by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a test of responsiveness for aggregates that we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates.

In the next step we modify CAPM to obtain methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing its response. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to find suitable signatures; specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of this orthogonality, performance does not degrade due to cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
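The sketch below is a toy simulation of the signature idea, my simplification of the abstract rather than the authors' code: orthogonal Walsh-Hadamard signatures let simultaneously testing routers each recover the aggregate's response to their own perturbation by correlation.

```python
import numpy as np

# Sylvester construction of an 8x8 Hadamard matrix; rows 1..3 are
# zero-mean, mutually orthogonal +/-1 signatures for three routers.
H = np.array([[1]])
while H.shape[0] < 8:
    H = np.kron(H, np.array([[1, 1], [1, -1]]))
sigs = H[1:4].astype(float)
T = sigs.shape[1]

rng = np.random.default_rng(0)
resp = np.array([2.0, 0.0, 1.0])   # true per-test responsiveness (toy values)

# Aggregate send rate: each router's signed drop perturbation moves the
# rate by resp_k * sig_k; a non-responsive aggregate (resp=0) ignores it.
rate = 100.0 - resp @ sigs + rng.normal(0, 0.1, T)

# Orthogonality: correlating with one signature cancels all the others,
# so simultaneous tests do not interfere.
estimate = -(sigs @ rate) / T
print(estimate.round(2))           # approximately [ 2.  0.  1.]
```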

Relevance:

10.00%

Publisher:

Abstract:

One of the most important river basins of the Iberian Peninsula is that of the Tajo, both for its extent and for its discharge. It is a tectonic trench that can be described as textbook. Two mountain masses, the Sistema Central and, in the broad sense, the Montes de Toledo, flank it to the north and south. The sunken block, formed of the same materials as the sierras, granites and gneisses, reaches a great depth. To the east, the Castilian Sistema Ibérico, mainly calcareous and Mesozoic, closes Castile and the basin, giving life with the water of its snows to the infant Tajo. The beginning of its geological history can be placed in the Paleozoic, the geological period during which the territories where the Meseta now lies were forming great mountain ranges produced by the Hercynian orogeny. The last stage in the formation of the basin's present relief is found in the reactivation of the old, levelled massifs. It begins with the materials of the raña and their equivalents in the centre of the Tajo basin, or trench, and is characterized by a progressive individualization of processes: from the great generalized surfaces of massifs and basins, sierras and the Tajo trench, to the small fringe or border plains localized within each river basin as these were consolidated through hierarchical organization, starting from a generating river or main collector, the Tajo. Tectonics, later capture processes, and climatic readjustments do not yet allow us to determine the order of hierarchy among the rivers we know today; nevertheless, it may be ventured that the Jarama-Henares, Perales-Alberche and Guadarrama came first, followed by the Manzanares, Guadalix and Tajuña, and so on. The combination of geological, lithological and climatic conditions has contributed, by hindering or favouring it, to the development and differentiation of the plant landscapes of the mountainous zones and those of the Tertiary depressions and Paleozoic peneplains, in a territory marked by the predominance of a continentalized Mediterranean climate, with mountain nuances and areas of Atlantic influence.

Relevance:

10.00%

Publisher:

Abstract:

Animals control contact with surfaces when locomoting, catching prey, etc. This requires sensorily guiding the rate of closure of gaps between effectors such as the hands, feet, or jaws and destinations such as a ball, the ground, or prey. Control is generally rapid, reliable, and robust, even with small nervous systems: the sensorimotor processes are therefore probably rather simple. We tested a hypothesis, based on general tau theory, that closing two gaps simultaneously, as required in many actions, might be achieved simply by keeping the taus of the gaps coupled in constant ratio. The tau of a changing gap is defined as the time-to-closure of the gap at the current closure rate. General tau theory shows that the tau of a gap could, in principle, be directly sensed without needing to sense either the gap size or its rate of closure. In our experiment, subjects moved an effector (computer cursor) to a destination zone indicated on the computer monitor, to stop in the zone just as a moving target cursor reached it. The results indicated that the subjects achieved the task by keeping the tau of the gap between effector and target coupled to the tau of the gap between the effector and the destination zone. Evidence of tau-coupling has also been found, for example, in bats guiding landing using echolocation. Thus, it appears that a sensorimotor process used by different species for coordinating the closure of two or more gaps between effectors and destinations entails constantly sensing the taus of the gaps and moving so as to keep the taus coupled in constant ratio.
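In the notation of general tau theory, the quantities discussed here can be stated compactly (a standard paraphrase, not taken verbatim from the paper):

```latex
% Tau of a gap x(t): its current size over its current rate of change,
% i.e., the time-to-closure of the gap at the current closure rate.
\[
  \tau_x(t) = \frac{x(t)}{\dot{x}(t)}
\]
% The tau-coupling hypothesis tested in the experiment: the taus of the
% two gaps are kept in constant ratio throughout the movement.
\[
  \tau_x(t) = k\,\tau_y(t), \qquad k \ \text{constant}
\]
```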

Relevance:

10.00%

Publisher:

Abstract:

Call centres have in the last three decades come to define the interaction between corporations, governments, and other institutions and their respective customers, citizens, and members. From telemarketing to tele-health services, to credit card assistance, and even emergency response systems, call centres function as a nexus mediating technologically enabled labour practices with the commodification of services. Because of the ubiquitous nature of the call centre in post-industrial capitalism, the banality of these interactions often overshadows the nature of work and labour in this now-global sector. Advances in telecommunication technologies and the globalization of management practices designed to oversee and maintain standardized labour processes have made call centre work an international phenomenon. Simultaneously, these developments have dislocated assumptions about the geographic and spatial seat of work in what is defined here as the new international division of knowledge labour. The offshoring and outsourcing of call centre employment, part of the larger information technology and information technology enabled services sectors, has become a growing practice amongst governments and corporations in their attempts at controlling costs. Leading offshore destinations for call centre work, such as Canada and India, emerged as prominent locations for call centre work for these reasons. While incredible advances in technology have permitted the use of distant and “offshore” labour forces, the grander reshaping of an international political economy of communications has allowed for the acceleration of these processes. New and established labour unions have responded to these changes in the global regimes of work by seeking to organize call centre workers. These efforts have been assisted by a range of forces, not least of which is the condition of work itself, but also attempts by global union federations to build a bridge between international unionism and local organizing campaigns in the Global South and Global North. Through an examination of trade union interventions in the call centre industries located in Canada and India, this dissertation contributes to research on post-industrial employment by using political economy as a juncture between development studies, critical communications, and labour studies.

Relevance:

10.00%

Publisher:

Abstract:

Critics have observed that in early Stuart England, the broad, socially significant concept of melancholy was recoded as a specifically medical phenomenon—a disease rather than a fashion. This recoding made melancholy seem less a social attitude than a private ailment. However, I argue that at the Stuart universities, this recoded melancholy became a covert expression of the disillusionment, disappointment, and frustration produced by pressures there—the overcrowding and competition which left many men "disappointed" in preferment, alongside James I's unprecedented royal involvement in the universities. My argument has implications for Jürgen Habermas's account of the emergence of the public sphere, which he claims did not occur until the eighteenth century. I argue that although the university was increasingly subordinated to the crown's authority, a lingering sense of autonomy persisted there, a residue of the medieval university's relative autonomy from the crown; politicized by the encroaching Stuart presence, an alienated community at the university formed a kind of public in private from authority within that authority's midst. The audience for the printed book, a sphere apart from court or university, represented a forum in which the publicity at the universities could be consolidated, especially in seemingly "private" literary forms such as the treatise on melancholy. I argue that Robert Burton's exaggerated performance of melancholy in The Anatomy of Melancholy, which gains him license to say almost anything, resembles the performed melancholy that the student-prince Hamlet uses to frustrate his uncle's attempts to surveil him. After tracing melancholy's evolving literary function through Hamlet, I go on to discuss James's interventions into the universities. I conclude by considering two printed (and widely circulated) books by university men: the aforementioned The Anatomy of Melancholy by Burton, an Oxford cleric, and The Temple by George Herbert, who left a career as Cambridge's public orator to become a country parson. I examine how each of these books uses the affective pattern of courtly-scholarly disappointment—transumed by Burton as melancholy, and by Herbert as holy affliction—to develop an empathic form of publicity among its readership which is in tacit opposition to the Stuart court.

Relevance:

10.00%

Publisher:

Abstract:

The fundamental controls on the initiation and development of gravel-dominated deposits (beaches and barriers) on paraglacial coasts are particle size and shape, sediment supply, storm wave activity (primarily runup), relative sea-level (RSL) change, and terrestrial basement structure (primarily as it affects accommodation space). This paper examines the stochastic basis for barrier organisation as shown by variation in gravel barrier architecture. We recognise punctuated self-organisation of barrier development that is disrupted by short phases of barrier instability. The latter results from positive feedback causing barrier breakdown when sediment supply is exhausted. We examine published typologies for gravel barriers and advocate a consolidated perspective using rate of RSL change and sediment supply. We also consider the temporal variation in controls on barrier development. These are examined in terms of a simple behavioural model (BARCH) for prograding gravel barrier architecture and its sensitivity to such controls. The nature of macroscale (10²–10³ years) gravel barrier development, including inherited characteristics that influence barrier genesis, as well as forcing from changing RSL, sediment supply, headland control and barrier inertia, is examined in the context of long-surviving barriers along the southern England coastline.

Relevance:

10.00%

Publisher:

Abstract:

The vibrated stone column technique is an economical and environmentally friendly process that treats weak ground to enable it to withstand low to moderate loading conditions. The performance of the treated ground depends on various parameters such as the strengths of the in-situ and backfill materials, and the spacing, length and diameter of the columns. In practice, vibrated stone columns are frequently used for settlement control. Studies have shown that columns can fail by bulging, bending, punching or shearing. These failure mechanisms are examined in this paper. The study involved a series of laboratory model tests on a consolidated clay bed. The tests were carried out using two different materials: (a) a transparent material with 'clay-like' properties, and (b) speswhite kaolin. The tests on the transparent material have, probably for the first time, permitted visual examination of deforming granular columns during loading. They have shown that bulging was significant in long columns, whereas punching was prominent in shorter columns. The presence of the columns also greatly improved the load-carrying capacity of the soft clay bed. However, columns longer than about six times their diameter did not lead to further increases in the load-carrying capacity. This suggests that there is an optimum column length for a given arrangement of stone columns beneath a rigid footing.

Relevance:

10.00%

Publisher:

Abstract:

A 37-m thick layer of stratified clay encountered during a site investigation at Swann's Bridge, near the sea-coast at Limavady, Northern Ireland, is one of the deepest and thickest layers of this type of material recorded in Ireland. A study of the relevant literature and stratigraphic evidence obtained from the site investigation showed that despite being close to the current shoreline, the clay was deposited in a fresh-water glacial lake formed approximately 13 000 BP. The 37-m layer of clay can be divided into two separate zones. The lower zone was deposited as a series of laminated layers of sand, silt, and clay, whereas the upper zone was deposited as a largely homogeneous mixture. A comprehensive series of tests was carried out on carefully selected samples from the full thickness of the deposit. The results obtained from these tests were complex and confusing, particularly the results of tests done on samples from the lower zone. The results of one-dimensional compression tests, unconsolidated undrained triaxial tests, and consolidated undrained triaxial compression tests showed that despite careful sampling, all of the specimens from the lower zone exhibited behaviour similar to that of reconstituted clays. It was immediately clear that the results needed explanation. This paper studies possible causes of the results from tests carried out on the lower Limavady clay. It suggests a possible mechanism based on anisotropic elasticity, yielding, and destructuring that provides an understanding of the observed behaviour.

Key words: clay, laminations, disturbance, yielding, destructuring, reconstituted.