499 results for destinations


Relevance:

10.00%

Publisher:

Abstract:

Master's dissertation presented to Universidade Fernando Pessoa as part of the requirements for the degree of Master in Business Sciences.

Relevance:

10.00%

Publisher:

Abstract:

Dissertation presented to Universidade Fernando Pessoa as part of the requirements for the degree of Master in Communication Sciences, specialization in Marketing and Advertising.

Relevance:

10.00%

Publisher:

Abstract:

Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute studies have led some authors to conclude that the router graph of the Internet is a scale-free graph, or more generally a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. In this paper we argue that the evidence to date for this conclusion is at best insufficient. We show that graphs appearing to have power-law degree distributions can arise surprisingly easily when sampling graphs whose true degree distribution is not at all like a power-law. For example, given a classical sparse Erdős–Rényi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can easily appear to show a degree distribution remarkably like a power-law. We explore the reasons why this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to distinguish measurements taken from Erdős–Rényi graphs from those taken from power-law random graphs. When we apply this distinction to a number of well-known datasets, we find that the evidence for sampling bias in these datasets is strong.
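The sampling effect described in this abstract is easy to reproduce. The sketch below (an illustration under assumed parameters, not the authors' code) uses the networkx library to build a sparse Erdős–Rényi graph, collects shortest paths from a handful of random sources to many random destinations, and compares the degree histogram of the sampled subgraph with that of the full graph; plotted on log-log axes, the sampled histogram shows a heavy, power-law-looking tail even though the true degrees are approximately Poisson.

# Sketch: shortest-path sampling of a sparse Erdos-Renyi graph
# (illustrative parameters; requires the networkx package).
import collections
import random
import networkx as nx

random.seed(1)
n, mean_degree = 10000, 4
G = nx.fast_gnp_random_graph(n, mean_degree / n, seed=1)   # sparse G(n, p)

nodes = list(G.nodes())
sources = random.sample(nodes, 5)
destinations = random.sample(nodes, 1000)

sampled = nx.Graph()                 # union of the traced shortest paths
for s in sources:
    paths = nx.single_source_shortest_path(G, s)
    for d in destinations:
        if d in paths:               # skip unreachable destinations
            nx.add_path(sampled, paths[d])

def degree_counts(graph):
    return sorted(collections.Counter(deg for _, deg in graph.degree()).items())

print("true degree histogram:   ", degree_counts(G)[:10])
print("sampled degree histogram:", degree_counts(sampled)[:10])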

Relevance:

10.00%

Publisher:

Abstract:

In an n-way broadcast application, each one of n overlay nodes wants to push its own distinct large data file to all other n-1 destinations as well as download their respective data files. BitTorrent-like swarming protocols are ideal choices for handling such massive data volume transfers. The original BitTorrent targets one-to-many broadcasts of a single file to a very large number of receivers and thus, by necessity, employs an almost random overlay topology. n-way broadcast applications, on the other hand, owing to their inherent n-squared nature, are realizable only in small to medium scale networks. In this paper, we show that we can leverage this scale constraint to construct optimized overlay topologies that take into consideration the end-to-end characteristics of the network and, as a consequence, deliver far superior performance compared to random and myopic (local) approaches. We present the Max-Min and Max-Sum peer-selection policies used by individual nodes to select their neighbors. The first strives to maximize the available bandwidth to the slowest destination, while the second maximizes the aggregate output rate. We design a swarming protocol suitable for n-way broadcast and operate it on top of overlay graphs formed by nodes that employ Max-Min or Max-Sum policies. Using trace-driven simulation and measurements from a PlanetLab prototype implementation, we demonstrate that the performance of swarming on top of our constructed topologies is far superior to the performance of random and myopic overlays. Moreover, we show how to modify our swarming protocol to allow it to accommodate selfish nodes.
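A minimal sketch of the two peer-selection policies, under simplifying assumptions (the bandwidth matrix is a hypothetical input, and the rate to each destination is approximated by the best one- or two-hop bottleneck through the chosen neighbor set, whereas the actual policies work from measured end-to-end characteristics):

# Sketch: Max-Min vs. Max-Sum neighbor selection for n-way broadcast.
# bw[i][j] is a hypothetical estimate of available bandwidth from i to j.
from itertools import combinations

def rate_to(node, dest, neighbors, bw):
    direct = bw[node][dest] if dest in neighbors else 0
    relayed = max((min(bw[node][j], bw[j][dest]) for j in neighbors if j != dest),
                  default=0)
    return max(direct, relayed)

def select_neighbors(node, bw, k, policy):
    dests = [d for d in range(len(bw)) if d != node]
    def score(subset):
        rates = [rate_to(node, d, set(subset), bw) for d in dests]
        return min(rates) if policy == "max-min" else sum(rates)
    return max(combinations(dests, k), key=score)

bw = [[0, 10, 10, 5],        # toy asymmetric bandwidth estimates (e.g. Mbps)
      [9, 0, 3, 1],
      [3, 6, 0, 2],
      [4, 6, 3, 0]]
print(select_neighbors(0, bw, 2, "max-min"))   # (2, 3): protects the slowest destination
print(select_neighbors(0, bw, 2, "max-sum"))   # (1, 2): maximizes the aggregate output rate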

Relevance:

10.00%

Publisher:

Abstract:

Overlay networks have been used for adding and enhancing functionality to the end-users without requiring modifications in the Internet core mechanisms. Overlay networks have been used for a variety of popular applications including routing, file sharing, content distribution, and server deployment. Previous work has focused on devising practical neighbor selection heuristics under the assumption that users conform to a specific wiring protocol. This is not a valid assumption in highly decentralized systems like overlay networks. Overlay users may act selfishly and deviate from the default wiring protocols by utilizing knowledge they have about the network when selecting neighbors to improve the performance they receive from the overlay. This thesis goes against the conventional thinking that overlay users conform to a specific protocol. The contributions of this thesis are threefold. It provides a systematic evaluation of the design space of selfish neighbor selection strategies in real overlays, evaluates the performance of overlay networks that consist of users that select their neighbors selfishly, and examines the implications of selfish neighbor and server selection for overlay protocol design and service provisioning respectively.

This thesis develops a game-theoretic framework that provides a unified approach to modeling Selfish Neighbor Selection (SNS) wiring procedures on behalf of selfish users. The model is general, and takes into consideration costs reflecting network latency and user preference profiles, the inherent directionality in overlay maintenance protocols, and connectivity constraints imposed on the system designer. Within this framework the notion of a user's "best response" wiring strategy is formalized as a k-median problem on asymmetric distance and is used to obtain overlay structures in which no node can re-wire to improve the performance it receives from the overlay. Evaluation results presented in this thesis indicate that selfish users can reap substantial performance benefits when connecting to overlay networks composed of non-selfish users. In addition, in overlays that are dominated by selfish users, the resulting stable wirings are optimized to such great extent that even non-selfish newcomers can extract near-optimal performance through naïve wiring strategies.

To capitalize on the performance advantages of optimal neighbor selection strategies and the emergent global wirings that result, this thesis presents EGOIST: an SNS-inspired overlay network creation and maintenance routing system. Through an extensive measurement study on the deployed prototype, results presented in this thesis show that EGOIST's neighbor selection primitives outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, these results demonstrate that EGOIST is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overheads.

This thesis also studies selfish neighbor selection strategies for swarming applications. The main focus is on n-way broadcast applications where each of n overlay users wants to push its own distinct file to all other destinations as well as download their respective data files. Results presented in this thesis demonstrate that the performance of our swarming protocol for n-way broadcast on top of overlays of selfish users is far superior to the performance on top of existing overlays.
In the context of service provisioning, this thesis examines the use of distributed approaches that enable a provider to determine the number and location of servers for optimal delivery of content or services to its selfish end-users. To leverage recent advances in virtualization technologies, this thesis develops and evaluates a distributed protocol to migrate servers based on end-user demand and only on local topological knowledge. Results under a range of network topologies and workloads suggest that the performance of the distributed deployment is comparable to that of the optimal but unscalable centralized deployment.
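As a rough illustration of the kind of demand-driven server migration discussed above (a toy local-improvement rule under assumed inputs, not the thesis protocol, which works from purely local knowledge), the sketch below moves a server one overlay hop at a time whenever a neighboring node lowers the total demand-weighted distance to its clients:

# Sketch: greedy one-hop server migration toward client demand
# (hypothetical rule; uses global shortest-path distances for brevity).
import networkx as nx

def migration_step(G, server, clients, demand):
    """Return the neighbor of `server` (or `server` itself) that minimizes the
    total demand-weighted shortest-path distance to the clients."""
    def cost(location):
        dist = nx.single_source_shortest_path_length(G, location)
        return sum(demand[c] * dist[c] for c in clients)
    return min([server] + list(G.neighbors(server)), key=cost)

# Toy example: a 6-node path with demand concentrated at one end.
G = nx.path_graph(6)                      # nodes 0-1-2-3-4-5
demand = {0: 1, 1: 1, 4: 5, 5: 5}
server = 1
while True:
    nxt = migration_step(G, server, list(demand), demand)
    if nxt == server:
        break
    server = nxt
print("server settles at node", server)   # drifts toward the heavy demand (node 4)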

Relevance:

10.00%

Publisher:

Abstract:

The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources goes down significantly after 2 from the perspective of interface, node, link and node degree discovery. We show that the utility of adding destinations is constant for interfaces, nodes, links and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
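The marginal-utility calculation itself is straightforward. The sketch below is illustrative only: the traces dictionary is a hypothetical stand-in for real traceroute output, and the paper additionally resolves multiple interfaces to single routers. It counts how many previously unseen links each additional source contributes.

# Sketch: marginal utility of adding measurement sources, measured in
# newly discovered links per source.
def marginal_link_utility(traces):
    seen = set()
    gains = []
    for source in sorted(traces):
        before = len(seen)
        for hops in traces[source].values():
            seen.update(zip(hops, hops[1:]))   # consecutive hops form links
        gains.append((source, len(seen) - before))
    return gains

traces = {
    "src1": {"d1": ["a", "b", "c"], "d2": ["a", "b", "e"]},
    "src2": {"d1": ["f", "b", "c"], "d2": ["f", "b", "e"]},
    "src3": {"d1": ["f", "b", "c"]},
}
print(marginal_link_utility(traces))
# [('src1', 3), ('src2', 1), ('src3', 0)] -- diminishing returns per added source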

Relevance:

10.00%

Publisher:

Abstract:

In a typical overlay network for routing or content sharing, each node must select a fixed number of immediate overlay neighbors for routing traffic or content queries. A selfish node entering such a network would select neighbors so as to minimize the weighted sum of expected access costs to all its destinations. Previous work on selfish neighbor selection has built intuition with simple models where edges are undirected, access costs are modeled by hop-counts, and nodes have potentially unbounded degrees. However, in practice, important constraints not captured by these models lead to richer games with substantively and fundamentally different outcomes. Our work models neighbor selection as a game involving directed links, constraints on the number of allowed neighbors, and costs reflecting both network latency and node preference. We express a node's "best response" wiring strategy as a k-median problem on asymmetric distance, and use this formulation to obtain pure Nash equilibria. We experimentally examine the properties of such stable wirings on synthetic topologies, as well as on real topologies and maps constructed from PlanetLab and AS-level Internet measurements. Our results indicate that selfish nodes can reap substantial performance benefits when connecting to overlay networks composed of non-selfish nodes. On the other hand, in overlays that are dominated by selfish nodes, the resulting stable wirings are optimized to such great extent that even non-selfish newcomers can extract near-optimal performance through naive wiring strategies.
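For small instances, the "best response" computation described above can be written as an exhaustive search over candidate neighbor sets. The sketch below follows the k-median formulation in spirit, under assumed inputs: dist is an asymmetric underlay distance matrix, rest gives the overlay routing cost from every other node to every destination under the existing wiring, and pref holds preference weights; real instances require a proper k-median solver rather than brute force.

# Sketch: brute-force best-response wiring for one selfish node.
from itertools import combinations

def best_response(node, k, dist, rest, pref):
    """Pick k out-neighbors for `node` minimizing the preference-weighted sum,
    over all destinations, of the cheapest cost dist[node][v] + rest[v][dest]
    through some chosen neighbor v."""
    dests = [j for j in range(len(dist)) if j != node]
    def cost(neighbors):
        return sum(pref[d] * min(dist[node][v] + rest[v][d] for v in neighbors)
                   for d in dests)
    return min(combinations(dests, k), key=cost)

# Toy 4-node instance with asymmetric distances and uniform preferences.
dist = [[0, 1, 4, 6],
        [2, 0, 1, 5],
        [5, 2, 0, 1],
        [3, 6, 2, 0]]
rest = dist                  # assume others currently route along direct edges
pref = [1, 1, 1, 1]
print(best_response(0, 2, dist, rest, pref))   # (1, 2)

Iterating such best responses across all nodes until no node can improve yields the stable wirings (pure Nash equilibria) examined in the paper.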

Relevance:

10.00%

Publisher:

Abstract:

Within a recently developed low-power ad hoc network system, we present a transport protocol (JTP) whose goal is to reduce power consumption without trading off the delivery requirements of applications. JTP has the following features: it is lightweight, in that end-nodes control in-network actions by encoding delivery requirements in packet headers; JTP enables applications to specify a range of reliability requirements, thus allocating the right energy budget to packets; JTP minimizes feedback control traffic from the destination by varying its frequency based on delivery requirements and the stability of the network; JTP minimizes energy consumption by implementing in-network caching and increasing the chances that data retransmission requests from destinations "hit" these caches, thus avoiding costly source retransmissions; and JTP fairly allocates bandwidth among flows by backing off the sending rate of a source to account for in-network retransmissions on its behalf. Analysis and extensive simulations demonstrate the energy gains of JTP over one-size-fits-all transport protocols.
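As a purely illustrative sketch of the header-driven design described above (the abstract does not give JTP's actual header layout or caching rule, so every field and threshold here is an assumption):

# Hypothetical JTP-like header: delivery requirements ride with the packet
# so in-network nodes can act on them without per-flow state.
from dataclasses import dataclass

@dataclass
class JtpLikeHeader:
    flow_id: int
    seq: int
    reliability: float      # fraction of packets the application needs delivered
    feedback_period: int    # packets between feedback reports from the destination

def should_cache(header, cache_budget_packets):
    """Toy in-network rule: spend scarce cache space on packets with strict
    delivery requirements, so retransmission requests can be served from the
    cache instead of the energy-costly source."""
    return cache_budget_packets > 0 and header.reliability >= 0.9

hdr = JtpLikeHeader(flow_id=7, seq=42, reliability=0.95, feedback_period=16)
print(should_cache(hdr, cache_budget_packets=4))    # True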

Relevance:

10.00%

Publisher:

Abstract:

In this work we introduce a new mathematical tool for optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of data being transported through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce a mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatic theory. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, and they are similar to the positive charges in electrostatics; the destinations are sinks of information and are similar to negative charges; and the network is similar to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient).

In one of the applications of our mathematical model based on vector fields, we offer a scheme for energy-efficient routing. Our routing scheme is based on changing the permittivity coefficient to a higher value in the places of the network where nodes have high residual energy, and setting it to a low value in the places where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network life compared to the shortest-path and weighted shortest-path schemes.

Our initial focus is on the case where there is only one destination in the network, and later we extend our approach to the case where there are multiple destinations. In the case of multiple destinations, we need to partition the network into several areas known as regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The complexity of the optimization problem in this case lies in how to define the regions of attraction of the destinations and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). Then we show that in the optimal assignment of the communication load of the network to the destinations, the value of that potential field should be equal at the locations of all the destinations.

Another application of our vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost function. The performance of our proposed schemes is confirmed by several examples and simulation experiments. In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks.
We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network as a response to packet drops. We define metrics that describe the responsiveness of TCP aggregates, and suggest two methods for determining the values of these quantities. The first method is based on a test in which we drop a few packets from the aggregate intentionally and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, and call the resulting technique the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate by dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets being sent at the start of the TCP 3-way handshake, and we use the fact that the rate of ACK packets being exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to the TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, which means packet drops are performed at a rate given by a function of time. We use the analogy of our problem with multiple-access communication to find signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of orthogonality, the performance does not degrade because of cross-interference from simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
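The orthogonal-signature idea is easy to illustrate numerically. The sketch below uses a deliberately simplified signal model (assumed responsiveness values and additive noise, not the paper's traffic model): two routers perturb the same aggregate simultaneously with orthogonal ±1 Walsh-Hadamard signatures, and correlating the observed rate change against each signature recovers each router's response without cross-interference.

# Sketch: separating simultaneous perturbation tests with orthogonal signatures.
import numpy as np

# Walsh-Hadamard construction: rows of H are mutually orthogonal +/-1 codes.
H = np.array([[1.0]])
while H.shape[0] < 64:
    H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
sig1, sig2 = H[1], H[2]                 # skip the all-ones row

# Hypothetical responsiveness of the aggregate to each router's drops.
resp1, resp2 = 0.8, 0.2
noise = 0.05 * np.random.default_rng(0).normal(size=H.shape[1])
observed = resp1 * sig1 + resp2 * sig2 + noise     # measured rate change

# Correlation with each signature isolates that router's test.
est1 = observed @ sig1 / len(sig1)
est2 = observed @ sig2 / len(sig2)
print(round(est1, 2), round(est2, 2))   # close to 0.8 and 0.2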

Relevance:

10.00%

Publisher:

Abstract:

Animals control contact with surfaces when locomoting, catching prey, etc. This requires sensorily guiding the rate of closure of gaps between effectors such as the hands, feet or jaws and destinations such as a ball, the ground or a prey. Control is generally rapid, reliable and robust, even with small nervous systems: the sensorimotor processes are therefore probably rather simple. We tested a hypothesis, based on general tau theory, that closing two gaps simultaneously, as required in many actions, might be achieved simply by keeping the taus of the gaps coupled in constant ratio. The tau of a changing gap is defined as the time-to-closure of the gap at the current closure rate. General tau theory shows that the tau of a gap could, in principle, be directly sensed without needing to sense either the gap size or its rate of closure. In our experiment, subjects moved an effector (computer cursor) to a destination zone indicated on the computer monitor, to stop in the zone just as a moving target cursor reached it. The results indicated the subjects achieved the task by keeping the tau of the gap between effector and target coupled to the tau of the gap between the effector and the destination zone. Evidence of tau-coupling has also been found, for example, in bats guiding landing using echolocation. Thus, it appears that a sensorimotor process used by different species for coordinating the closure of two or more gaps between effectors and destinations entails constantly sensing the taus of the gaps and moving so as to keep the taus coupled in constant ratio.
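A small numerical illustration of the tau-coupling hypothesis (synthetic trajectories constructed so the coupling holds by design, not the experimental cursor data): for each gap, tau is the gap divided by its closure rate, and coupling means the ratio of the two taus stays constant over the movement.

# Sketch: checking tau-coupling on synthetic gap trajectories.
import numpy as np

T, k = 1.0, 2.0
t = np.linspace(0.0, 0.95 * T, 200)
y = 1.0 - t / T                  # target gap, closing at constant rate
x = y ** k                       # effector gap, tau-coupled to y by construction

def tau(gap, time):
    rate = np.gradient(gap, time)    # d(gap)/dt, negative while the gap closes
    return gap / rate

ratio = tau(x, t) / tau(y, t)
print(ratio[10], ratio[100], ratio[190])   # all approximately 1/k = 0.5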

Relevance:

10.00%

Publisher:

Abstract:

Call centres have in the last three decades come to define the interaction between corporations, governments, and other institutions and their respective customers, citizens, and members. From telemarketing to tele-health services, to credit card assistance, and even emergency response systems, call centres function as a nexus mediating technologically enabled labour practices with the commodification of services. Because of the ubiquitous nature of the call centre in post-industrial capitalism, the banality of these interactions often overshadows the nature of work and labour in this now-global sector. Advances in telecommunication technologies and the globalization of management practices designed to oversee and maintain standardized labour processes have made call centre work an international phenomenon. Simultaneously, these developments have dislocated assumptions about the geographic and spatial seat of work in what is defined here as the new international division of knowledge labour. The offshoring and outsourcing of call centre employment, part of the larger information technology and information technology enabled services sectors, has become a growing practice amongst governments and corporations in their attempts at controlling costs. Leading offshore destinations for call centre work, such as Canada and India, emerged as prominent locations for these reasons. While incredible advances in technology have permitted the use of distant and “offshore” labour forces, the grander reshaping of an international political economy of communications has allowed for the acceleration of these processes. New and established labour unions have responded to these changes in the global regimes of work by seeking to organize call centre workers. These efforts have been assisted by a range of forces, not least of which is the condition of work itself, but also attempts by global union federations to build a bridge between international unionism and local organizing campaigns in the Global South and Global North. Through an examination of trade union interventions in the call centre industries located in Canada and India, this dissertation contributes to research on post-industrial employment by using political economy as a juncture between development studies, critical communications, and labour studies.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study is to examine the evolution of consolidated coastal tourist destinations through a comparative analysis of Balneario Camboriú and Benidorm. These are two destinations located in different territorial and tourism contexts, for which indicators of destination evolution are contrasted empirically and the evolutionary dynamics are linked to the resulting territorial-tourism model of each destination. The analysis makes it possible to test the postulates of the classical evolutionary models (Butler, 1980) and to incorporate the new approaches of evolutionary economic geography. The research chronologically delimits the development periods of both destinations in order to identify the factors with the greatest influence on their evolution: an evolution shaped, fundamentally, by geographical location, urban-tourism planning and management at different scales, dependence on particular source markets, and the influence of macroeconomic factors. Together, this set of interrelated factors traces divergent trajectories for the destinations analysed.

Relevance:

10.00%

Publisher:

Abstract:

Recent patterns of migration indicate that international migrants are not confined to urban gateways. Instead, many migrants have settled in new destination areas located in rural and small-town areas. While this might appear to be a positive phenomenon for rural areas struggling with decline and stagnation, the reality is that many of these areas are ill-equipped to manage the rate and pace of change that has been witnessed in recent years. Migration to established, typically urban areas has been the subject of extensive research. However, little is known about the way in which migrants navigate their way through social structures as they settle into destinations with little experience of immigration. Using empirical research, this article considers the way in which migrants navigate their way through social structures to establish life in a so-called ‘new’ migration destination. It analyses the way in which government and civil society respond to the needs of recent arrivals, showing how both NGOs and the statutory sector play an important role in this process. It considers the ramifications for these different sectors and the implications for so-called ‘new’ destinations as they become more established or ‘mature’ areas of immigration.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we investigate a multiuser cognitive relay network with direct source-destination links and multiple primary destinations. In this network, multiple secondary users compete to communicate with a secondary destination assisted by an amplify-and-forward (AF) relay. We take into account the availability of direct links from the secondary users to the primary and secondary destinations. For the considered system, we select the single best secondary user to maximize the received signal-to-noise ratio (SNR) at the secondary destination. We first derive an accurate lower bound on the outage probability, and then provide an asymptotic expression for the outage probability in the high-SNR region. From the lower bound and the asymptotic expressions, we obtain several insights into the system design. Numerical and simulation results are finally presented to verify the proposed analysis.
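A Monte Carlo sketch of the selection rule described above, under assumed parameters (Rayleigh fading, an interference-power cap toward the worst primary destination, and a fixed relay power); this is a plain simulation for intuition, not the paper's analytical lower bound or asymptotics:

# Sketch: outage of best secondary-user selection with a direct link plus an
# AF relayed link, under an interference constraint to primary destinations.
import numpy as np

rng = np.random.default_rng(0)
trials, n_users, n_primary = 50_000, 4, 2
I_p, P_relay, noise, snr_th = 1.0, 1.0, 0.1, 1.0    # hypothetical parameters

def channel_gain(*shape):
    return rng.exponential(1.0, size=shape)          # |h|^2 under Rayleigh fading

outages = 0
for _ in range(trials):
    g_sp = channel_gain(n_users, n_primary)          # secondary users -> primary destinations
    g_sd = channel_gain(n_users)                     # direct links to the secondary destination
    g_sr = channel_gain(n_users)                     # secondary users -> relay
    g_rd = channel_gain()                            # relay -> secondary destination

    p_user = I_p / g_sp.max(axis=1)                  # power capped by the worst primary destination
    snr_direct = p_user * g_sd / noise
    snr_hop1 = p_user * g_sr / noise
    snr_hop2 = P_relay * g_rd / noise
    snr_af = snr_hop1 * snr_hop2 / (snr_hop1 + snr_hop2 + 1.0)   # end-to-end AF SNR
    snr_total = snr_direct + snr_af                  # combine direct and relayed links
    if snr_total.max() < snr_th:                     # even the best secondary user is in outage
        outages += 1

print("estimated outage probability:", outages / trials)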