12 results for Destination branding
in Boston University Digital Common
Abstract:
Network traffic arises from the superposition of Origin-Destination (OD) flows. Hence, a thorough understanding of OD flows is essential for modeling network traffic, and for addressing a wide variety of problems including traffic engineering, traffic matrix estimation, capacity planning, forecasting and anomaly detection. However, to date, OD flows have not been closely studied, and there is very little known about their properties. We present the first analysis of complete sets of OD flow timeseries, taken from two different backbone networks (Abilene and Sprint-Europe). Using Principal Component Analysis (PCA), we find that the set of OD flows has small intrinsic dimension. In fact, even in a network with over a hundred OD flows, these flows can be accurately modeled in time using a small number (10 or less) of independent components or dimensions. We also show how to use PCA to systematically decompose the structure of OD flow timeseries into three main constituents: common periodic trends, short-lived bursts, and noise. We provide insight into how the various constituents contribute to the overall structure of OD flows and explore the extent to which this decomposition varies over time.
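To make the low-dimensionality claim concrete, here is a minimal sketch of the PCA step on a synthetic OD-flow matrix. The matrix sizes and signal model below are assumptions for illustration, not the Abilene or Sprint-Europe measurements used in the paper.

```python
# A minimal sketch of PCA on an OD-flow timeseries matrix, using synthetic
# data (sizes and signal model are assumptions, not the paper's datasets).
import numpy as np

rng = np.random.default_rng(0)
T, F = 1008, 121                     # timebins x OD flows (hypothetical)

# Synthetic flows: a few shared diurnal trends plus noise, so the ensemble
# has small intrinsic dimension by construction.
t = np.arange(T)
trends = np.stack([np.sin(2 * np.pi * k * t / 144) for k in (1, 2, 3)])
X = rng.random((F, 3)) @ trends + 0.1 * rng.standard_normal((F, T))

# Center each flow's timeseries and take the SVD: columns of U give the
# temporal "eigenflows", V holds each flow's weights on them.
Xc = X.T - X.T.mean(axis=0)          # shape (T, F)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("variance captured by the top 10 components:", energy[:10].sum())

# Rank-k model: approximate every flow with k temporal components.
k = 10
Xk = (U[:, :k] * s[:k]) @ Vt[:k]
print("relative reconstruction error:",
      np.linalg.norm(Xc - Xk) / np.linalg.norm(Xc))
```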
Abstract:
In a recent paper, Structural Analysis of Network Traffic Flows, we analyzed the set of Origin-Destination traffic flows from the Sprint-Europe and Abilene backbone networks. This report presents the complete set of results from analyzing data from both networks. The results in this report are specific to the Sprint-1 and Abilene datasets studied in the above paper. The following results are presented here:
1. Rows of Principal Matrix (V)
   1.1 Sprint-1 Dataset
   1.2 Abilene Dataset
2. Set of Eigenflows
   2.1 Sprint-1 Dataset
   2.2 Abilene Dataset
3. Classifying Eigenflows
   3.1 Sprint-1 Dataset
   3.2 Abilene Dataset
Abstract:
Anomalies are unusual and significant changes in a network's traffic levels, which can often involve multiple links. Diagnosing anomalies is critical for both network operators and end users. It is a difficult problem because one must extract and interpret anomalous patterns from large amounts of high-dimensional, noisy data. In this paper we propose a general method to diagnose anomalies. This method is based on a separation of the high-dimensional space occupied by a set of network traffic measurements into disjoint subspaces corresponding to normal and anomalous network conditions. We show that this separation can be performed effectively using Principal Component Analysis. Using only simple traffic measurements from links, we study volume anomalies and show that the method can: (1) accurately detect when a volume anomaly is occurring; (2) correctly identify the underlying origin-destination (OD) flow which is the source of the anomaly; and (3) accurately estimate the amount of traffic involved in the anomalous OD flow. We evaluate the method's ability to diagnose (i.e., detect, identify, and quantify) both existing and synthetically injected volume anomalies in real traffic from two backbone networks. Our method consistently diagnoses the largest volume anomalies, and does so with a very low false alarm rate.
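As a loose illustration of the subspace separation (not the paper's exact statistical test), the sketch below builds a "normal" subspace from the top principal components of synthetic link measurements and flags timebins with large residual energy. The sizes, the injected anomaly, the anomaly-free training window, and the threshold rule are all assumptions.

```python
# Subspace-method sketch on synthetic link traffic: project measurements
# onto a normal subspace and flag timebins whose residual energy is large.
import numpy as np

rng = np.random.default_rng(1)
T, L = 2016, 50                          # timebins x links (hypothetical)
t = np.arange(T)
base = np.outer(np.sin(2 * np.pi * t / 288) + 2, rng.random(L))
Y = base + 0.05 * rng.standard_normal((T, L))
Y[700:710] += 3.0 * rng.random(L)        # inject a volume anomaly

Yc = Y - Y.mean(axis=0)
train = Yc[:600]                         # assumed anomaly-free training window
_, _, Vt = np.linalg.svd(train, full_matrices=False)
P = Vt[:4].T                             # basis of the normal subspace

residual = Yc - (Yc @ P) @ P.T           # part living in the anomalous subspace
spe = np.sum(residual**2, axis=1)        # squared prediction error per timebin

threshold = spe.mean() + 5 * spe.std()   # crude stand-in for the Q-statistic
print("flagged timebins:", np.where(spe > threshold)[0])
```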
Abstract:
Growing interest in inference and prediction of network characteristics is justified by its importance for a variety of network-aware applications. One widely adopted strategy to characterize network conditions relies on active, end-to-end probing of the network. Active end-to-end probing techniques differ in (1) the structural composition of the probes they use (e.g., number and size of packets, the destination of various packets, the protocols used, etc.), (2) the entity making the measurements (e.g., sender vs. receiver), and (3) the techniques used to combine measurements in order to infer specific metrics of interest. In this paper, we present Periscope: a Linux API that enables the definition of new probing structures and inference techniques from user space through a flexible interface. Periscope requires no support from clients beyond the ability to respond to ICMP ECHO REQUESTs, and is designed to minimize user/kernel crossings and to ensure various constraints (e.g., back-to-back packet transmissions, fine-grained timing measurements). We show how to use Periscope for two different probing purposes, namely the measurement of shared packet losses between pairs of endpoints and the measurement of subpath bandwidth. Results from Internet experiments for both of these goals are also presented.
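Periscope's actual API is not given in the abstract, so the sketch below only illustrates the basic building block it relies on: a client that answers ICMP ECHO REQUESTs lets a measurement host time a round trip. This is a plain user-space probe (a raw socket needs root), not Periscope's kernel-assisted interface; the address in the usage line is a placeholder.

```python
# A hypothetical user-space ICMP ECHO round-trip-time probe (requires root).
import os
import socket
import struct
import time

def checksum(data: bytes) -> int:
    # Standard ICMP one's-complement checksum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    s = (s >> 16) + (s & 0xFFFF)
    s += s >> 16
    return ~s & 0xFFFF

def icmp_echo_rtt(dest: str, timeout: float = 1.0) -> float:
    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                         socket.getprotobyname("icmp"))
    sock.settimeout(timeout)
    ident = os.getpid() & 0xFFFF
    header = struct.pack("!BBHHH", 8, 0, 0, ident, 1)  # type 8 = ECHO REQUEST
    payload = b"periscope-demo"
    csum = checksum(header + payload)
    packet = struct.pack("!BBHHH", 8, 0, csum, ident, 1) + payload
    start = time.perf_counter()
    sock.sendto(packet, (dest, 0))
    try:
        # For brevity, the reply is not matched against the identifier.
        sock.recvfrom(1024)              # ECHO REPLY (or a timeout exception)
        return time.perf_counter() - start
    finally:
        sock.close()

# print(icmp_echo_rtt("192.0.2.1"))      # placeholder address
```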
Abstract:
MPLS (Multi-Protocol Label Switching) has recently emerged to facilitate the engineering of network traffic. This can be achieved by directing packet flows over paths that satisfy multiple requirements. MPLS has been regarded as an enhancement to traditional IP routing, which has the following problems: (1) all packets with the same IP destination address have to follow the same path through the network; and (2) paths have often been computed based on static, single link metrics. These problems may cause traffic concentration, and thus degradation in quality of service. In this paper, we investigate through simulations a range of routing solutions and examine the tradeoff between scalability and performance. At one extreme, IP packet routing using dynamic link metrics provides a stateless solution but may lead to routing oscillations. At the other extreme, we consider the recently proposed Profile-based Routing (PBR), which uses knowledge of potential ingress-egress pairs as well as the traffic profile among them. Minimum Interference Routing (MIRA) is another recently proposed MPLS-based scheme, which exploits knowledge of potential ingress-egress pairs but not their traffic profile. MIRA and the more conventional widest-shortest path (WSP) routing represent alternative MPLS-based approaches on the spectrum of routing solutions. We compare these solutions in terms of utility, bandwidth acceptance ratio, scalability (routing state and computational overhead), and load-balancing capability. Although WSP is the simplest of the per-flow algorithms we consider, its performance is close to that of dynamic per-packet routing, without the potential instabilities of dynamic routing.
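For concreteness, here is a sketch of the widest-shortest path rule mentioned above: a Dijkstra-style search that orders candidates by hop count first and bottleneck residual bandwidth second. The toy graph and the hop-count metric are illustrative assumptions.

```python
# Widest-shortest path: among minimum-hop paths, prefer the one whose
# bottleneck residual bandwidth is largest.
import heapq

def wsp(graph, src, dst):
    """graph: {node: {neighbor: residual_bandwidth}}."""
    # Priority: fewer hops first, then wider bottleneck (negated for min-heap).
    best = {src: (0, float("inf"))}
    heap = [(0, -float("inf"), src, [src])]
    while heap:
        hops, neg_width, node, path = heapq.heappop(heap)
        if node == dst:
            return path, -neg_width
        for nbr, bw in graph[node].items():
            cand = (hops + 1, min(-neg_width, bw))
            if nbr not in best or (cand[0], -cand[1]) < (best[nbr][0], -best[nbr][1]):
                best[nbr] = cand
                heapq.heappush(heap, (cand[0], -cand[1], nbr, path + [nbr]))
    return None, 0.0

g = {"A": {"B": 10, "C": 5}, "B": {"D": 3}, "C": {"D": 8}, "D": {}}
print(wsp(g, "A", "D"))   # two 2-hop paths; the wider one (A-C-D, width 5) wins
```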
Abstract:
The objective of unicast routing is to find a path from a source to a destination. Conventional routing has been used mainly to provide connectivity; it lacks the ability to provide any kind of service guarantees or to make smart use of network resources. Performance can be improved by being aware of both traffic characteristics and currently available resources. This paper surveys a range of routing solutions, which can be categorized according to the algorithm's degree of awareness: (1) QoS/Constraint-based routing solutions are aware of the traffic requirements of individual connection requests; (2) Traffic-aware routing solutions assume knowledge of the location of communicating ingress-egress pairs and possibly the traffic demands among them; (3) Routing solutions that are both QoS-aware, as in (1), and traffic-aware, as in (2); (4) Best-effort solutions are oblivious to both traffic and QoS requirements, adapting only to current resource availability. The best performance can be achieved by having all possible knowledge, so that when finding a path for an individual flow, one can make a smart choice among feasible paths to increase the chances of supporting future requests. However, this usually comes at the cost of increased complexity and decreased scalability. In this paper, we discuss such cost-performance tradeoffs by surveying proposed heuristic solutions and hybrid approaches.
Abstract:
Forwarding in DTNs is a challenging problem. We focus on the specific issue of forwarding in an environment where mobile devices are carried by people in a restricted physical space (e.g., a conference) and contact patterns are not predictable. We show, for the first time, a path explosion phenomenon between most pairs of nodes: once the first path reaches the destination, the number of subsequent paths grows rapidly with time, so there usually exist many near-optimal paths. We study the path explosion phenomenon both analytically and empirically. Our results highlight the importance of unequal contact rates across nodes for understanding the performance of forwarding algorithms. We also find that a variety of well-known forwarding algorithms show surprisingly similar performance in our setting, and we interpret this fact in light of the path explosion phenomenon.
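A toy way to see the effect: process a contact trace in time order and count the time-respecting routes from source to destination as contacts accumulate (strictly, this counts time-respecting walks, since node revisits are allowed). The trace below is fabricated; the paper works with real conference mobility traces.

```python
# Count time-respecting walks from a source as a contact trace unfolds,
# illustrating how path counts explode after the first path arrives.
from collections import defaultdict

def count_paths(contacts, src, dst):
    """contacts: list of (time, u, v) tuples, sorted by time (undirected)."""
    paths = defaultdict(int)
    paths[src] = 1
    history = []
    for t, u, v in contacts:
        # A contact lets walks already at either endpoint extend to the other.
        pu, pv = paths[u], paths[v]
        paths[u] += pv
        paths[v] += pu
        history.append((t, paths[dst]))
    return history

trace = [(1, "s", "a"), (2, "a", "b"), (3, "b", "d"),
         (4, "s", "b"), (5, "a", "d"), (6, "b", "d")]
for t, n in count_paths(trace, "s", "d"):
    print(f"t={t}: {n} time-respecting walk(s) to d")
```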
Abstract:
In an n-way broadcast application each one of n overlay nodes wants to push its own distinct large data file to all other n-1 destinations as well as download their respective data files. BitTorrent-like swarming protocols are ideal choices for handling such massive data volume transfers. The original BitTorrent targets one-to-many broadcasts of a single file to a very large number of receivers and thus, by necessity, employs an almost random overlay topology. n-way broadcast applications, on the other hand, owing to their inherent n-squared nature, are realizable only in small- to medium-scale networks. In this paper, we show that we can leverage this scale constraint to construct optimized overlay topologies that take into consideration the end-to-end characteristics of the network and, as a consequence, deliver far superior performance compared to random and myopic (local) approaches. We present the Max-Min and Max-Sum peer-selection policies used by individual nodes to select their neighbors. The first strives to maximize the available bandwidth to the slowest destination, while the second maximizes the aggregate output rate. We design a swarming protocol suitable for n-way broadcast and operate it on top of overlay graphs formed by nodes that employ Max-Min or Max-Sum policies. Using trace-driven simulation and measurements from a PlanetLab prototype implementation, we demonstrate that the performance of swarming on top of our constructed topologies is far superior to the performance of random and myopic overlays. Moreover, we show how to modify our swarming protocol to allow it to accommodate selfish nodes.
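A deliberately simplified sketch of the two policies: a node exhaustively scores candidate neighbor sets under a one- or two-hop reachability model, keeping the set that maximizes either the minimum rate to any destination (Max-Min) or the sum of rates (Max-Sum). The bandwidth matrix, the two-hop model, and the brute-force search are my simplifications, not the paper's protocol.

```python
# Simplified Max-Min vs. Max-Sum peer selection over pairwise bandwidth
# estimates; destinations are reachable directly (if chosen as a neighbor)
# or via one chosen relay.
import itertools

def rate_to(me, dest, chosen, bw):
    # Best one- or two-hop rate from `me` to `dest` through chosen neighbors.
    opts = [bw[me][dest]] if dest in chosen else []
    opts += [min(bw[me][m], bw[m][dest]) for m in chosen if m != dest]
    return max(opts, default=0.0)

def select(me, nodes, bw, k, policy):
    others = [n for n in nodes if n != me]
    best_set, best_score = None, None
    for chosen in itertools.combinations(others, k):
        rates = [rate_to(me, d, chosen, bw) for d in others]
        score = min(rates) if policy == "max-min" else sum(rates)
        if best_score is None or score > best_score:
            best_set, best_score = chosen, score
    return best_set, best_score

nodes = ["a", "b", "c", "d"]
bw = {u: {v: 1.0 for v in nodes} for u in nodes}
bw["a"]["b"] = 0.1                 # a's direct link to b is slow...
bw["c"]["b"] = 5.0                 # ...but c can relay to b quickly
bw["d"]["b"] = 0.2
print(select("a", nodes, bw, 2, "max-min"))   # sets containing c score best
print(select("a", nodes, bw, 2, "max-sum"))
```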
Abstract:
Interdomain routing on the Internet is performed using route preference policies specified independently and arbitrarily by each Autonomous System (AS) in the network. These policies are used in the Border Gateway Protocol (BGP) by each AS when selecting next-hop choices for routes to each destination. Conflicts between policies used by different ASes can lead to routing instabilities that, potentially, cannot be resolved no matter how long BGP is run. The Stable Paths Problem (SPP) is an abstract graph-theoretic model of the problem of selecting next-hop routes for a destination. A stable solution to the problem is a set of next-hop choices, one for each AS, that is compatible with the policies of each AS. In a stable solution, each AS has selected its best next-hop given that the next-hop choices of all its neighbors are fixed. BGP can be viewed as a distributed algorithm for solving SPP. In this report we consider the stable paths problem, as well as a family of restricted variants, which we call F stable paths problems. We show that two very simple variants of the stable paths problem are also NP-complete. In addition, we show that for networks with a DAG topology, there is an efficient centralized algorithm to solve the stable paths problem, and that BGP always converges efficiently to a stable solution on such networks.
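To illustrate the DAG result, here is a sketch: when next-hop candidates form a DAG rooted at the destination, resolving each AS only after all its candidates are fixed yields a stable assignment in one pass. The instance encoding (explicit ranked path lists) is my own, loosely following the SPP formalism.

```python
# Centralized SPP solver for DAG topologies: each AS picks its
# highest-ranked permitted path once its next-hop choices are decided.

def solve_spp_dag(succ, prefer, dest):
    """succ maps each AS to its next-hop candidates (edges form a DAG to dest);
    prefer maps each AS to its ranked list of permitted paths (best first)."""
    chosen = {dest: (dest,)}
    resolved = {dest}
    pending = [n for n in succ if n != dest]
    while pending:
        for node in list(pending):
            if all(n in resolved for n in succ[node]):
                paths = [(node,) + chosen[n] for n in succ[node] if n in chosen]
                permitted = [p for p in prefer[node] if p in paths]
                if permitted:
                    chosen[node] = permitted[0]   # best permitted path
                resolved.add(node)
                pending.remove(node)
    return chosen

succ = {0: [], 1: [0], 2: [0, 1]}
prefer = {1: [(1, 0)], 2: [(2, 1, 0), (2, 0)]}    # AS 2 prefers the longer path
print(solve_spp_dag(succ, prefer, 0))             # {0:(0,), 1:(1,0), 2:(2,1,0)}
```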
Abstract:
We propose an economic mechanism to reduce the incidence of malware that delivers spam. Earlier research proposed attention markets as a solution for unwanted messages and showed that they could provide more net benefit than alternatives such as filtering and taxes. Because it uses a currency system, the Attention Bonds approach faces a challenge: zombies, botnets, and various forms of malware might steal valuable currency instead of stealing unused CPU cycles. We resolve this problem by taking advantage of the fact that the spam-bot problem has been reduced to financial fraud; as such, the large body of existing work in that realm can be brought to bear. By drawing an analogy between sending and spending, we show how a market mechanism can detect and prevent spam malware. We prove that by using a currency, (i) each instance of spam increases the probability of detecting infections, and (ii) the value of eradicating infections can justify insuring users against fraud. This approach attacks spam at the source, a virtue missing from filters that attack spam at the destination. Additionally, the exchange of currency provides signals of interest that can improve the targeting of ads. ISPs benefit from data management services, and consumers benefit from the higher average value of the messages they receive. We explore these and other secondary effects of attention markets, and find them to offer, on the whole, attractive economic benefits for all parties, including consumers, advertisers, and ISPs.
Abstract:
The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows, using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources drops significantly after the second source, from the perspective of interface, node, link, and node degree discovery. In contrast, the utility of adding destinations remains constant for interfaces, nodes, links, and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
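The marginal-utility computation itself is simple to sketch: add measurement sources one at a time and count how many new links each one reveals. The toy paths below are fabricated stand-ins for the traceroute-style data used in the paper.

```python
# Marginal utility of adding sources: new links discovered per added source.

def marginal_links(paths_by_source):
    seen, gains = set(), []
    for source, paths in paths_by_source.items():
        links = {(p[i], p[i + 1]) for p in paths for i in range(len(p) - 1)}
        gains.append((source, len(links - seen)))
        seen |= links
    return gains

paths_by_source = {
    "s1": [["s1", "r1", "r2", "d1"], ["s1", "r1", "r3", "d2"]],
    "s2": [["s2", "r1", "r2", "d1"], ["s2", "r1", "r3", "d2"]],
    "s3": [["s3", "r4", "r2", "d1"]],
}
for source, new in marginal_links(paths_by_source):
    print(f"{source}: {new} new links")   # diminishing returns per added source
```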
Abstract:
Within a recently developed low-power ad hoc network system, we present a transport protocol (JTP) whose goal is to reduce power consumption without trading off the delivery requirements of applications. JTP has the following features: it is lightweight, in that end-nodes control in-network actions by encoding delivery requirements in packet headers; it enables applications to specify a range of reliability requirements, thus allocating the right energy budget to packets; it minimizes feedback control traffic from the destination by varying its frequency based on delivery requirements and the stability of the network; it minimizes energy consumption by implementing in-network caching and increasing the chances that data retransmission requests from destinations "hit" these caches, thus avoiding costly source retransmissions; and it fairly allocates bandwidth among flows by backing off the sending rate of a source to account for in-network retransmissions on its behalf. Analysis and extensive simulations demonstrate the energy gains of JTP over one-size-fits-all transport protocols.
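JTP's wire format is not given in the abstract; purely as an illustration of encoding per-packet delivery requirements in a compact header that in-network nodes can read, here is a hypothetical layout. The field names, widths, and semantics are assumptions, not JTP's actual header.

```python
# Hypothetical delivery-requirements header (not JTP's real format).
import struct

HEADER_FMT = "!BBHI"   # reliability, feedback period, flow id, sequence number

def make_packet(reliability: int, feedback_period: int,
                flow_id: int, seq: int, payload: bytes) -> bytes:
    # reliability: 0 = best effort ... 255 = fully reliable; intermediate
    # values let the source spend a proportionate energy budget per packet.
    return struct.pack(HEADER_FMT, reliability, feedback_period,
                       flow_id, seq) + payload

def parse_packet(packet: bytes):
    fields = struct.unpack(HEADER_FMT, packet[:8])
    return fields, packet[8:]

pkt = make_packet(reliability=128, feedback_period=10, flow_id=7, seq=42,
                  payload=b"sensor reading")
print(parse_packet(pkt))
```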