25 results for network traffic analysis
at Universidad Politécnica de Madrid
Abstract:
Knowledge of the development of hydrographic networks can be useful for a number of research works in hydraulic engineering. We thus intend to analyse the cartography of the first work that systematically encompasses the entire hydrographic network: Tomas Lopez's Geographic Atlas of Spain (1787). To achieve this goal, we will first analyse, by means of a Geographic Information System (GIS), both the present-day cartography and the historical cartography referred to above. In comparing them, we will use the population centres that existed at the time and correspond to modern ones. The aim is to compare the following research variables in the hydrographic network: former toponyms, length of riverbeds and distance to population centres. The results of this study will show the variation in the riverbeds and the probable change in their names.
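A minimal sketch of the kind of GIS comparison outlined above, not the authors' actual workflow: it loads a georeferenced historical river layer, a modern one and the population centres, then compares riverbed lengths and distances to the centres. The file names, attribute columns ("toponym", "name") and the projected CRS are assumptions.

```python
# Illustrative sketch (not the paper's workflow): compare riverbed length and distance
# to population centres between a georeferenced historical layer and a modern one.
# File names, attribute columns and the projected CRS are assumptions.
import geopandas as gpd

CRS = "EPSG:25830"  # assumed projected CRS (UTM 30N) so lengths and distances are in metres

rivers_1787 = gpd.read_file("atlas_lopez_rivers.shp").to_crs(CRS)
rivers_now = gpd.read_file("modern_rivers.shp").to_crs(CRS)
towns = gpd.read_file("population_centres.shp").to_crs(CRS)

# Riverbed length per named river, historical vs modern
len_1787 = rivers_1787.dissolve(by="toponym").geometry.length
len_now = rivers_now.dissolve(by="name").geometry.length

# Distance from each surviving population centre to the nearest riverbed, then and now
d_1787 = gpd.sjoin_nearest(towns, rivers_1787, distance_col="d")["d"].groupby(level=0).min()
d_now = gpd.sjoin_nearest(towns, rivers_now, distance_col="d")["d"].groupby(level=0).min()

print(len_1787.head(), len_now.head())
print((d_now - d_1787).describe())  # how much closer or farther the riverbeds run today
```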
Abstract:
Compile-time program analysis techniques can be applied to Web service orchestrations to prove or check various properties. In particular, service orchestrations can be subjected to resource analysis, in which safe approximations of upper and lower resource usage bounds are deduced. A uniform analysis can be simultaneously performed for different generalized resources that can be directly correlated with cost- and performance-related quality attributes, such as invocations of partners, network traffic, number of activities, iterations, and data accesses. The resulting safe upper and lower bounds do not depend on probabilistic assumptions, and are expressed as functions of the size or length of data components of an initiating message, using a fine-grained structured data model that corresponds to the XML style of information structuring. The analysis is performed by transforming a BPEL-like representation of an orchestration into an equivalent program in another programming language for which the appropriate analysis tools already exist.
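To make the idea of size-dependent bounds concrete, here is a hedged toy example rather than output of the actual analysis tools: a hypothetical orchestration iterates over the items of the initiating message and invokes one partner per item, with at most one retry per call, so the bounds on invocations and network traffic are simple functions of the number of items.

```python
# Hedged toy example, not output of the actual analysis tools: a hypothetical
# orchestration iterates over the <items> list of the initiating message, invokes one
# partner per item (each call may be retried at most once) and sends a final notification.
def partner_invocations_lb(n_items: int) -> int:
    # lower bound: one successful call per item, plus the final notification
    return n_items + 1

def partner_invocations_ub(n_items: int) -> int:
    # upper bound: every call may be retried once before succeeding
    return 2 * n_items + 1

def network_traffic_ub(n_items: int, item_bytes: int) -> int:
    # upper bound on bytes sent, assuming each (possibly retried) call ships one item
    return partner_invocations_ub(n_items) * item_bytes

for n in (0, 10, 100):
    print(n, partner_invocations_lb(n), partner_invocations_ub(n), network_traffic_ub(n, 512))
```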
Abstract:
As defined in the ATM 2000+ Strategy (Eurocontrol 2001), the mission of the Air Traffic Management (ATM) System is: "For all the phases of a flight, the ATM system should facilitate a safe, efficient, and expedite traffic flow, through the provision of adaptable ATM services that can be dimensioned in relation to the requirements of all the users and areas of the European air space. The ATM services should comply with the demand, be compatible, operate under uniform principles, respect the environment and satisfy the national security requirements." The objective of this paper is to present a methodology designed to evaluate the status of the ATM system in terms of the relationship between offered capacity and traffic demand, identifying areas of weakness and proposing solutions. The first part of the methodology deals with the characterization and evaluation of the current system, while the second part proposes an approach to analyse its possible development limit. As part of the work, general criteria are established to define the framework in which the analysis and diagnostic methodology is placed: the use of Air Traffic Control (ATC) sectors as the unit of analysis, the presence of network effects, a tactical focus, the relative character of the analysis, objectivity, and a high-level assessment that allows assumptions on the human and Communications, Navigation and Surveillance (CNS) elements, considered the typical resources of high-density air traffic. The methodology starts with the definition of indicators and metrics, such as the nominal criticality or the nominal efficiency of a sector; it continues with scenario characterization, where the necessary data are collected; network-effects analysis, to study the relations among the constitutive elements of the ATC system; diagnosis by means of the "System Status Diagram"; an analytical study of the development limit of the ATC system; and, finally, the formulation of conclusions and proposals for improvement. This methodology was employed by Aena (the Spanish airport manager and air navigation service provider) and INECO (a Spanish transport engineering company) in the analysis of the Spanish ATM system within the Spanish airspace capacity sustainability programme, although it could be applied elsewhere.
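As a loose illustration of the capacity-versus-demand diagnosis step, and not the Aena/INECO methodology itself, the toy indicator below flags ATC sectors whose peak demand approaches their declared capacity. The sector figures, the indicator definition and the 0.9 threshold are all assumptions.

```python
# Toy indicator, not the Aena/INECO methodology: flag ATC sectors whose peak demand
# approaches declared capacity. Sector figures and the 0.9 threshold are assumptions.
from dataclasses import dataclass

@dataclass
class Sector:
    name: str
    declared_capacity: int   # flights per hour the sector can accept
    peak_demand: int         # flights per hour at the peak of the scenario

def load_ratio(sector: Sector) -> float:
    return sector.peak_demand / sector.declared_capacity

sectors = [Sector("SECTOR_A", 46, 51), Sector("SECTOR_B", 40, 33)]
for s in sectors:
    ratio = load_ratio(s)
    status = "potential weakness area" if ratio > 0.9 else "ok"
    print(f"{s.name}: demand/capacity = {ratio:.2f} -> {status}")
```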
Abstract:
Being able to accurately classify the application that generated each of the flows making up the Internet traffic in a network gives companies and organizations a useful tool for managing their network resources, as well as the possibility of establishing policies to block or prioritize specific traffic. The proliferation of new applications and new techniques in recent years has made it increasingly difficult to identify applications from the well-known port values assigned by the IANA (Internet Assigned Numbers Authority). P2P (peer-to-peer) networks, the use of unknown or random ports, and the masquerading of many applications' traffic as HTTP and HTTPS in order to traverse firewalls and NATs (Network Address Translation), among other factors, create the need for new traffic detection methods. The aim of this project is to develop a set of practices that allow traffic to be classified with techniques that go beyond the observation of ports and other well-known values. Several methodologies already address this need: Deep Packet Inspection (DPI) searches for signatures, i.e. patterns built from the content of the packets, including the payload, that characterize each application; Machine Learning approaches work with statistical parameters of the flows and determine, through statistical analysis, which application the flows are likely to belong to; and, finally, more heuristic techniques rely on the researcher's own intuition or knowledge about network traffic. Specifically, this work proposes the use of some of the above techniques in combination with data mining methods such as Principal Component Analysis (PCA) and clustering of statistics extracted from the flows contained in network traffic captures. This requires configuring several parameters through an iterative trial-and-error process in order to reach a reliable traffic classification. The ideal result would be one in which each application present in the traffic is identified in a separate cluster, or in clusters that group applications of a similar nature. To this end, traffic captures are created in a controlled environment in which each capture is identified with its corresponding application, and the flows are then extracted from those captures. Next, selected parameters of the packets belonging to those flows are obtained, such as the arrival date and time or the length in octets of the IP packet. These parameters are loaded into a MySQL database and used to compute statistics that describe each flow as accurately as possible and support, in a subsequent step, the classification of the flows by means of data mining. Specifically, PCA and clustering are applied using the RapidMiner software. Finally, the results are compared with the real classification of the flows stored in the database and summarized in a confusion matrix, which allows the accuracy of the developed classification process to be assessed.
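A compact sketch of the pipeline described above, using scikit-learn in place of RapidMiner: it assumes a CSV of per-flow statistics exported from the MySQL database, with a ground-truth application label per flow (the file and column names are assumptions).

```python
# Sketch of the pipeline using scikit-learn instead of RapidMiner. It expects a CSV of
# per-flow statistics exported from the MySQL database with a ground-truth "app" column;
# the file name and columns are assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix

flows = pd.read_csv("flow_stats.csv")     # one row of statistics per flow
labels = flows.pop("app")                 # real application of each flow (controlled captures)

X = StandardScaler().fit_transform(flows.values)    # normalise the flow statistics
X_pca = PCA(n_components=3).fit_transform(X)        # keep the main principal components
clusters = KMeans(n_clusters=labels.nunique(), n_init=10,
                  random_state=0).fit_predict(X_pca)  # ideally one cluster per application

# Rows: real applications; columns: clusters. A clean diagonal-like structure means
# each application was isolated in its own cluster.
print(confusion_matrix(labels.factorize()[0], clusters))
```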
Abstract:
The problem of fairly distributing the capacity of a network among a set of sessions has been widely studied. In this problem, each session connects a source and a destination via a single path, and its goal is to maximize its assigned transmission rate (i.e., its throughput). Since the links of the network have limited bandwidths, some criterion has to be defined to fairly distribute their capacity among the sessions. A popular criterion is max-min fairness, which, in short, guarantees that each session i gets a rate λi such that no session s can increase λs without causing another session s' to end up with a rate λs' < λs. Many max-min fair algorithms have been proposed, both centralized and distributed. However, to our knowledge, all proposed distributed algorithms require control data to be continuously transmitted to recompute the max-min fair rates when needed (because none of them has mechanisms to detect convergence to the max-min fair rates). In this paper we propose B-Neck, a distributed max-min fair algorithm that is also quiescent. This means that, in the absence of changes (i.e., session arrivals or departures), once the max-min rates have been computed, B-Neck stops generating network traffic. Quiescence is a key design concept of B-Neck, because B-Neck routers are capable of detecting and notifying changes in the convergence conditions of the max-min fair rates. As far as we know, B-Neck is the first distributed max-min fair algorithm that does not require a continuous injection of control traffic to compute the rates. The correctness of B-Neck is formally proved, and extensive simulations are conducted, showing that B-Neck converges relatively fast and behaves well in the presence of sessions arriving and departing.
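For reference, the max-min fair rates that a quiescent algorithm like B-Neck converges to can be computed centrally by progressive filling; the sketch below is that classical centralised procedure, not B-Neck itself. Sessions are given as the list of links on their single path.

```python
# Centralised progressive-filling sketch of the max-min fairness criterion defined above.
# B-Neck itself is distributed and quiescent; this only illustrates the rates it converges to.
# A session is described by the list of links on its single path.
def max_min_fair(sessions, capacity):
    rate = {s: 0.0 for s in sessions}
    cap = dict(capacity)                 # remaining capacity per link
    active = set(sessions)               # sessions not yet bottlenecked
    while active:
        # raise all active sessions at the same pace until some link saturates
        inc = min(cap[l] / sum(1 for s in active if l in sessions[s])
                  for l in cap if any(l in sessions[s] for s in active))
        for s in active:
            rate[s] += inc
        for l in cap:
            cap[l] -= inc * sum(1 for s in active if l in sessions[s])
        saturated = {l for l in cap if cap[l] <= 1e-9}
        active = {s for s in active if not (set(sessions[s]) & saturated)}
    return rate

sessions = {"s1": ["A", "B"], "s2": ["B"], "s3": ["A"]}
print(max_min_fair(sessions, {"A": 10.0, "B": 6.0}))   # {'s1': 3.0, 's2': 3.0, 's3': 7.0}
```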
Abstract:
The demand for video content has rapidly increased in recent years as a result of the wide deployment of IPTV and the variety of services offered by network operators. One of the services that has become especially attractive to customers is real-time Video on Demand (VoD), because it offers immediate streaming of a large variety of video content. The price that operators have to pay for this convenience is increased traffic in their networks, which are becoming more congested due to the higher demand for VoD content and the increased quality of the videos themselves. Therefore, one of the main objectives of this thesis is to find solutions that reduce the traffic in the core of the network while keeping the quality of service at a satisfactory level and reducing the traffic cost. The thesis proposes a hierarchically structured system of streaming servers that runs an algorithm for the optimal placement of content according to user behaviour and the state of the network. Since any algorithm for optimal content distribution reaches a limit beyond which no further improvement can be made, including the service customers themselves (the peers) in the streaming process can further reduce the network traffic. This is achieved by taking advantage of the control that the operator has, in privately managed networks, over the Set-Top Boxes placed at the customers' premises. The operator reserves certain storage and streaming capacity on the peers to store video content and to stream it to other clients in order to relieve the streaming servers. Because the peers cannot completely substitute the streaming servers, the thesis proposes a peer-assisted streaming system. Some of the important questions addressed in the thesis are how the system parameters and the various distributions of the video content on the peers impact the overall system performance. To answer these questions, the thesis proposes a precise and flexible stochastic model that takes into consideration parameters such as the uplink and storage capacities of the peers, the number of peers, the size of the video content library, the size of the content and the content distribution scheme, in order to estimate the benefits of peer-assisted streaming. The work also proposes an extended version of the mathematical model that includes the failure probability of the peers and their recovery time in the set of parameters. These models are used as tools for conducting thorough analyses of the peer-assisted VoD streaming system over the wide range of parameters defined in the models.
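A toy Monte Carlo sketch, deliberately much simpler than the thesis's stochastic model, showing how peer storage, peer uplink capacity and a popularity-biased content placement affect the share of concurrent VoD sessions that can be offloaded from the streaming servers. All parameter values are assumptions.

```python
# Toy Monte Carlo sketch, not the thesis's stochastic model: estimate the fraction of
# concurrent VoD sessions that peers can serve, given their storage slots, uplink limit
# and a popularity-biased content placement. All parameter values are assumptions.
import random

def simulate(n_peers=1000, library=500, slots_per_peer=4, uplink_streams=1,
             concurrent_sessions=1500, zipf_s=0.8, seed=1):
    rng = random.Random(seed)
    # popularity-biased placement: more popular titles are cached on more peers
    weights = [1.0 / (rank + 1) ** zipf_s for rank in range(library)]
    holders = {c: set() for c in range(library)}
    for p in range(n_peers):
        for c in rng.choices(range(library), weights=weights, k=slots_per_peer):
            holders[c].add(p)
    busy = {p: 0 for p in range(n_peers)}          # concurrent uploads per peer
    served_by_peers = 0
    for _ in range(concurrent_sessions):           # a snapshot of simultaneous VoD sessions
        content = rng.choices(range(library), weights=weights, k=1)[0]
        free = [p for p in holders[content] if busy[p] < uplink_streams]
        if free:
            busy[rng.choice(free)] += 1
            served_by_peers += 1
        # otherwise the session falls back to the streaming servers
    return served_by_peers / concurrent_sessions

print(f"fraction of concurrent sessions offloaded to peers: {simulate():.2%}")
```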
Abstract:
This paper describes ExperNet, an intelligent multi-agent system developed under an EU-funded project to assist in the management of a large-scale data network. ExperNet assists network operators at various nodes of a WAN to detect and diagnose hardware failures and network traffic problems, and suggests the most feasible solution through a web-based interface. ExperNet is composed of intelligent agents, capable of both local problem solving and social interaction among themselves for coordinating problem diagnosis and repair. The current network state is captured and maintained by conventional network management and monitoring software components, which have been smoothly integrated into the system through sophisticated information exchange interfaces. For the implementation of the agents, a distributed Prolog system enhanced with networking facilities was developed. The agents' knowledge base is built on an extensible and reactive knowledge base system capable of handling multiple types of knowledge representation. ExperNet has been developed, installed and tested successfully in an experimental network zone in Ukraine.
Abstract:
Since the beginning of the Internet, Internet Service Providers (ISPs) have seen the need to give users' traffic different treatments defined by agreements between the ISP and its customers. This procedure, known as Quality of Service Management, has not changed much in recent years (DiffServ and Deep Packet Inspection have been the most commonly chosen mechanisms). However, the continuous growth in Internet users and services, together with the application of recent Machine Learning techniques, opens up the possibility of going one step further in the smart management of network traffic. In this paper, we first survey current tools and techniques for QoS Management. We then introduce clustering and classifying Machine Learning techniques for traffic characterization and the concept of Quality of Experience. Finally, with all these components, we present a brand-new framework that manages Quality of Service in a smart way in a telecom Big Data scenario, for both mobile and fixed communications.
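A hedged sketch of the "classify, then manage" idea behind such a framework, not the paper's own system: a supervised model labels flows with a traffic class, and each class is mapped to a DiffServ DSCP value. The CSV, its columns, the class names and the DSCP mapping are assumptions.

```python
# Hedged sketch of "classify, then manage": a supervised model labels flows with a traffic
# class and each class is mapped to a DiffServ DSCP value. The CSV, its columns, the class
# names and the DSCP mapping are assumptions, not the paper's framework.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

flows = pd.read_csv("labelled_flows.csv")   # per-flow features plus a "traffic_class" label
y = flows.pop("traffic_class")              # e.g. voip / video / web / bulk
X_train, X_test, y_train, y_test = train_test_split(flows, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))

DSCP = {"voip": 46, "video": 34, "web": 0, "bulk": 8}   # EF, AF41, best effort, CS1
for traffic_class in clf.predict(X_test[:5]):
    print(traffic_class, "-> DSCP", DSCP.get(traffic_class, 0))
```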
Abstract:
Improving the knowledge of demand evolution over time is a key aspect in the evaluation of transport policies and in forecasting future investment needs. It becomes even more critical for toll roads, which in recent decades have become an increasingly common means of funding road projects. However, the literature on demand elasticity estimates for toll roads is sparse and leaves some important aspects to be analyzed in greater detail. In particular, previous research on traffic analysis does not often disaggregate heavy vehicle demand from the total volume, so the specific behavioral patterns of this traffic segment are not taken into account. Furthermore, GDP is the socioeconomic variable most commonly chosen to explain road freight traffic growth over time. This paper seeks to determine the variables that best explain the evolution of heavy vehicle demand on toll roads over time. To that end, we present a dynamic panel data methodology aimed at identifying the key socioeconomic variables that explain the behavior of road freight traffic over the years. The results show that, despite the usual practice, GDP may not constitute a suitable explanatory variable for heavy vehicle demand. Rather, considering only the GDP of those sectors with a high impact on transport demand, such as construction or industry, leads to more consistent results. The methodology is applied to Spanish toll roads for the 1990-2011 period. This is an interesting case in the international context, as road freight demand has experienced an even greater reduction in Spain than elsewhere since the beginning of the economic crisis in 2008.
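A simplified sketch of the kind of specification discussed above, not the paper's estimator: the log of heavy-vehicle AADT is explained by its own lag and the GDP of transport-intensive sectors, with toll-road dummies. A proper dynamic panel estimator (e.g. Arellano-Bond GMM) would be used in practice; the data file, its columns and this OLS stand-in are assumptions.

```python
# Simplified dynamic-panel-style sketch, not the paper's estimator: log heavy-vehicle AADT
# explained by its own lag and sectoral GDP (construction, industry) with road dummies.
# The CSV, its columns and the OLS-with-dummies stand-in are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("toll_roads_panel.csv")   # columns: road, year, hv_aadt, gdp_constr, gdp_industry
panel = panel.sort_values(["road", "year"])
panel["l_hv"] = np.log(panel["hv_aadt"])
panel["l_hv_lag"] = panel.groupby("road")["l_hv"].shift(1)
panel["l_constr"] = np.log(panel["gdp_constr"])
panel["l_industry"] = np.log(panel["gdp_industry"])

model = smf.ols("l_hv ~ l_hv_lag + l_constr + l_industry + C(road)",
                data=panel.dropna()).fit(cov_type="HC1")
print(model.params[["l_hv_lag", "l_constr", "l_industry"]])   # short-run elasticities
```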
Abstract:
Public Private Partnerships (PPPs) are mostly implemented to circumvent budgetary constraints and to encourage efficiency and quality in the provision of public infrastructure in order to reach social welfare. One way of reaching the latter objective is the introduction of performance-based standards tied to bonuses and penalties that reward or punish the performance of the contractor. This paper focuses on the implementation of safety-based incentives in PPPs, such that the better the safety outcome, the larger the economic reward to the contractor. The main aim of this paper is to identify whether the incentives to improve road safety in PPPs are ultimately effective in improving safety ratios in Spain. To that end, Poisson and negative binomial regression models have been applied using information on motorways of the Spanish network in 2006. The findings indicate that, even though road safety is highly influenced by variables that are largely outside the contractor's control, such as the Average Annual Daily Traffic and the percentage of heavy vehicles, the implementation of safety incentives in PPPs has a positive influence on the reduction of fatalities, injuries and accidents.
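A minimal sketch of the count-data models named above, not the paper's exact specification: accident counts per motorway section are modelled with Poisson and negative binomial GLMs, with section length as exposure and a dummy for the presence of a safety incentive. The data file and its columns are assumptions.

```python
# Minimal sketch of the Poisson and negative binomial count models, not the paper's
# exact specification. The CSV and its columns are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.read_csv("motorways_2006.csv")   # columns: accidents, aadt, pct_heavy, incentive, length_km

formula = "accidents ~ np.log(aadt) + pct_heavy + incentive"
poisson = smf.glm(formula, data=data, family=sm.families.Poisson(),
                  offset=np.log(data["length_km"])).fit()
negbin = smf.glm(formula, data=data, family=sm.families.NegativeBinomial(),
                 offset=np.log(data["length_km"])).fit()

# A negative coefficient on "incentive" would point to fewer accidents where incentives apply
print(poisson.params["incentive"], negbin.params["incentive"])
```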
Abstract:
IP multicast allows the efficient support of group communication services by reducing the number of IP flows needed for such communication. The increasingly widespread use of multicast has also triggered the need to support IP multicast in mobile environments. Proxy Mobile IPv6 (PMIPv6) is a network-based mobility management solution, in which the functionality to support terminal movement resides in the network. Recently, a baseline solution has been adopted for multicast support in PMIPv6. This base solution is inefficient in terms of multicast routing, because it may require multiple copies of a single stream to be received by the same access gateway. Nevertheless, there is an alternative solution for supporting multicast in PMIPv6 that avoids this issue. This paper evaluates by simulation the scalability of both solutions under realistic conditions, and provides an analysis of the sensitivity of the two proposals to a number of parameters.
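A toy calculation, not the paper's simulation, of the inefficiency mentioned above: under the base solution a mobile access gateway (MAG) receives one copy of a group's stream per LMA tunnel with subscribed mobile nodes, whereas a direct-routing alternative needs a single copy per group. The subscription data are assumptions.

```python
# Toy count, not the paper's simulation: copies of multicast streams arriving at one MAG.
# Base solution: one copy per (group, LMA tunnel) pair with subscribed mobile nodes.
# Direct-routing alternative: one copy per group. Subscription data are assumptions.
from collections import defaultdict

# (mobile node, serving LMA, multicast group) subscriptions seen at one access gateway
subscriptions = [("MN1", "LMA1", "G1"), ("MN2", "LMA2", "G1"),
                 ("MN3", "LMA1", "G2"), ("MN4", "LMA1", "G1")]

tunnels_per_group = defaultdict(set)
for mn, lma, group in subscriptions:
    tunnels_per_group[group].add(lma)

base_copies = sum(len(lmas) for lmas in tunnels_per_group.values())
direct_copies = len(tunnels_per_group)
print(f"stream copies at the MAG: base solution = {base_copies}, direct routing = {direct_copies}")
```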
Abstract:
This paper reports the studies carried out to develop and calibrate the optimal models for the objectives of this work. In particular, a quarter-bogie model for the vehicle, rail-wheel contact handled with the Lagrange multiplier method, and a 2D spatial discretization were selected as the optimal choices. Furthermore, a 3D model of the coupled vehicle-track system has also been developed to contrast the results obtained with the 2D model. The calculations were carried out in the time domain, and envelopes of the relevant results were obtained for several track profiles and speed ranges. Distributed elevation irregularities were generated based on power spectral density (PSD) distributions. The results obtained include the wheel-rail contact forces and the forces transmitted to the bogie by the primary suspension. The latter loads are relevant for the purpose of evaluating the performance of the infrastructure.
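A sketch of the irregularity-generation step mentioned above, not the authors' code: a distributed vertical elevation profile is synthesised from a PSD by superposing cosines with random phases. The German/ORE low-disturbance spectrum and its constants are used here as assumed inputs.

```python
# Sketch of PSD-based track irregularity generation, not the authors' code: a vertical
# elevation profile synthesised by superposing cosines with random phases. The German/ORE
# low-disturbance PSD and its constants are assumed inputs.
import numpy as np

def vertical_psd(omega, Av=4.032e-7, omega_r=0.0206, omega_c=0.8246):
    # one-sided PSD of vertical profile irregularities [m^2/(rad/m)], omega in rad/m
    return Av * omega_c**2 / ((omega**2 + omega_r**2) * (omega**2 + omega_c**2))

def track_profile(length=1000.0, dx=0.5, n_waves=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, length, dx)
    omega = np.linspace(2 * np.pi / length, 2 * np.pi / (2 * dx), n_waves)  # resolved band
    d_omega = omega[1] - omega[0]
    amp = np.sqrt(2.0 * vertical_psd(omega) * d_omega)
    phase = rng.uniform(0.0, 2.0 * np.pi, n_waves)
    # z(x) = sum_k amp_k * cos(omega_k * x + phi_k)
    z = (amp[None, :] * np.cos(x[:, None] * omega[None, :] + phase[None, :])).sum(axis=1)
    return x, z

x, z = track_profile()
print(f"std of generated elevation irregularity: {z.std() * 1e3:.2f} mm")
```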
Abstract:
Public Private Partnerships (PPPs) are mostly implemented for three reasons: to circumvent budgetary constraints, to encourage efficiency, and to improve quality in the provision of public infrastructure. One way of reaching the latter objective is the introduction of performance-based standards tied to bonuses and penalties that reward or punish the performance of the contractor. These performance-based standards often refer to different aspects such as technical, environmental and safety issues. This paper focuses on the implementation of safety-based incentives in PPPs. The main aim of this paper is to analyze whether the incentives to improve road safety in PPPs are effective in improving safety ratios in Spain. To this end, negative binomial regression models have been applied using information from the Spanish high-capacity network in 2006. The findings indicate that, even though road safety is highly influenced by variables that are largely outside the contractor's control, such as the Average Annual Daily Traffic and the percentage of heavy vehicles on the highway, the implementation of safety incentives in PPPs has a positive influence on the reduction of fatalities, injuries and accidents.
Abstract:
Highway concession contracts often include clauses (for example, a minimum traffic guarantee) that allow for better management of the business risks. The value of these clauses may be significant and should be added to the total value of the concession. However, in these cases, traditional valuation techniques, such as the NPV (net present value) of the project, are insufficient. An alternative methodology for the valuation of highway concessions is one based on the real options approach. This methodology is generally built on the assumption that traffic volume evolves as a GBM (geometric Brownian motion), which is the hypothesis analyzed in this paper. First, a description of the methodology used for the analysis of the existence of unit roots (i.e., the hypothesis of non-stationarity) is provided. The Dickey-Fuller approach has been used, which is the most common test for this kind of analysis. This methodology is then applied to perform a statistical analysis of traffic series on Spanish toll highways. For this purpose, data on the AADT (annual average daily traffic) of a set of highways have been used, covering a period of around thirty years in most cases. The main outcome of the research is that the hypothesis that traffic volume follows a GBM process on Spanish toll highways cannot be rejected. This result is robust, and it can therefore be used as a starting point for applying real options theory to the assessment of toll highway concessions.
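A sketch of the unit-root check described above on a single series, not the paper's dataset: an augmented Dickey-Fuller test applied to the log of an AADT series, where failing to reject the unit root in log traffic is consistent with the GBM hypothesis. The data file and its columns are assumptions.

```python
# Sketch of the unit-root check: augmented Dickey-Fuller test on log AADT of one highway.
# Failing to reject the unit root in log traffic is consistent with the GBM hypothesis.
# The CSV and its columns are assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

aadt = pd.read_csv("toll_highway_aadt.csv", index_col="year")["aadt"]
stat, pvalue, *_ = adfuller(np.log(aadt), regression="ct")   # constant + trend, as GBM implies drift

print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
if pvalue > 0.05:
    print("unit root not rejected -> GBM hypothesis cannot be rejected for this series")
else:
    print("unit root rejected -> traffic does not behave like a GBM")
```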
Abstract:
Network analysis exercise using Network Analysis in ArcGIS 10.