934 results for IP addresses
Abstract:
High-rate flooding attacks (also known as Distributed Denial of Service, or DDoS, attacks) continue to constitute a pernicious threat within the Internet domain. In this work we demonstrate how using packet source IP addresses, coupled with a change-point analysis of the rate of arrival of new IP addresses, may be sufficient to detect the onset of a high-rate flooding attack. Importantly, minimizing the number of features to be examined directly addresses the issue of scalability of the detection process to higher network speeds. Using a proof-of-concept implementation, we have shown how pre-onset IP addresses can be efficiently represented using a bit vector and used to modify a "white list" filter in a firewall as part of the mitigation strategy.
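A minimal sketch of the two ideas just described, namely a bit-vector record of pre-onset source addresses and a change-point-style test on the arrival rate of previously unseen addresses, is given below. The window length, alarm ratio, and baseline update are illustrative assumptions, not the authors' implementation.

```python
import ipaddress

class NewSourceRateDetector:
    """Flags an abrupt jump in the arrival rate of never-before-seen source IPs."""

    def __init__(self, window=1000, ratio=5.0):
        # One bit per IPv4 address (2^32 bits = 512 MiB), standing in for the
        # paper's bit-vector representation of pre-onset addresses.
        self.seen = bytearray(2 ** 32 // 8)
        self.window = window      # packets per observation window (assumed)
        self.ratio = ratio        # alarm when rate exceeds ratio * baseline (assumed)
        self.baseline = None
        self.new = 0
        self.pkts = 0

    def _mark_new(self, ip):
        n = int(ipaddress.IPv4Address(ip))
        byte, bit = divmod(n, 8)
        if self.seen[byte] & (1 << bit):
            return False
        self.seen[byte] |= (1 << bit)
        return True

    def observe(self, src_ip):
        """Feed one packet's source address; returns True when an onset is flagged."""
        self.pkts += 1
        if self._mark_new(src_ip):
            self.new += 1
        if self.pkts < self.window:
            return False
        rate = self.new / self.window           # fraction of packets from new sources
        self.pkts = self.new = 0
        if self.baseline is None:               # first window trains the baseline
            self.baseline = max(rate, 1e-6)
            return False
        if rate / self.baseline > self.ratio:
            return True                         # candidate attack onset
        self.baseline = 0.9 * self.baseline + 0.1 * rate   # slow baseline adaptation
        return False
```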
Abstract:
Monitoring unused or dark IP addresses offers opportunities to extract useful information about both ongoing and new attack patterns. In recent years, different techniques have been used to analyze such traffic, including sequential analysis, where a change in traffic behavior, for example a change in mean, is used as an indication of malicious activity. Change points themselves say little about the detected change; further data processing is necessary to extract useful information and to identify the exact cause of the change, which is difficult given the size and nature of the observed traffic. In this paper, we address the problem of analyzing a large volume of such traffic by correlating change points identified in different traffic parameters. The significance of the proposed technique is two-fold: firstly, it automatically extracts information related to a change point by correlating change points detected across multiple traffic parameters; secondly, it validates a detected change point through the simultaneous presence of another change point in a different parameter. Using a real network trace collected from unused IP addresses, we demonstrate that the proposed technique enables us not only to validate a change point but also to extract useful information about its causes.
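The correlation step can be pictured with the short sketch below: change points detected independently in several traffic parameters are treated as validated when another parameter changes within a small time lag. The parameter names, the time lag, and the event format are illustrative assumptions rather than the paper's algorithm.

```python
def correlate_change_points(change_points, max_lag=2):
    """change_points: dict mapping parameter name -> sorted list of change times.
    Returns (time, parameters) pairs where at least two parameters changed
    within max_lag time units of each other."""
    events = sorted((t, p) for p, times in change_points.items() for t in times)
    validated = []
    for t, p in events:
        # other parameters that changed close in time to this change point
        near = {q for (u, q) in events if abs(u - t) <= max_lag and q != p}
        if near:
            validated.append((t, {p} | near))
    return validated

# Example: the change in packet rate at t=120 is validated by a change in the
# number of distinct source IPs at t=121; the isolated changes at t=45 and
# t=300 are not validated.
print(correlate_change_points({
    "packets_per_sec": [120, 300],
    "unique_src_ips": [121],
    "unique_dst_ports": [45],
}))
```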
Abstract:
Today's evolving networks are experiencing a large number of different attacks, ranging from system break-ins and infection by automated attack tools such as worms, viruses and trojan horses, to denial of service (DoS). One important aspect of such attacks is that they are often indiscriminate and target Internet addresses without regard to whether they are bona fide allocated or not. Due to the absence of any advertised host services, the traffic observed on unused IP addresses is by definition unsolicited and likely to be either opportunistic or malicious. The analysis of large repositories of such traffic can be used to extract useful information about both ongoing and new attack patterns and to unearth unusual attack behaviors. However, such analysis is difficult due to the size and nature of the traffic collected on unused address spaces. In this dissertation, we present a network traffic analysis technique which uses traffic collected from unused address spaces and relies on the statistical properties of the collected traffic in order to accurately and quickly detect new and ongoing network anomalies. Detection of network anomalies is based on the concept that an anomalous activity usually transforms the network parameters in such a way that their statistical properties no longer remain constant, resulting in abrupt changes. In this dissertation, we use sequential analysis techniques to identify changes in the behavior of network traffic targeting unused address spaces to unveil both ongoing and new attack patterns. Specifically, we have developed a dynamic sliding-window based non-parametric cumulative sum (CUSUM) change detection technique for the identification of changes in network traffic. Furthermore, we have introduced dynamic thresholds to detect changes in network traffic behavior and also to detect when a particular change has ended. Experimental results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach, using both synthetically generated datasets and real network traces collected from a dedicated block of unused IP addresses.
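The sketch below illustrates the general shape of such a detector: a non-parametric CUSUM statistic whose allowance and alarm threshold are derived from a sliding window of recent observations, so the threshold adapts to current traffic and the end of a change can also be signalled. The window length, drift factor, threshold multiplier, and the cap on the statistic are illustrative assumptions, not the dissertation's parameters.

```python
from collections import deque
import statistics

class SlidingCusum:
    def __init__(self, window=60, drift=0.5, h_mult=5.0):
        self.window = deque(maxlen=window)  # recent "normal" observations
        self.drift = drift                  # allowance, in units of window std-dev
        self.h_mult = h_mult                # threshold, in units of window std-dev
        self.s = 0.0                        # CUSUM statistic
        self.in_change = False

    def update(self, x):
        """Feed one observation (e.g. packets per interval); returns 'start', 'end', or None."""
        if len(self.window) < self.window.maxlen:
            self.window.append(x)           # still warming up the baseline window
            return None
        mu = statistics.fmean(self.window)
        sigma = statistics.pstdev(self.window) or 1.0
        self.s = max(0.0, self.s + (x - mu - self.drift * sigma))
        threshold = self.h_mult * sigma     # threshold adapts to recent variability
        self.s = min(self.s, 2 * threshold) # cap so the end of a change is seen promptly
        if not self.in_change and self.s > threshold:
            self.in_change = True
            return "start"
        if self.in_change and self.s == 0.0:
            self.in_change = False
            self.window.append(x)           # resume updating the baseline
            return "end"
        if not self.in_change:
            self.window.append(x)           # only normal traffic updates the window
        return None

# Example: a synthetic per-interval packet count with a burst; prints the burst's start and end.
det = SlidingCusum()
for t, x in enumerate([20] * 80 + [120] * 20 + [20] * 40):
    event = det.update(x)
    if event:
        print(t, event)
```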
Abstract:
A Flash Event (FE) represents a period of time when a web server experiences a dramatic increase in incoming traffic, either following a newsworthy event that has prompted users to locate and access it, or as a result of redirection from other popular web or social media sites. This usually leads to network congestion and Quality-of-Service (QoS) degradation. These events can be mistaken for Distributed Denial-of-Service (DDoS) attacks aimed at disrupting the server. Accurate detection of FEs and their distinction from DDoS attacks is important, since different actions need to be undertaken by network administrators in the two cases. However, the lack of public-domain FE datasets hinders research in this area. In this paper we present a detailed study of flash events and classify them into three broad categories. In addition, the paper describes FEs in terms of three key components: the volume of incoming traffic, the related source IP addresses, and the resources being accessed. We present an FE model with minimal parameters and use publicly available datasets to analyse and validate the proposed model. The model can be used to generate different types of FE traffic, closely approximating real-world scenarios, in order to facilitate research into distinguishing FEs from DDoS attacks.
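As a rough illustration of how those three components could drive a traffic generator, the sketch below ramps the request volume up and then down, draws requests from a fixed population of source addresses, and skews resource popularity. The ramp shape, address-pool size, and Pareto popularity are illustrative assumptions, not the parameters fitted in the paper.

```python
import random

def generate_flash_event(duration=300, peak_rate=500, ramp=60, n_sources=2000,
                         n_resources=50, seed=1):
    random.seed(seed)
    # population of (mostly legitimate) client addresses for the event
    pool = [f"10.{random.randrange(256)}.{random.randrange(256)}.{random.randrange(256)}"
            for _ in range(n_sources)]
    requests = []                             # (second, src_ip, resource)
    for t in range(duration):
        if t < ramp:                          # ramp-up phase towards the peak
            rate = int(peak_rate * t / ramp)
        else:                                 # gradual decay after the peak
            rate = int(peak_rate * max(0.0, 1 - (t - ramp) / (duration - ramp)))
        for _ in range(rate):
            src = random.choice(pool)
            # heavy-tailed resource popularity: a few pages get most requests
            resource = f"/page{int(random.paretovariate(1.2)) % n_resources}"
            requests.append((t, src, resource))
    return requests

reqs = generate_flash_event()
print(len(reqs), "requests;", len({r[1] for r in reqs}), "distinct source IPs")
```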
Abstract:
Computer worms represent a serious threat to modern communication infrastructures. These epidemics can cause great damage, such as financial losses or the interruption of critical services on which citizens' lives depend. Worms can spread at a speed that rules out immediate human intervention, so automatic detection and mitigation techniques need to be developed. However, if these techniques are not designed and intensively tested in realistic environments, they may cause even more harm, as they interfere heavily with high-volume communication flows. We present a simulation model which allows studies of worm spread and countermeasures in large-scale multi-AS topologies with millions of IP addresses.
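A toy sketch of the kind of propagation such a model reproduces, far simpler than the multi-AS simulator described above, is a random-scanning worm over a flat address space; the population size, scan rate, and step count below are illustrative assumptions.

```python
import random

def simulate_worm(address_space=2**16, vulnerable=2000, scan_rate=10,
                  steps=100, seed=7):
    random.seed(seed)
    vulnerable_hosts = set(random.sample(range(address_space), vulnerable))
    infected = {next(iter(vulnerable_hosts))}        # patient zero
    history = []
    for _ in range(steps):
        new = set()
        for _ in range(len(infected) * scan_rate):   # total random probes this step
            target = random.randrange(address_space)
            if target in vulnerable_hosts and target not in infected:
                new.add(target)                      # probe hit an uninfected vulnerable host
        infected |= new
        history.append(len(infected))
    return history

print(simulate_worm()[-1], "vulnerable hosts infected after 100 steps")
```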
Abstract:
Nowadays, distributing large volumes of data over corporate TCP/IP networks brings problems such as high network and server utilization, long completion times, and greater sensitivity to failures in the network infrastructure. These problems can be reduced by using peer-to-peer (P2P) networks. The aim of this dissertation is to analyze the performance of the standard BitTorrent protocol in corporate networks, and also to analyze it after a modification to the protocol's default behavior. In this modification, the tracker identifies the IP address of the peer requesting the swarm's list of IP addresses and returns only those belonging to the same local network, plus the original seeder, in order to reduce traffic on wide-area links. In typical corporate scenarios, simulations showed that the change reduces average bandwidth consumption and average download time compared with standard BitTorrent, besides making the distribution more robust to failures of wide-area links. The simulations also showed that in more complex environments, with many clients, where bandwidth restrictions on wide-area links cause congestion and packet drops, the performance of the standard BitTorrent protocol can be similar to that of a client-server distribution. In this last case, the proposed modification showed consistent improvements in distribution performance.
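A minimal sketch of the tracker-side change just described: when a peer requests the swarm's peer list, only peers on the requester's local network are returned, plus the original seeder. Treating "same local network" as a /24 prefix and the data layout are illustrative assumptions.

```python
import ipaddress

def filter_peer_list(requester_ip, swarm_peers, seeder_ip, prefix=24):
    """swarm_peers: iterable of peer IP strings currently in the swarm."""
    local_net = ipaddress.ip_network(f"{requester_ip}/{prefix}", strict=False)
    # keep only peers on the requester's local network (excluding itself)
    selected = [p for p in swarm_peers
                if ipaddress.ip_address(p) in local_net and p != requester_ip]
    if seeder_ip not in selected:
        selected.append(seeder_ip)            # always include the original seeder
    return selected

# Example: a peer in 10.1.2.0/24 receives only its LAN neighbours plus the seeder.
print(filter_peer_list("10.1.2.10",
                       ["10.1.2.11", "10.1.2.12", "192.168.5.3"],
                       seeder_ip="172.16.0.1"))
```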
Abstract:
In this paper, we propose and evaluate an implementation of a prototype scalable web server. The prototype consists of a load-balanced cluster of hosts that collectively accept and service TCP connections. The host IP addresses are advertised using the Round-Robin DNS technique, allowing any host to receive requests from any client. Once a client attempts to establish a TCP connection with one of the hosts, a decision is made as to whether or not the connection should be redirected to a different host, namely the host with the lowest number of established connections. We use the low-overhead Distributed Packet Rewriting (DPR) technique to redirect TCP connections. In our prototype, each host keeps information about connections in hash tables and linked lists. Every time a packet arrives, it is examined to see whether it has to be redirected or not. Load information is maintained using periodic broadcasts amongst the cluster hosts.
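The redirection decision can be sketched as below: each host keeps a hash table of connections and a view of cluster load refreshed by the periodic broadcasts, and a new connection is steered to the least-loaded member. The data layout and method names are illustrative assumptions; in the prototype the actual forwarding happens at the packet level via DPR.

```python
class ClusterMember:
    def __init__(self, my_addr, peers):
        self.my_addr = my_addr
        self.connections = {}                 # (client_ip, client_port) -> backend host
        self.load = {h: 0 for h in [my_addr] + list(peers)}  # view from periodic broadcasts

    def on_load_report(self, host, established):
        self.load[host] = established         # update this host's view of cluster load

    def on_new_connection(self, client_ip, client_port):
        """Return the host that should service this TCP connection."""
        target = min(self.load, key=self.load.get)
        self.connections[(client_ip, client_port)] = target
        self.load[target] += 1
        return target                         # if target != my_addr, rewrite and forward

    def on_packet(self, client_ip, client_port):
        """Later packets of a redirected connection follow the stored mapping."""
        return self.connections.get((client_ip, client_port), self.my_addr)

# Example: the host with the fewest established connections receives the new connection.
node = ClusterMember("10.0.0.1", ["10.0.0.2", "10.0.0.3"])
node.on_load_report("10.0.0.1", 7)
node.on_load_report("10.0.0.2", 12)
node.on_load_report("10.0.0.3", 3)
print(node.on_new_connection("198.51.100.7", 40213))   # -> 10.0.0.3
```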
Abstract:
The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior are considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources drops significantly after 2 from the perspective of interface, node, link and node degree discovery. We show that the utility of adding destinations is constant for interfaces, nodes, links and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
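One way to picture the marginal-utility measurement is sketched below: for each additional measurement source, count how many links its paths reveal that earlier sources had not already discovered. The path representation is an illustrative assumption and this is not the paper's exact methodology.

```python
def marginal_utility(paths_by_source):
    """paths_by_source: dict source -> list of paths, each path a list of node IDs.
    Returns, for each source added in order, the number of new links it discovered."""
    seen_links = set()
    gains = []
    for source, paths in paths_by_source.items():
        new = 0
        for path in paths:
            for a, b in zip(path, path[1:]):      # consecutive hops form a link
                if (a, b) not in seen_links:
                    seen_links.add((a, b))
                    new += 1
        gains.append((source, new))
    return gains

# Example: the second source adds little beyond what the first already revealed.
print(marginal_utility({
    "src1": [["A", "B", "C", "D"], ["A", "B", "E"]],
    "src2": [["A", "B", "C", "D"]],
}))
```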
Abstract:
The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However, the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue that the distributions of packet features (IP addresses and ports) observed in flow traces reveal both the presence and the structure of a wide range of anomalies. Using entropy as a summarization tool, we show that the analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that, using feature distributions, anomalies naturally fall into distinct and meaningful clusters. These clusters can be used to automatically classify anomalies and to uncover new anomaly types. We validate our claims on data from two backbone networks (Abilene and Geant) and conclude that feature distributions show promise as a key element of a fairly general network anomaly diagnosis framework.
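The entropy summarization itself is simple to state, as sketched below: compute the sample entropy of the empirical distribution of a packet feature per time bin, so anomalies that concentrate a feature (for example, one victim destination address) or disperse it (for example, scanned ports) show up as entropy dips or spikes. The example values are illustrative assumptions.

```python
import math
from collections import Counter

def feature_entropy(values):
    """Sample entropy (in bits) of the empirical distribution of a packet feature."""
    counts = Counter(values)
    total = sum(counts.values())
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

# Example: normal traffic spreads over many destination ports (higher entropy),
# while traffic aimed at a single victim concentrates destination IPs (entropy 0).
normal_dst_ports = [80, 443, 53, 22, 8080, 443, 80, 25]
flood_dst_ips = ["10.0.0.5"] * 8
print(round(feature_entropy(normal_dst_ports), 2),
      round(feature_entropy(flood_dst_ips), 2))
```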
Abstract:
As wireless local area networks rapidly become common as a technology for large networks, enforcing acceptable-use policies becomes necessary. This thesis describes how users can be made to comply with usage policies in public WLAN networks. The problems addressed concern methods for exposing rogue DHCP servers, and for exposing users who use their own IP addresses in situations where the IP address has not been granted by the official DHCP server. For each method, it is discussed how such users can be prevented from violating the usage policies. In addition, the use of centralized data collection for carrying out the described tasks is considered. The presented solutions are designed specifically for a test network, but the general ideas apply to any wireless network.
Abstract:
Internet use has grown considerably in recent years, and electronic commerce has seen a substantial rise. We can now easily make purchases over the Internet without leaving home and have access to countless sources of information. However, browsing the Internet also allows the creation of detailed databases describing the habits of each user, information that is then used by third parties to profile their target customers, which worries many observers. Information about an individual can be collected by intercepting transactional data, by online surveillance, and by recording IP addresses. To resolve these privacy problems and to ensure that merchants comply with the applicable legislation, as well as with the requirements put forward by the European Commission, several companies such as Zero-knowledge Systems Inc. and Anonymizer.com offer software for protecting privacy online (privacy-enhancing technologies, or PETs). These programs use encryption, a method that renders data unreadable to everyone except the intended recipient. The goal of this technology has been to create mathematically rigorous systems capable of preventing even the most determined attacker from discovering the author's identity, thereby reducing the risk of information theft or the accidental disclosure of confidential data. Although this privacy-protection software promotes greater compliance with the relevant European Directives, a closer analysis shows that these technologies could conflict with the laws governing encryption under Canadian, American and French law.
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Breakthrough advances in microprocessor technology and efficient power management have altered the course of processor development with the emergence of multi-core processor technology, bringing a higher level of processing. The use of many-core technology has boosted the computing power provided by clusters of workstations or SMPs, delivering large computational power at an affordable cost using solely commodity components. Different implementations of message-passing libraries and system software (including operating systems) are installed in such cluster and multi-cluster computing systems. To guarantee correct execution of a message-passing parallel application in a computing environment other than the one for which it was originally developed, a review of the application code is needed. In this paper, a hybrid communication interfacing strategy is proposed to execute a parallel application on a group of computing nodes belonging to different clusters or multi-clusters (computing systems that may be running different operating systems and MPI implementations), interconnected with public or private IP addresses, and responding interchangeably to user execution requests. Experimental results demonstrate the feasibility of this proposed strategy and its effectiveness, through the execution of benchmark parallel applications.
Abstract:
Telecommunications networks have always been expanding and, thanks to this, new services have appeared. The old mechanisms for carrying packets have become obsolete due to the requirements of the new services, which operate in real time. Real-time traffic requires strict service guarantees: when this traffic is sent through the network, enough resources must be provided to avoid delays and information losses. When browsing the Internet and requesting web pages, data must be sent from a server to the user; if any packet is dropped during the transmission, it is simply sent again. For the end user, it hardly matters whether the web page takes one or two seconds longer to load. But if the user is holding a conversation with a VoIP program such as Skype, one or two seconds of delay in the conversation may be catastrophic, and neither party can understand the other. In order to support these new services, the networks have to evolve, and MPLS and QoS were developed for this purpose. MPLS is a packet-carrying mechanism used in high-performance telecommunication networks which directs and carries data along pre-established paths. Packets are forwarded on the basis of labels, which makes this process faster than routing packets by their IP addresses. MPLS also supports Traffic Engineering (TE), the process of selecting the best paths for data traffic in order to balance the traffic load between the different links. In a network with multiple paths, routing algorithms calculate the shortest one, and most of the time all traffic is directed through it, causing overload and packet drops, while the other paths the network offers carry no traffic at all. This, however, is not enough to give real-time traffic the guarantees it needs. These mechanisms improve the network, but they do not change how the traffic is treated; that is why Quality of Service (QoS) was developed. Quality of service is the ability to give different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow. Traffic is divided into different classes, and each class is treated differently according to its Service Level Agreement (SLA). Traffic with the highest priority takes preference over lower classes, but this does not mean it monopolizes all the resources. To achieve this, a set of policies is defined to control and alter how the traffic flows. The possibilities are endless and depend on how the network is to be structured. Using these mechanisms, it is possible to provide the necessary guarantees to real-time traffic, distributing it among classes inside the network and offering the best possible service to both real-time and non-real-time data.
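As a small illustration of the class-based treatment just described, the sketch below serves traffic classes by weighted round-robin, so the highest-priority class gets most of the transmission slots without monopolizing the link. The class names and weights are illustrative assumptions standing in for SLA values, not any particular vendor's QoS configuration.

```python
from collections import deque

class WeightedScheduler:
    def __init__(self, weights):
        self.weights = weights                      # class -> packets served per round
        self.queues = {c: deque() for c in weights}

    def enqueue(self, traffic_class, packet):
        self.queues[traffic_class].append(packet)

    def next_round(self):
        """Return the packets transmitted in one scheduling round."""
        sent = []
        # serve classes from highest to lowest weight, up to each class's quota
        for c, quota in sorted(self.weights.items(), key=lambda kv: -kv[1]):
            for _ in range(quota):
                if self.queues[c]:
                    sent.append(self.queues[c].popleft())
        return sent

sched = WeightedScheduler({"voip": 4, "web": 2, "bulk": 1})
for i in range(5):
    sched.enqueue("voip", f"voip-{i}")
    sched.enqueue("bulk", f"bulk-{i}")
print(sched.next_round())   # VoIP gets most slots, but bulk traffic is never starved
```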
Abstract:
Our research described in this paper identifies a three-part premise relating to the spyware paradigm. Firstly, the data suggest that spyware is proliferating at an exponential rate. Secondly, ongoing research confirms that spyware produces many security risks, including privacy/confidentiality breaches via illicit data collection and reporting. Thirdly, anti-spyware controls are improving but are still considered problematic for several reasons. Our research concludes that control measures to counter this very significant challenge merit compliance auditing, and that this auditing may effectively target the vital message passing performed by all illicit data-collection spyware. Our research then evolves into an experiment involving the design and implementation of a software audit tool to conduct the desired compliance auditing. The audit tool is positioned at the protected network's gateway and uses 'phone-home' IP addresses as spyware signatures to detect the presence of the offending software. It is also able to differentiate legitimate message-passing software from that produced by spyware, and to 'learn' both new spyware signatures and new legitimate message-passing profiles. The testing stage of the software has proven successful, albeit with very limited levels of network message-passing variety and frequency.
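A minimal sketch of the gateway-side check described above is given below: outbound flows whose destinations match known 'phone-home' addresses are flagged, destinations on a learned legitimate profile pass, and anything else becomes a candidate for learning a new profile. The signature and profile sets are illustrative assumptions, not the tool's data.

```python
def audit_outbound(flows, spyware_ips, legitimate_ips):
    """flows: iterable of (src_ip, dst_ip); returns (flagged, unknown) lists."""
    flagged, unknown = [], []
    for src, dst in flows:
        if dst in spyware_ips:
            flagged.append((src, dst))        # matches a phone-home signature
        elif dst not in legitimate_ips:
            unknown.append((src, dst))        # candidate for learning a new profile
    return flagged, unknown

# Example: one flow matches a known phone-home address; the other is simply unknown.
print(audit_outbound(
    [("192.168.0.4", "203.0.113.9"), ("192.168.0.4", "198.51.100.20")],
    spyware_ips={"203.0.113.9"},
    legitimate_ips={"93.184.216.34"}))
```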