974 results for Synthetic Traffic Generation
Abstract:
An intrinsic challenge associated with evaluating proposed techniques for detecting Distributed Denial-of-Service (DDoS) attacks and distinguishing them from Flash Events (FEs) is the extreme scarcity of publicly available real-world traffic traces. Those available are either heavily anonymised or too old to accurately reflect current trends in DDoS attacks and FEs. This paper proposes a traffic generation and testbed framework for synthetically generating different types of realistic DDoS attacks, FEs and other benign traffic traces, and for monitoring their effects on the target. Using only modest hardware resources, the proposed framework, built around a customised software traffic generator, ‘Botloader’, is capable of generating a configurable mix of two-way traffic for emulating large-scale DDoS attacks, FEs or benign traffic traces that are experimentally reproducible. Botloader uses IP-aliasing, a well-known technique available on most computing platforms, to create thousands of interactive UDP/TCP endpoints on a single computer, each bound to a unique IP address, to emulate large numbers of simultaneous attackers or benign clients.
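A minimal Python sketch of the IP-aliasing idea, assuming the alias addresses (the 10.0.1.x range here is hypothetical) have already been configured on the interface; the names and target are illustrative and not Botloader's actual API:

    import socket

    # One UDP socket per alias address already configured on the interface
    # (e.g. with `ip addr add 10.0.1.X/16 dev eth0`). Each socket presents
    # a distinct source IP to the target, emulating an independent client.
    ALIASES = [f"10.0.1.{host}" for host in range(1, 201)]  # hypothetical range
    TARGET = ("192.0.2.10", 9000)                           # hypothetical target

    sockets = []
    for ip in ALIASES:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind((ip, 0))  # OS assigns an ephemeral port per endpoint
        sockets.append(s)

    for s in sockets:
        s.sendto(b"probe", TARGET)  # each datagram carries its own source IP

The same binding trick works for TCP endpoints; scaling to thousands of sockets on one machine mainly requires raising the process file-descriptor limit.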
Abstract:
This work-in-progress paper presents an ensemble-based model for detecting and mitigating Distributed Denial-of-Service (DDoS) attacks, and its partial implementation. The model utilises network traffic analysis and MIB (Management Information Base) server load analysis features to detect a wide range of network- and application-layer DDoS attacks and to distinguish them from Flash Events. The proposed model will be evaluated against realistic synthetic network traffic generated using a software-based traffic generator that we have developed as part of this research. In this paper, we summarise our previous work, highlight the work currently being undertaken along with preliminary results obtained, and outline future directions.
Abstract:
This thesis investigates and develops techniques for accurately detecting Internet-based Distributed Denial-of-Service (DDoS) attacks, in which an adversary harnesses the power of thousands of compromised machines to disrupt the normal operations of a Web-service provider, resulting in significant down-time and financial losses. It also develops methods to differentiate these attacks from similar-looking benign surges in web traffic known as Flash Events (FEs). Finally, it addresses an intrinsic challenge in DDoS research, namely the extreme scarcity of public-domain datasets (due to legal and privacy issues), by developing techniques to realistically emulate DDoS attack and FE traffic.
Abstract:
A common assumption made in traffic matrix (TM) modeling and estimation is independence of a packet's network ingress and egress. We argue that in real IP networks, this assumption should not and does not hold. The fact that most traffic consists of two-way exchanges of packets means that traffic streams flowing in opposite directions at any point in the network are not independent. In this paper we propose a model for traffic matrices based on independence of connections rather than packets. We argue that the independent connection (IC) model is more intuitive, and has a more direct connection to underlying network phenomena than the gravity model. To validate the IC model, we show that it fits real data better than the gravity model and that it works well as a prior in the TM estimation problem. We study the model's parameters empirically and identify useful stability properties. This justifies the use of the simpler versions of the model for TM applications. To illustrate the utility of the model we focus on two such applications: synthetic TM generation and TM estimation. To the best of our knowledge this is the first traffic matrix model that incorporates properties of bidirectional traffic.
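To make the contrast concrete: the gravity model predicts entry $X_{ij}$ of the traffic matrix from marginal volumes alone, while an IC-style model couples the two directions of each connection. The notation below is an illustrative reconstruction, not necessarily the paper's own:

\[
\hat{X}^{\mathrm{grav}}_{ij} \;=\; \frac{X_{i\cdot}\, X_{\cdot j}}{X_{\cdot\cdot}},
\qquad
\hat{X}^{\mathrm{IC}}_{ij} \;=\; K_f\, f_i\, r_j \;+\; K_r\, f_j\, r_i,
\]

where $f_i$ is the probability that a connection is initiated at node $i$, $r_j$ the probability that it is answered at node $j$, and $K_f$, $K_r$ are the total forward and reverse traffic volumes. The second term, carrying the reverse traffic of connections initiated at $j$, is what ties the two directions of an exchange together; setting $K_r = 0$ recovers a gravity-like one-way model.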
Abstract:
This research discusses some of the issues encountered while developing a set of WGEN parameters for Chile and offers advice for others interested in developing WGEN parameters for arid climates. The WGEN program is a commonly used and valuable research tool; however, it has specific limitations in arid climates that need careful consideration. These limitations are analysed in the context of generating a set of WGEN parameters for Chile. Fourteen to 26 years of precipitation data are used to calculate precipitation parameters for 18 locations in Chile, and 3–8 years of temperature and solar radiation data are analysed to generate parameters for seven of these locations. Results indicate that weather generation parameters in arid regions are sensitive to erroneous or missing precipitation data. The research shows that the WGEN-estimated gamma distribution shape parameter (α) for daily precipitation in arid zones tends to cluster around the discrete values of 0 or 1, masking the high sensitivity of these parameters to additional data. Rather than focusing on the length in years when assessing the adequacy of a data record for estimation of precipitation parameters, researchers should focus on the number of wet days in dry months in the data set. Analysis of the WGEN routines for the estimation of temperature and solar radiation parameters indicates that errors can occur when individual ‘months’ have fewer than two wet days in the data set. Recommendations are provided to improve methods for estimating WGEN parameters in arid climates.
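A small illustration, in Python with SciPy and synthetic data rather than Chilean station records, of why few wet days make the fitted gamma shape parameter unstable; note that WGEN's own estimator differs from SciPy's maximum-likelihood fit used here:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def fitted_shape(n_wet_days, true_shape=0.7, true_scale=5.0):
        # draw daily precipitation amounts (mm) for the wet days of a 'month'
        amounts = rng.gamma(true_shape, true_scale, size=n_wet_days)
        alpha, _, _ = stats.gamma.fit(amounts, floc=0)  # location fixed at 0
        return alpha

    for n in (2, 5, 20, 200):
        print(f"{n:3d} wet days -> fitted alpha = {fitted_shape(n):.2f}")

With only a handful of wet days the estimate scatters widely around the true value of 0.7, which mirrors the paper's advice to judge a record by its count of wet days in dry months rather than by its length in years.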
Abstract:
Popular wireless network standards, such as IEEE 802.11/15/16, are increasingly adopted in real-time control systems. However, they are not designed for real-time applications. Therefore, the performance of such wireless networks needs to be carefully evaluated before the systems are implemented and deployed. While efforts have been made to model general wireless networks with completely random traffic generation, there is a lack of theoretical investigations into the modelling of wireless networks with periodic real-time traffic. Considering the widely used IEEE 802.11 standard, with the focus on its distributed coordination function (DCF), for soft-real-time control applications, this paper develops an analytical Markov model to quantitatively evaluate the network quality-of-service (QoS) performance in periodic real-time traffic environments. Performance indices to be evaluated include throughput capacity, transmission delay and packet loss ratio, which are crucial for real-time QoS guarantee in real-time control applications. They are derived under the critical real-time traffic condition, which is formally defined in this paper to characterize the marginal satisfaction of real-time performance constraints.
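For context, the kind of fixed-point Markov analysis such DCF models build on is Bianchi's saturation model; a minimal Python sketch follows (illustrative only: the paper's model targets periodic, non-saturated real-time traffic, not saturation):

    # Bianchi-style fixed point for 802.11 DCF under saturation.
    W, m, n = 32, 5, 10  # min contention window, backoff stages, stations

    tau = 0.05
    for _ in range(500):
        p = 1.0 - (1.0 - tau) ** (n - 1)  # conditional collision probability
        tau_new = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        )
        tau = 0.5 * tau + 0.5 * tau_new  # damped iteration for convergence

    print(f"per-slot transmission prob. tau = {tau:.4f}, collision prob. p = {p:.4f}")

Throughput, delay and loss indices then follow from tau and p; the paper's contribution is the analogous derivation under the critical real-time traffic condition, where stations generate packets periodically and deadlines must be met.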
Abstract:
Network simulation is an indispensable tool for studying Internet-scale networks due to their heterogeneous structure, immense size and changing properties. It is crucial for network simulators to generate representative traffic, which is necessary for effectively evaluating next-generation network protocols and applications. With network simulation, we can make a distinction between foreground traffic, which is generated by the target applications the researchers intend to study and therefore must be simulated with high fidelity, and background traffic, which represents the network traffic generated by other applications and does not require significant accuracy. The background traffic nevertheless has a significant impact on the foreground traffic, since it competes with the foreground traffic for network resources and can therefore drastically affect the behavior of the applications that produce the foreground traffic. This dissertation aims to provide a solution for meaningfully generating background traffic in three aspects. First is realism. Realistic traffic characterization plays an important role in determining the correct outcome of simulation studies. This work starts by enhancing an existing fluid background traffic model, removing two of its unrealistic assumptions. The improved model can correctly reflect the network conditions in the reverse direction of the data traffic and can reproduce the traffic burstiness observed in measurements. Second is scalability. The trade-off between accuracy and scalability is a constant theme in background traffic modeling. This work presents a fast rate-based TCP (RTCP) traffic model, which uses analytical models to represent TCP congestion control behavior. This model outperforms other existing traffic models in that it correctly captures overall TCP behavior while achieving a speedup of more than two orders of magnitude over the corresponding packet-oriented simulation. Third is network-wide traffic generation. Regardless of how detailed or scalable the models are, they mainly focus on generating traffic on a single link, which cannot easily be extended to studies of more complicated network scenarios. This work presents a cluster-based spatio-temporal background traffic generation model that considers spatial and temporal traffic characteristics as well as their correlations. The resulting model can be used effectively for evaluation work in network studies.
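As an example of the analytical relations a rate-based TCP model can use in place of packet-level simulation (not the dissertation's exact RTCP equations), the classic square-root law of Mathis et al. gives steady-state TCP throughput from segment size, round-trip time and loss rate:

    import math

    def tcp_rate_bps(mss_bytes: int, rtt_s: float, loss_prob: float) -> float:
        # throughput ~ (MSS / RTT) * sqrt(3/2) / sqrt(p)
        return (mss_bytes * 8 / rtt_s) * math.sqrt(1.5 / loss_prob)

    # e.g. 1460-byte segments, 50 ms RTT, 1% loss -> roughly 2.9 Mbit/s
    print(tcp_rate_bps(1460, 0.05, 0.01) / 1e6, "Mbit/s")

Evaluating a closed form like this once per update interval is what buys rate-based models their orders-of-magnitude speedup over simulating every packet.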
Abstract:
Many existing encrypted Internet protocols leak information through packet sizes and timing. Though seemingly innocuous, prior work has shown that such leakage can be used to recover part or all of the plaintext being encrypted. The prevalence of encrypted protocols as the underpinning of such critical services as e-commerce, remote login, and anonymity networks, together with the increasing feasibility of attacks on these services, represents a considerable risk to communications security. Existing mechanisms for preventing traffic analysis focus on re-routing and padding. These prevention techniques have considerable resource and overhead requirements. Furthermore, padding is easily detectable and, in some cases, can introduce its own vulnerabilities. To address these shortcomings, we propose embedding real traffic in synthetically generated encrypted cover traffic. Novel to our approach is the use of realistic network protocol behavior models to generate cover traffic. The observable traffic we generate also has the benefit of being indistinguishable from other real encrypted traffic, further thwarting an adversary's ability to target attacks. In this dissertation, we introduce the design of a proxy system called TrafficMimic that implements realistic cover traffic tunneling and can be used alone or integrated with the Tor anonymity system. We describe the cover traffic generation process, including the subtleties of implementing a secure traffic generator. We show that TrafficMimic cover traffic can fool a complex protocol classification attack with 91% of the accuracy of real traffic. TrafficMimic cover traffic is also not detected by a binary classification attack specifically designed to detect TrafficMimic. We evaluate the performance of tunneling with independent cover traffic models and find that they are comparable to, and in some cases more efficient than, generic constant-rate defenses. We then use simulation and analytic modeling to understand the performance of cover traffic tunneling more deeply. We find that we can take measurements from real or simulated traffic with no tunneling and use them to estimate parameters for an accurate analytic model of the performance impact of cover traffic tunneling. Once validated, we use this model to better understand how delay, bandwidth, tunnel slowdown, and stability affect cover traffic tunneling. Finally, we take the insights from our simulation study and develop several biasing techniques that we can use to match the cover traffic to the real traffic while simultaneously bounding external information leakage. We study these bias methods using simulation and evaluate their security using a Bayesian inference attack. We find that we can safely improve performance with biasing while preventing both traffic analysis and defense detection attacks. We then apply these biasing methods to the real TrafficMimic implementation and evaluate it on the Internet. We find that biasing can provide a 3-5x improvement in bandwidth for bulk transfers and a 2.5-9.5x speedup for Web browsing over tunneling without biasing.
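A toy Python sketch of the model-driven cover-traffic idea (the distributions and framing below are placeholders, not TrafficMimic's actual models or wire format): real payload is fitted into packets whose sizes and gaps are drawn from a model of a genuine encrypted protocol, with padding making up any shortfall:

    import random

    def cover_schedule(rng=random.Random(0)):
        # sizes and inter-packet gaps drawn from a (placeholder) protocol model
        while True:
            size = rng.choice([90, 576, 1460])   # modelled packet sizes, bytes
            gap = rng.expovariate(50.0)          # modelled gap, ~20 ms mean
            yield size, gap

    def tunnel(payload: bytes, schedule):
        sent, i = [], 0
        for size, gap in schedule:
            chunk = payload[i:i + size]
            i += len(chunk)
            frame = chunk + b"\x00" * (size - len(chunk))  # pad to modelled size
            sent.append((frame, gap))
            if i >= len(payload):
                break
        return sent

    packets = tunnel(b"covert payload " * 200, cover_schedule())
    print(len(packets), "cover packets scheduled")

The observable stream then follows the model's size/timing profile regardless of the payload; the dissertation's biasing techniques additionally skew these draws toward the real traffic's needs while bounding the information the skew leaks.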
Abstract:
Sustainable transport has become a necessity rather than an option to address the problems of congestion and urban sprawl, whose effects include increased trip lengths and travel times. A more sustainable form of development, known as Transit Oriented Development (TOD), is presumed to offer sustainable travel choices with a reduced need to travel to access daily destinations, by providing a mixture of land uses together with good-quality public transport service and infrastructure for walking and cycling. However, performance assessment of these developments with respect to the travel characteristics of their inhabitants is required. This research proposes a five-step methodology for evaluating the transport impacts of TODs: pre-TOD assessment, traffic and travel data collection, determination of traffic impacts, determination of travel impacts, and drawing outcomes. Typically, TODs comprise various land uses and hence have various types of users; assessment of the characteristics of all user groups is essential for obtaining an accurate picture of transport impacts. A case-study TOD, Kelvin Grove Urban Village (KGUV), located 2 km north-west of the Brisbane central business district in Australia, was selected for implementing the proposed methodology and evaluating the transport impacts of a TOD from an Australian perspective. The outcomes of this analysis indicated that KGUV generated 27 to 48 percent less traffic than standard published rates specified for homogeneous uses. Further, all user groups of KGUV used more sustainable modes of transport compared with regional and similarly located suburban users, with longer trip lengths for shopping and education trips. Although the results from this case-study development support the transport claims of reduced traffic generation and sustainable travel choices by way of TODs, further investigation is required, considering different styles, scales and locations of TODs. The proposed methodology may be further refined using results from new TODs, and a framework for TOD evaluation may be developed.
Abstract:
INTRODUCTION: Currently available volar locking plates for the treatment of distal radius fractures incorporate at least two distal screw rows for fixation of the metaphyseal fragment and have a variable-angle locking mechanism which allows placement of the screws in various directions. There is, however, no evidence that these plates translate into better outcomes or have superior biomechanical properties to first-generation plates, which had a single distal screw row and fixed-angle locking. The aim of our biomechanical study was to compare fixed-angle single-row plates with variable-angle multi-row plates to clarify the optimal number of locking screws. MATERIALS AND METHODS: Five different plate-screw combinations from three different manufacturers were tested, each group consisting of five synthetic fourth-generation distal radius bones. An AO type C2 fracture was created and the fractures were plated according to each manufacturer's recommendations. The specimens then underwent cyclic and load-to-failure testing. An optical motion analysis system was used to detect displacement of fragments. RESULTS: No significant differences were detected after cyclic loading or after load-to-failure testing with regard to axial deformation, implant rigidity or maximum displacement. The fixed-angle single-row plate showed the highest pre-test rigidity, the least increase in post-testing rigidity, the highest load-to-failure rigidity and the least radial shortening. The radial shortening of the plates with two distal screw rows was 3.1 and 4.3 times higher, respectively, than that of the fixed-angle single-row plate. CONCLUSION: The results of our study indicate that two distal screw rows do not add to construct rigidity or resistance against loss of reduction. Well-conducted clinical studies based on the findings of biomechanical studies are necessary to determine the optimal number of screws needed to achieve reproducibly good results in the treatment of distal radius fractures.
Abstract:
We describe a SystemC-based framework we are developing to explore the impact of various architectural and microarchitectural parameters of the on-chip interconnection network elements on its power and performance. The framework enables one to choose from a variety of architectural options, such as topology and routing policy, and allows experimentation with various microarchitectural options for the individual links, such as length, wire width, pitch, pipelining, supply voltage and frequency. The framework also supports a flexible traffic generation and communication model. We provide preliminary results of using this framework to study the power, latency and throughput of a 4x4 multi-core processing array using mesh, torus and folded-torus topologies, for two different communication patterns corresponding to dense and sparse linear algebra. The traffic consists of both Request-Response messages (mimicking cache accesses) and One-Way messages. We find that the average latency can be reduced by increasing the pipeline depth, as this enables higher link frequencies. We also find that there exists an optimum degree of pipelining which minimizes the energy-delay product.
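A first-order Python sketch of the pipelining trade-off the results point to (all constants are illustrative, not taken from the framework): deeper link pipelines shorten the critical path and raise the link clock, which cuts queueing delay under load, while every extra stage adds latch energy, so the energy-delay product passes through a minimum:

    T_WIRE, T_LATCH = 2.0e-9, 0.1e-9    # wire delay and per-stage latch delay, s
    E_WIRE, E_LATCH = 1.0e-12, 0.1e-12  # wire and per-stage latch energy, J
    LOAD = 0.4e9                        # offered load, flits/s

    for depth in range(1, 9):
        period = T_WIRE / depth + T_LATCH          # link clock period
        service_rate = 1.0 / period                # flits/s
        delay = depth * period + 1.0 / (service_rate - LOAD)  # pipeline + M/M/1 queue
        energy = E_WIRE + depth * E_LATCH          # energy per flit traversal
        print(f"depth {depth}: EDP = {delay * energy:.3e} J*s")

With these constants the minimum falls at a depth of three, echoing the paper's observation of an optimum degree of pipelining.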
Abstract:
We present what we believe to be the first thorough characterization of live streaming media content delivered over the Internet. Our characterization of over five million requests spanning a 28-day period is done at three increasingly granular levels, corresponding to clients, sessions, and transfers. Our findings support two important conclusions. First, we show that the nature of interactions between users and objects is fundamentally different for live versus stored objects. Access to stored objects is user driven, whereas access to live objects is object driven. This reversal of active/passive roles of users and objects leads to interesting dualities. For instance, our analysis underscores a Zipf-like profile for user interest in a given object, which is to be contrasted to the classic Zipf-like popularity of objects for a given user. Also, our analysis reveals that transfer lengths are highly variable and that this variability is due to the stickiness of clients to a particular live object, as opposed to structural (size) properties of objects. Second, based on observations we make, we conjecture that the particular characteristics of live media access workloads are likely to be highly dependent on the nature of the live content being accessed. In our study, this dependence is clear from the strong temporal correlations we observed in the traces, which we attribute to the synchronizing impact of live content on access characteristics. Based on our analyses, we present a model for live media workload generation that incorporates many of our findings, and which we implement in GISMO [19].
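One ingredient of such a workload generator, sketched in Python with illustrative parameters rather than fitted values from the paper: Zipf-like user interest in a given live object, where the user of rank r accounts for a share of the object's sessions proportional to r^-a:

    import numpy as np

    rng = np.random.default_rng(1)
    a, n_users, n_sessions = 1.0, 10_000, 50_000

    ranks = np.arange(1, n_users + 1, dtype=float)
    p = ranks**-a / (ranks**-a).sum()   # Zipf-like interest profile

    # sessions each user opens to the live object over the trace period
    sessions = rng.multinomial(n_sessions, p)
    print("rank-1 user:", sessions[0], "sessions;",
          "median user:", int(np.median(sessions)))

This captures the object-driven duality the paper highlights: for a live object, interest concentrates in a few highly engaged users, just as classic workloads concentrate a user's requests on a few popular objects.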