904 results for ACCIDENTS, TRAFFIC
Abstract:
Internet measurements show that the size distribution of Web-based transactions is usually very skewed; a few large requests constitute most of the total traffic. Motivated by the advantages of scheduling algorithms which favor short jobs, we propose to perform differentiated control over Web-based transactions to give preferential service to short Web requests. The control is realized through service semantics provided by Internet Traffic Managers, a Diffserv-like architecture. To evaluate the performance of such a control system, it is necessary to have a fast but accurate analytical method. To this end, we model the Internet as a time-shared system and propose a numerical approach which utilizes Kleinrock's conservation law to solve the model. The numerical results are shown to agree well with those obtained by packet-level simulation, which runs orders of magnitude slower than our numerical method.
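As background for the abstract above, Kleinrock's conservation law states that for any work-conserving scheduling discipline over an M/G/1 queue (standard notation, not necessarily the paper's own):

    \sum_{i} \rho_i W_i = \frac{\rho}{1-\rho} W_0, \qquad W_0 = \sum_{i} \frac{\lambda_i \, E[X_i^2]}{2}, \qquad \rho = \sum_{i} \rho_i < 1

where \lambda_i is the class-i arrival rate, E[X_i] its mean service time, \rho_i = \lambda_i E[X_i] its load, and W_i its mean waiting time. Because the weighted sum of waiting times is invariant, any preferential treatment of short requests must be paid for by longer ones; this invariance is what makes the law usable as a constraint when solving the model numerically.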
Abstract:
A number of problems in network operations and engineering call for new methods of traffic analysis. While most existing traffic analysis methods are fundamentally temporal, there is a clear need for the analysis of traffic across multiple network links — that is, for spatial traffic analysis. In this paper we give examples of problems that can be addressed via spatial traffic analysis. We then propose a formal approach to spatial traffic analysis based on the wavelet transform. Our approach (graph wavelets) generalizes the traditional wavelet transform so that it can be applied to data elements connected via an arbitrary graph topology. We explore the necessary and desirable properties of this approach and consider some of its possible realizations. We then apply graph wavelets to measurements from an operating network. Our results show that graph wavelets are very useful for our motivating problems; for example, they can be used to form highly summarized views of an entire network's traffic load, to gain insight into a network's global traffic response to a link failure, and to localize the extent of a failure event within the network.
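To make the construction concrete, below is a minimal sketch of one possible graph-wavelet realization in the disk-minus-ring spirit; the function name, the use of networkx, the equal weighting, and the dict-valued `values` argument are illustrative assumptions, not the paper's exact formulation:

    import networkx as nx

    def graph_wavelet_coeff(G, values, v, scale):
        # values: dict mapping each vertex to its measurement (e.g. link load).
        # Coefficient at vertex v: the average measurement over the inner disk
        # (vertices within scale//2 hops of v) minus the average over the outer
        # ring (vertices between scale//2 + 1 and scale hops away).
        dist = nx.single_source_shortest_path_length(G, v, cutoff=scale)
        inner = [values[u] for u, d in dist.items() if d <= scale // 2]
        outer = [values[u] for u, d in dist.items() if scale // 2 < d <= scale]
        if not outer:
            return 0.0  # nothing visible at this scale (e.g. small component)
        return sum(inner) / len(inner) - sum(outer) / len(outer)

Large coefficients at small scales flag localized anomalies such as a failed link's neighborhood; coefficients at coarse scales give the highly summarized network-wide views the abstract mentions.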
Abstract:
Quality of Service (QoS) guarantees are required by an increasing number of applications to ensure a minimal level of fidelity in the delivery of application data units through the network. Application-level QoS does not necessarily follow from any transport-level QoS guarantees regarding the delivery of the individual cells (e.g. ATM cells) which comprise the application's data units. The distinction between application-level and transport-level QoS guarantees is due primarily to the fragmentation that occurs when transmitting large application data units (e.g. IP packets, or video frames) using much smaller network cells, whereby the partial delivery of a data unit is useless, and the bandwidth spent to partially transmit it is wasted. The data units transmitted by an application may vary in size while being generated at a constant rate, which results in a variable bit rate (VBR) data flow. Such a flow requires QoS guarantees; statistical multiplexing is inadequate because it can make no guarantees and provides no firewall property between different data flows. In this paper, we present a novel resource management paradigm for the maintenance of application-level QoS for VBR flows. Our paradigm is based on Statistical Rate Monotonic Scheduling (SRMS), in which (1) each application generates its variable-size data units at a fixed rate, (2) the partial delivery of data units is of no value to the application, and (3) the QoS guarantee extended to the application is the probability that an arbitrary data unit will be successfully transmitted through the network to/from the application.
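A minimal sketch of the all-or-nothing budget idea this paradigm suggests; the class and field names are hypothetical and the actual SRMS admission rules are given in the SRMS literature, not here:

    # Each flow is promised that a fraction `qos` of its fixed-rate data
    # units goes through; a per-period budget admits a unit only if it can
    # be sent in full, since partial delivery is worthless.
    class Flow:
        def __init__(self, period, budget, qos):
            self.period = period    # one data unit generated per period
            self.budget = budget    # cells the flow may send per replenishment
            self.credit = budget
            self.qos = qos          # promised P(unit delivered in full)

        def replenish(self):
            self.credit = self.budget

        def admit(self, unit_size_cells):
            # All-or-nothing: transmit only if the whole unit fits.
            if unit_size_cells <= self.credit:
                self.credit -= unit_size_cells
                return True
            return False            # reject up front; waste no bandwidth

The key design point is rejecting oversized units before transmission: bandwidth is never spent on a data unit that cannot be delivered whole.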
Abstract:
The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However, the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue that the distributions of packet features (IP addresses and ports) observed in flow traces reveal both the presence and the structure of a wide range of anomalies. Using entropy as a summarization tool, we show that the analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that using feature distributions, anomalies naturally fall into distinct and meaningful clusters. These clusters can be used to automatically classify anomalies and to uncover new anomaly types. We validate our claims on data from two backbone networks (Abilene and Geant) and conclude that feature distributions show promise as a key element of a fairly general network anomaly diagnosis framework.
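The summarization step is straightforward to sketch. The normalization by the log of the number of distinct values below is one common choice for comparing traces of different sizes; it is an assumption here, not necessarily the paper's exact definition:

    import math
    from collections import Counter

    def normalized_entropy(feature_values):
        # feature_values: one feature column of a flow trace, e.g. the
        # source-IP field of every sampled flow record.
        counts = Counter(feature_values)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        distinct = len(counts)
        return h / math.log2(distinct) if distinct > 1 else 0.0

A port scan, for example, concentrates destination IPs (entropy drops) while dispersing destination ports (entropy rises); tracking such joint shifts across features is what enables both detection and clustering-based classification.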
Abstract:
This paper formally defines the operational semantics for TRAFFIC, a specification language for flow composition applications proposed in BUCS-TR-2005-014, and presents a type system based on desired safety assurance. We provide proofs on reduction (weak confluence, strong normalization, and uniqueness of normal forms), on soundness and completeness of the type system with respect to reduction, and on equivalence classes of flow specifications. Finally, we provide a pseudo-code listing of a syntax-directed type-checking algorithm that implements the rules of the type system and infers the type of a closed flow specification.
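TRAFFIC's actual syntax and typing rules are defined in BUCS-TR-2005-014; purely as an illustration of what a syntax-directed checker over flow compositions looks like, here is a hypothetical sketch with flows typed by (input, output) pairs:

    # Illustrative only: not TRAFFIC's real type system. Specifications are
    # nested tuples; the checker recurses on syntax, so each form has
    # exactly one applicable rule (syntax-directed).
    def infer(spec):
        kind = spec[0]
        if kind == "flow":                  # ("flow", in_type, out_type)
            return (spec[1], spec[2])
        if kind == "seq":                   # ("seq", f, g): f feeds g
            f_in, f_out = infer(spec[1])
            g_in, g_out = infer(spec[2])
            if f_out != g_in:
                raise TypeError(f"cannot compose: {f_out} -> {g_in}")
            return (f_in, g_out)
        if kind == "par":                   # ("par", f, g): side by side
            f_in, f_out = infer(spec[1])
            g_in, g_out = infer(spec[2])
            return ((f_in, g_in), (f_out, g_out))
        raise ValueError(f"unknown form: {kind}")

    # e.g. infer(("seq", ("flow", "pkt", "tcp"), ("flow", "tcp", "http")))
    #      returns ("pkt", "http")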
Abstract:
A common assumption made in traffic matrix (TM) modeling and estimation is independence of a packet's network ingress and egress. We argue that in real IP networks, this assumption should not and does not hold. The fact that most traffic consists of two-way exchanges of packets means that traffic streams flowing in opposite directions at any point in the network are not independent. In this paper we propose a model for traffic matrices based on independence of connections rather than packets. We argue that the independent connection (IC) model is more intuitive, and has a more direct connection to underlying network phenomena than the gravity model. To validate the IC model, we show that it fits real data better than the gravity model and that it works well as a prior in the TM estimation problem. We study the model's parameters empirically and identify useful stability properties. This justifies the use of the simpler versions of the model for TM applications. To illustrate the utility of the model we focus on two such applications: synthetic TM generation and TM estimation. To the best of our knowledge this is the first traffic matrix model that incorporates properties of bidirectional traffic.
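For concreteness, the gravity-model baseline against which the IC model is compared can be sketched as follows; the IC model itself couples T[i][j] with T[j][i] through shared per-connection parameters, and its exact form is given in the paper:

    import numpy as np

    # The packet-level gravity model predicts the traffic matrix entry
    # T[i, j] as proportional to (total traffic entering at i) times
    # (total traffic leaving at j); it therefore treats ingress and
    # egress as independent, the assumption the IC model drops.
    def gravity_tm(row_totals, col_totals):
        row = np.asarray(row_totals, dtype=float)
        col = np.asarray(col_totals, dtype=float)
        return np.outer(row, col) / row.sum()

    # e.g. gravity_tm([60, 40], [70, 30]) predicts T[0, 0] = 60*70/100 = 42.

Note that nothing in this formula ties T[i, j] to T[j, i]; a model of two-way connections necessarily correlates the two directions, which is the structural difference the paper exploits.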
Abstract:
We present a thorough characterization of the access patterns in blogspace -- a fast-growing constituent of the content available through the Internet -- which comprises a rich interconnected web of blog postings and comments by an increasingly prominent community of users who collectively define what has become known as the blogosphere. Our characterization of over 35 million read, write, and administrative requests spanning a 28-day period is done from three different blogosphere perspectives. The server view characterizes the aggregate access patterns of all users to all blogs; the user view characterizes how individual users interact with blogosphere objects (blogs); the object view characterizes how individual blogs are accessed. Our findings support two important conclusions. First, we show that the nature of interactions between users and objects is fundamentally different in blogspace from that observed in traditional web content. Access to objects in blogspace can be conceived of as part of an interaction between an author and their readership. As we show in our work, such interactions range from one-to-many "broadcast-type" and many-to-one "registration-type" communication between an author and their readers, to multi-way, iterative "parlor-type" dialogues among members of an interest group. This more-interactive nature of the blogosphere leads to interesting traffic and communication patterns, which are different from those observed in traditional web content. Second, we identify and characterize novel features of the blogosphere workload, and we investigate the similarities and differences between typical web server workloads and blogosphere server workloads. Given the increasing share of blogspace traffic, understanding such differences is important for capacity planning and traffic engineering purposes, for example.
Abstract:
We propose a novel data-delivery method for delay-sensitive traffic that significantly reduces the energy consumption in wireless sensor networks without reducing the number of packets that meet end-to-end real-time deadlines. The proposed method, referred to as SensiQoS, leverages the spatial and temporal correlation between the data generated by events in a sensor network and realizes energy savings through application-specific in-network aggregation of the data. SensiQoS maximizes energy savings by adaptively waiting for packets from upstream nodes to perform in-network processing without missing the real-time deadline for the data packets. SensiQoS is a distributed packet scheduling scheme, where nodes make localized decisions on when to schedule a packet for transmission to meet its end-to-end real-time deadline and to which neighbor they should forward the packet to save energy. We also present a localized algorithm for nodes to adapt to network traffic to maximize energy savings in the network. Simulation results show that SensiQoS improves the energy savings in sensor networks where events are sensed by multiple nodes, and spatial and/or temporal correlation exists among the data packets. Energy savings due to SensiQoS increase with the density of the sensor nodes and with the size of the sensed events.
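A minimal sketch of the adaptive-hold idea: the function name and the even slack-splitting rule are illustrative assumptions, not the paper's exact scheduling algorithm:

    # A node may hold a packet to wait for correlated upstream packets
    # and aggregate them, but only while the remaining end-to-end
    # deadline still covers the estimated downstream forwarding time.
    def hold_time(deadline, now, per_hop_latency, hops_remaining):
        if hops_remaining <= 0:
            return 0.0                  # at the sink: nothing to hold for
        slack = (deadline - now) - per_hop_latency * hops_remaining
        if slack <= 0:
            return 0.0                  # no slack left: forward immediately
        # Spread the slack across the remaining hops so downstream nodes
        # can also wait for correlated packets and aggregate.
        return slack / hops_remaining

Holding longer saves more energy (more packets merged into one transmission) but consumes deadline slack, which is why the decision must be made locally against the packet's remaining time budget.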
Abstract:
The mathematical simulation of the evacuation process has a wide and largely untapped scope of application within the aircraft industry. The function of the mathematical model is to provide insight into complex behaviour by allowing designers, legislators, and investigators to ask ‘what if’ questions. Such a model, EXODUS, is currently under development, and this paper describes its evolution and potential applications. EXODUS is an egress model designed to simulate the evacuation of large numbers of individuals from an enclosure, such as an aircraft. The model tracks the trajectory of each individual as they make their way out of the enclosure or are overcome by fire hazards, such as heat and toxic gases. The software is expert system-based, the progressive motion and behaviour of each individual being determined by a set of heuristics or rules. EXODUS comprises five core interacting components: (i) the Movement Submodel — controls the physical movement of individual passengers from their current position to the most suitable neighbouring location; (ii) the Behaviour Submodel — determines an individual's response to the current prevailing situation; (iii) the Passenger Submodel — describes an individual as a collection of 22 defining attributes and variables; (iv) the Hazard Submodel — controls the atmospheric and physical environment; and (v) the Toxicity Submodel — determines the effects on an individual exposed to the fire products, heat, and narcotic gases through the Fractional Effective Dose calculations. These components are briefly described and their capabilities and limitations are demonstrated through comparison with experimental data and several hypothetical evacuation scenarios.
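The Fractional Effective Dose idea in the Toxicity Submodel can be sketched as a running integral; the simple linear accumulation below is an illustrative assumption (actual FED models, such as Purser's, use gas-specific and nonlinear terms):

    # Exposure to each narcotic gas is integrated over time as a fraction
    # of the dose that would cause incapacitation, and the fractions are
    # summed across gases; an occupant is overcome once FED >= 1.
    def fed(concentration_series, critical_dose, dt):
        # concentration_series: {gas: [ppm at each timestep]}
        # critical_dose: {gas: incapacitating dose in ppm*min} (placeholders)
        # dt: timestep length in minutes
        total = 0.0
        for gas, series in concentration_series.items():
            total += sum(c * dt for c in series) / critical_dose[gas]
        return total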
Abstract:
Traffic policing and bandwidth management strategies at the User Network Interface (UNI) of an ATM network are investigated by simulation. The network is assumed to transport real time (RT) traffic, such as voice and video, as well as non-real time (non-RT) data traffic. The proposed policing function, called the super leaky bucket (S-LB), is based on the leaky bucket (LB), but handles the three types of traffic differently according to their quality of service (QoS) requirements. Separate queues are maintained for RT and non-RT traffic. They are normally served alternately, but if the number of queued RT cells exceeds a threshold, the RT queue receives non-pre-emptive priority; if the RT queue grows further still, low-priority cells are discarded. Non-RT cells are buffered and the sources are throttled back during periods of congestion. The simulations clearly demonstrate the advantages of the proposed strategy in providing improved levels of service (delay, jitter and loss) for all types of traffic.
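A minimal sketch of the serving discipline described in the abstract; the thresholds, class name, and the choice to drop from the non-RT queue are illustrative assumptions (the paper determines these details by simulation):

    from collections import deque

    class SuperLeakyBucketServer:
        def __init__(self, priority_threshold=50, drop_threshold=100):
            self.rt_q, self.nrt_q = deque(), deque()
            self.priority_threshold = priority_threshold
            self.drop_threshold = drop_threshold
            self.serve_rt_next = True

        def next_cell(self):
            # Severe congestion: discard a low-priority cell. (Which cells
            # count as low priority is a policing detail in the paper; we
            # drop from the non-RT queue purely for illustration.)
            if len(self.rt_q) > self.drop_threshold and self.nrt_q:
                self.nrt_q.popleft()
            # Congestion: the RT queue gets non-pre-emptive priority.
            if len(self.rt_q) > self.priority_threshold:
                return self.rt_q.popleft()
            # Normal operation: serve the two queues alternately.
            self.serve_rt_next = not self.serve_rt_next
            first, second = ((self.rt_q, self.nrt_q) if self.serve_rt_next
                             else (self.nrt_q, self.rt_q))
            if first:
                return first.popleft()
            return second.popleft() if second else None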
Abstract:
This paper describes the AASK database, which is unique in being a record of human behaviour during survivable aviation accidents. It is built from interview data collected by agencies such as the NTSB and the AAIB. The database is available at http://fseg.gre.ac.uk
Abstract:
The Aircraft Accident Statistics and Knowledge (AASK) database is a repository of passenger accounts from survivable aviation accidents/incidents compiled from interview data collected by agencies such as the US NTSB. Its main purpose is to store observational and anecdotal data from the actual interviews of the occupants involved in aircraft accidents. The database has wide application to aviation safety analysis, being a source of factual data regarding the evacuation process. It also plays a significant role in the development of the airEXODUS aircraft evacuation model, where insight into how people actually behave during evacuation from survivable aircraft crashes is required. This paper describes the latest version of the database (Version 4.0) and includes some analysis of passenger behavior during actual accidents/incidents.
Abstract:
A hotly debated issue in the area of aviation safety is the number of cabin crew members required to evacuate an aircraft in the event of an emergency. Most countries regulate the minimum number required for the safe operation of an aircraft, but these rulings are based on little, if any, scientific evidence. Another issue of concern is the failure rate of exits and slides. This paper examines these issues using the latest version of the Aircraft Accident Statistics and Knowledge database (AASK V4.0), which contains information from 105 survivable crashes and more than 2,000 survivors, including accounts from 155 cabin crew members.
Abstract:
This report concerns the development of the AASK V4.0 database (CAA Project 560/SRG/R+AD). AASK is the Aircraft Accident Statistics and Knowledge database, a repository of survivor accounts from aviation accidents. Its main purpose is to store observational and anecdotal data from interviews of the occupants involved in aircraft accidents. The AASK database has wide application to aviation safety analysis, being a source of factual data regarding the evacuation process. It is also key to the development of aircraft evacuation models such as airEXODUS, where insight into how people actually behave during evacuation from survivable aircraft crashes is required. With support from the UK CAA (Project 277/SRG/R&AD), AASK V3.0 was developed. This was an online prototype system, available over the internet to selected users, which included a significantly increased number of passenger accounts compared with earlier versions, introduced cabin crew accounts and fatality information, and improved functionality through the seat plan viewer utility. The most recently completed AASK project (Project 560/SRG/R+AD) involved four main components: a) analysis of the data collected in V3.0; b) continued collection and entry of data into AASK; c) maintenance and functional development of the AASK database; and d) a user feedback survey. All four components have been pursued and completed in this two-year project. The current version, developed in the last year of the project, is referred to as AASK V4.0. This report provides summaries of the work done and the results obtained in relation to the project deliverables.