911 results for "Photography in traffic engineering".
Abstract:
Internet measurements show that the size distribution of Web-based transactions is usually very skewed; a few large requests constitute most of the total traffic. Motivated by the advantages of scheduling algorithms that favor short jobs, we propose to perform differentiated control over Web-based transactions to give preferential service to short Web requests. The control is realized through service semantics provided by Internet Traffic Managers, a Diffserv-like architecture. To evaluate the performance of such a control system, it is necessary to have a fast but accurate analytical method. To this end, we model the Internet as a time-shared system and propose a numerical approach that utilizes Kleinrock's conservation law to solve the model. The numerical results are shown to closely match those obtained by packet-level simulation, which runs orders of magnitude slower than our numerical method.
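For context, Kleinrock's conservation law referenced above is a standard queueing result (quoted here in its textbook form, not taken from the paper itself): for an M/G/1 queue under any work-conserving, non-preemptive scheduling discipline, the load-weighted mean waiting times are invariant,

\[
\sum_{i=1}^{N} \rho_i \,\overline{W_i} \;=\; \frac{\rho\,W_0}{1-\rho},
\qquad
W_0 = \sum_{i=1}^{N} \frac{\lambda_i\,\overline{S_i^2}}{2},
\]

where \(\rho_i = \lambda_i \overline{S_i}\) is the load of class \(i\), \(\rho = \sum_i \rho_i\), and \(\overline{W_i}\) is the mean waiting time of class \(i\). Under this constraint, reducing the waiting time of short requests necessarily increases that of long ones, which is the trade-off that differentiated control of Web transactions exploits.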
Abstract:
We present a thorough characterization of the access patterns in blogspace, which comprises a rich interconnected web of blog postings and comments by an increasingly prominent user community that collectively defines what has become known as the blogosphere. Our characterization of over 35 million read, write, and management requests spanning a 28-day period is done at three different levels. The user view characterizes how individual users interact with blogosphere objects (blogs); the object view characterizes how individual blogs are accessed; the server view characterizes the aggregate access patterns of all users to all blogs. The more interactive nature of the blogosphere leads to interesting traffic and communication patterns, which differ from those observed for traditional web content. We identify and characterize novel features of the blogosphere workload, and we show the similarities and differences between typical web server workloads and blogosphere server workloads. Finally, based on our main characterization results, we build a new synthetic blogosphere workload generator called GBLOT, which aims to closely mimic a stream of requests originating from a population of blog users. Given the increasing share of blogspace traffic, realistic workload models and tools are important for capacity planning and traffic engineering purposes.
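The abstract does not describe GBLOT's internals; as a purely illustrative sketch, a minimal synthetic request-stream generator for blog-like workloads might combine a heavy-tailed (Zipf-like) object popularity distribution with Poisson request arrivals and a read/write mix. Every parameter name and value below is an assumption for illustration, not a property of GBLOT.

```python
import numpy as np

def synthetic_blog_workload(n_blogs=10_000, n_requests=100_000,
                            zipf_a=1.2, rate_per_sec=50.0,
                            read_fraction=0.9, seed=0):
    """Illustrative generator (NOT GBLOT): Zipf-distributed blog popularity,
    Poisson (exponential inter-arrival) request times, and a fixed
    read/write mix. Returns a list of (timestamp, blog_id, op) tuples."""
    rng = np.random.default_rng(seed)
    t = 0.0
    trace = []
    for _ in range(n_requests):
        t += rng.exponential(1.0 / rate_per_sec)   # Poisson arrival process
        blog = min(int(rng.zipf(zipf_a)), n_blogs)  # heavy-tailed popularity, clamped
        op = "read" if rng.random() < read_fraction else "write"
        trace.append((t, blog, op))
    return trace

if __name__ == "__main__":
    trace = synthetic_blog_workload(n_requests=1000)
    print(trace[:3])
```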
Abstract:
The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion-avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets to satisfy delivery requirements, such as predictable (consistent) and fair service. In this paper, we investigate the interaction between short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other, as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both tail-drop and Random Early Drop (RED) packet-dropping policies. The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low, since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows.
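To illustrate the class-based isolation idea in miniature (this is a toy sketch, not the paper's router mechanism), an edge classifier could treat a flow as "short" until its cumulative size crosses a threshold, after which its packets are steered to the long-flow queue. The threshold value and class names below are arbitrary placeholders.

```python
from collections import defaultdict, deque

SHORT_FLOW_THRESHOLD = 64 * 1024   # bytes; illustrative value, not from the paper

class TwoClassIsolator:
    """Toy sketch: track per-flow byte counts at the edge and place packets
    into one of two queues (short vs. long flows) that are served separately."""
    def __init__(self):
        self.bytes_seen = defaultdict(int)
        self.short_queue = deque()
        self.long_queue = deque()

    def enqueue(self, flow_id, packet_size):
        self.bytes_seen[flow_id] += packet_size
        if self.bytes_seen[flow_id] <= SHORT_FLOW_THRESHOLD:
            self.short_queue.append((flow_id, packet_size))
        else:
            self.long_queue.append((flow_id, packet_size))

isolator = TwoClassIsolator()
isolator.enqueue("flowA", 1500)      # stays in the short-flow queue
isolator.enqueue("flowB", 200_000)   # crosses the threshold -> long-flow queue
print(len(isolator.short_queue), len(isolator.long_queue))
```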
Abstract:
A common assumption made in traffic matrix (TM) modeling and estimation is independence of a packet's network ingress and egress. We argue that in real IP networks, this assumption should not and does not hold. The fact that most traffic consists of two-way exchanges of packets means that traffic streams flowing in opposite directions at any point in the network are not independent. In this paper we propose a model for traffic matrices based on independence of connections rather than packets. We argue that the independent connection (IC) model is more intuitive, and has a more direct connection to underlying network phenomena than the gravity model. To validate the IC model, we show that it fits real data better than the gravity model and that it works well as a prior in the TM estimation problem. We study the model's parameters empirically and identify useful stability properties. This justifies the use of the simpler versions of the model for TM applications. To illustrate the utility of the model we focus on two such applications: synthetic TM generation and TM estimation. To the best of our knowledge this is the first traffic matrix model that incorporates properties of bidirectional traffic.
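For comparison, the gravity model against which the IC model is evaluated is a standard baseline (its usual form, not a result of this paper): it estimates each traffic-matrix entry from row and column totals alone,

\[
\hat{X}_{ij} \;=\; \frac{X_{i\cdot}\,X_{\cdot j}}{X_{\cdot\cdot}},
\]

where \(X_{i\cdot}\) is the total traffic entering the network at ingress \(i\), \(X_{\cdot j}\) is the total traffic leaving at egress \(j\), and \(X_{\cdot\cdot}\) is the network-wide total. The IC model instead builds the matrix from two-way connections, so forward and reverse traffic between a pair of edge nodes are coupled rather than assumed independent.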
Abstract:
Within industrial automation systems, three-dimensional (3-D) vision provides very useful feedback information for the autonomous operation of various manufacturing equipment (e.g., industrial robots, material handling devices, assembly systems, and machine tools). The hardware performance of contemporary 3-D scanning devices is suitable for online utilization. However, the bottleneck is the lack of real-time algorithms for the recognition of geometric primitives (e.g., planes and natural quadrics) from a scanned point cloud. One of the most important and most frequent geometric primitives in various engineering tasks is the plane. In this paper, we propose a new fast one-pass algorithm for the recognition (segmentation and fitting) of planar segments from a point cloud. To effectively segment planar regions, we exploit the orthogonality of certain wavelets to polynomial functions, as well as their sensitivity to abrupt changes. After segmentation of planar regions, we estimate the parameters of the corresponding planes using standard fitting procedures. For point-cloud structuring, a z-buffer algorithm with mesh-triangle representation in barycentric coordinates is employed. The proposed recognition method is tested and experimentally validated in several real-world case studies.
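The "standard fitting procedures" for estimating plane parameters are not spelled out in the abstract; a common least-squares approach (shown here only as a generic sketch, not the paper's algorithm) fits a plane to a segmented point set via the singular value decomposition of the centered points.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (normal, d) such that
    normal . x + d = 0 for points x on the plane.
    points: (N, 3) array of 3-D points from one planar segment."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector associated with the
    # smallest singular value of the centered point matrix.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d

# Example: noisy samples of the plane z = 0.5x + 0.2y + 1
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))
z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 1 + rng.normal(0, 0.01, 500)
normal, d = fit_plane(np.column_stack([xy, z]))
print(normal, d)
```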
Abstract:
FUELCON is an expert system in nuclear engineering. Its task is optimized refueling design, which is crucial for keeping down operating costs at a plant. FUELCON proposes sets of alternative fuel-allocation configurations; the fuel is positioned in a grid representing the core of a reactor. The practitioner of in-core fuel management uses FUELCON to generate a reasonably good configuration for the situation at hand. The domain expert, on the other hand, resorts to the system to test heuristics and discover new ones for the task described above. Expert use involves a manual phase of revising the ruleset, based on performance during previous iterations in the same session. This paper is concerned with a new phase: the design of a neural component to carry out the revision automatically. Such an automated revision considers the previous performance of the system and uses it for adaptation and for learning better rules. The neural component is based on a particular schema for a symbolic-to-recurrent-analogue bridge, called NIPPL, and on the reinforcement learning of neural networks for adaptation.
Abstract:
The corrosion of steel reinforcement bars in reinforced concrete structures exposed to severe marine environments is usually attributed to the aggressive nature of chloride ions. In some cases in practice, corrosion has been observed to commence within a few years of exposure, even with considerable concrete cover over the reinforcement and apparently high-quality concretes. However, there are a number of other cases in practice for which corrosion initiation took much longer, even with quite modest concrete cover and modest concrete quality. Many of these structures show satisfactory long-term structural performance, despite having high levels of localized chloride concentrations at the reinforcement. This disparity was noted more than 50 years ago but still appears not to be fully explained. This paper presents a systematic overview of cases reported in the engineering and corrosion literature and considers possible reasons for these differences. Consistent with observations by others, the data show that concretes made from blast furnace cements have better corrosion-durability properties. The data also strongly suggest that concretes made with limestone or non-reactive dolomite aggregates, or with sufficiently high levels of other forms of calcium carbonates, have favourable reinforcement-corrosion properties. Both corrosion initiation and the onset of significant damage are delayed. Some possible reasons for this are explored briefly.
Abstract:
The primary intention of this paper is to review the current state of the art in engineering cost modelling as applied to aerospace. This is a topic of current interest, and in addressing the literature, the presented work also sets out some of the recognised definitions of cost that relate to the engineering domain. The paper does not attempt to address the higher-level financial sector but rather focuses on the costing issues directly relevant to the engineering process, primarily those of design and manufacture. This is of more contemporary interest as there is now a shift towards analysing the influence of cost, defined in more engineering-related terms, in an attempt to link into integrated product and process development (IPPD) within a concurrent engineering environment. Consequently, the cost definitions are reviewed in the context of the nature of cost as applicable to the engineering process stages: from bidding through to design, to manufacture, to procurement and, ultimately, to operation. The linkage and integration of design and manufacture is addressed in some detail. This leads naturally to the concept of engineers influencing and controlling cost within their own domain rather than entrusting this to financiers who have little control over the causes of cost. In terms of influence, the engineer creates the potential for cost, and in a concurrent environment this requires models that integrate cost into the decision-making process.
Abstract:
A new universal flow map has been developed for two-phase co-current flow. The map has been successfully tested against a wide variety of data. Flow regime transition predictors suggested by other authors have been shown to be useful. New transitional models are proposed for the stratified-to-annular, blow-through slug, and intermittent regimes.