961 results for OPERATORS


Relevance:

10.00%

Publisher:

Abstract:

Anomalies are unusual and significant changes in a network's traffic levels, which can often involve multiple links. Diagnosing anomalies is critical for both network operators and end users. It is a difficult problem because one must extract and interpret anomalous patterns from large amounts of high-dimensional, noisy data. In this paper we propose a general method to diagnose anomalies. This method is based on a separation of the high-dimensional space occupied by a set of network traffic measurements into disjoint subspaces corresponding to normal and anomalous network conditions. We show that this separation can be performed effectively using Principal Component Analysis. Using only simple traffic measurements from links, we study volume anomalies and show that the method can: (1) accurately detect when a volume anomaly is occurring; (2) correctly identify the underlying origin-destination (OD) flow which is the source of the anomaly; and (3) accurately estimate the amount of traffic involved in the anomalous OD flow. We evaluate the method's ability to diagnose (i.e., detect, identify, and quantify) both existing and synthetically injected volume anomalies in real traffic from two backbone networks. Our method consistently diagnoses the largest volume anomalies, and does so with a very low false alarm rate.
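A minimal NumPy sketch of the subspace separation described: PCA splits the link-traffic matrix into a normal subspace (the top principal components) and an anomalous residual subspace, and the squared prediction error (SPE) of each timestep's residual flags volume anomalies. The 3-sigma threshold and the injected anomaly are illustrative simplifications of the paper's statistical detection test.

```python
import numpy as np

def subspace_anomaly_scores(X, k):
    """Separate link-traffic measurements into normal/anomalous subspaces via PCA.

    X : (timesteps x links) matrix of traffic volumes.
    k : number of principal components spanning the 'normal' subspace.
    Returns the squared prediction error (SPE) of each timestep's residual.
    """
    Xc = X - X.mean(axis=0)                    # centre each link's timeseries
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T                               # top-k principal directions
    residual = Xc - Xc @ P @ P.T               # component in the anomalous subspace
    return np.sum(residual**2, axis=1)         # SPE per timestep

# Flag timesteps whose residual energy exceeds a simple 3-sigma threshold.
rng = np.random.default_rng(0)
X = rng.normal(100, 5, size=(500, 40))         # synthetic traffic on 40 links
X[250, :5] += 80                               # inject a volume anomaly on 5 links
spe = subspace_anomaly_scores(X, k=4)
print(np.where(spe > spe.mean() + 3 * spe.std())[0])  # -> [250]
```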

Relevance:

10.00%

Publisher:

Abstract:

Detecting and understanding anomalies in IP networks is an open and ill-defined problem. Toward this end, we have recently proposed the subspace method for anomaly diagnosis. In this paper we present the first large-scale exploration of the power of the subspace method when applied to flow traffic. An important aspect of this approach is that it fuses information from flow measurements taken throughout a network. We apply the subspace method to three different types of sampled flow traffic in a large academic network: multivariate timeseries of byte counts, packet counts, and IP-flow counts. We show that each traffic type brings into focus a different set of anomalies via the subspace method. We illustrate and classify the set of anomalies detected. We find that almost all of the anomalies detected represent events of interest to network operators. Furthermore, the anomalies span a remarkably wide spectrum of event types, including denial of service attacks (single-source and distributed), flash crowds, port scanning, downstream traffic engineering, high-rate flows, worm propagation, and network outage.

Relevance:

10.00%

Publisher:

Abstract:

For any q > 1, let MOD_q be a quantum gate that determines if the number of 1's in the input is divisible by q. We show that for any q,t > 1, MOD_q is equivalent to MOD_t (up to constant depth). Based on the case q=2, Moore has shown that quantum analogs of AC^(0), ACC[q], and ACC, denoted QAC^(0)_wf, QACC[2], QACC respectively, define the same class of operators, leaving q > 2 as an open question. Our result resolves this question, implying that QAC^(0)_wf = QACC[q] = QACC for all q. We also prove the first upper bounds for QACC in terms of related language classes. We define classes of languages EQACC, NQACC (both for arbitrary complex amplitudes) and BQACC (for rational number amplitudes) and show that they are all contained in TC^(0). To do this, we show that a TC^(0) circuit can keep track of the amplitudes of the state resulting from the application of a QACC operator using a constant width polynomial size tensor sum. In order to accomplish this, we also show that TC^(0) can perform iterated addition and multiplication in certain field extensions.
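For reference, one common convention for the gate family discussed, following the abstract's description (the gate outputs 1 exactly when the number of 1s in the input is divisible by q):

```latex
\[
\mathrm{MOD}_q(x_1,\dots,x_n) =
\begin{cases}
1 & \text{if } \sum_{i=1}^{n} x_i \equiv 0 \pmod{q},\\
0 & \text{otherwise.}
\end{cases}
\]
```

With this convention, the paper's first result states that a single MOD_q gate can be simulated, up to constant depth, by a circuit of MOD_t gates for any t > 1.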

Relevance:

10.00%

Publisher:

Abstract:

The last 30 years have seen Fuzzy Logic (FL) emerge as an approach that either complements or challenges stochastic methods, the traditional means of modelling uncertainty. The circumstances under which FL or stochastic methods should be used, however, remain disputed: their areas of application overlap, and opinions differ as to when each method is appropriate. Practically relevant case studies comparing the two methods are lacking. This work compares stochastic and FL methods for the assessment of spare capacity, using pharmaceutical high purity water (HPW) utility systems as a case study. The goal of this study was to find the most appropriate method for modelling uncertainty in industrial scale HPW systems. The results provide evidence that stochastic methods are superior to FL methods for simulating uncertainty in chemical plant utilities, including HPW systems, in the typical case where extreme events (for example, peaks in demand) or day-to-day variation, rather than average values, are of interest. Average production output or other statistical measures may, for instance, be of interest in the assessment of workshops. The results further indicate that the stochastic model should be used only if a deterministic simulation shows it to be necessary. Consequently, this thesis concludes that either deterministic or stochastic methods should be used to simulate uncertainty in chemical plant utility systems, and by extension some process systems, because extreme events and the modelling of day-to-day variation are important in capacity extension projects. Other reasons supporting the preference for stochastic HPW models over FL HPW models include:

1. The computer code for stochastic models is typically less complex than that for FL models, reducing code maintenance and validation issues.

2. In many respects FL models are similar to deterministic models. The need for an FL model over a deterministic model is therefore questionable for industrial scale HPW systems as presented here (and other similar systems), since the deterministic model is the simpler of the two.

3. An FL model may be difficult to "sell" to an end user, as its results represent "approximate reasoning", for which an accepted definition is lacking.

4. Stochastic models may be applied, with relatively minor modifications, to other systems, whereas FL models may not. For instance, the stochastic HPW model could be used to model municipal drinking water systems, whereas the FL HPW model could not, because the FL and stochastic model philosophies of an HPW system are fundamentally different. The stochastic model treats schedule and volume uncertainties as random phenomena described by statistical distributions based on either estimated or historical data (see the sketch below). The FL model, by contrast, simulates schedule uncertainties from estimated operator behaviour, e.g. the tiredness of the operators and their working schedule; in a municipal drinking water distribution system the notion of an "operator" breaks down.

5. Stochastic methods can account for uncertainties that are difficult to model with FL. The FL HPW model does not account for dispensed volume uncertainty, as there appears to be no reasonable way to capture it with FL, whereas the stochastic model includes volume uncertainty.
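A minimal Monte Carlo sketch of the stochastic view referenced in point 4: schedule and volume uncertainties modelled as random draws over a daily demand profile. All parameters (the profile, jitter range, noise level, and capacity) are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # Monte Carlo runs (hypothetical HPW system parameters throughout)

# Hourly base demand profile (m3/h) with morning, midday and evening draws.
base_demand = np.array([0.2]*6 + [1.0]*4 + [0.5]*4 + [1.2]*4 + [0.3]*6)
start_jitter = rng.integers(-2, 3, size=N)          # schedule uncertainty (hours)
volume_noise = rng.normal(1.0, 0.15, size=(N, 24))  # dispensed-volume uncertainty

peaks = np.empty(N)
for i in range(N):
    profile = np.roll(base_demand, start_jitter[i]) * volume_noise[i]
    peaks[i] = profile.max()

capacity = 1.4  # m3/h, assumed production capacity
print("P(hourly demand exceeds capacity):", np.mean(peaks > capacity))
```

The extreme-event probability estimated here is exactly the kind of quantity the abstract argues stochastic methods capture and FL methods do not.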

Relevance:

10.00%

Publisher:

Abstract:

In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media, e-government etc. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnect. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (the same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: a 1Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting laser diodes, widely used for short-reach optical interconnect), 40 photodiodes, and the electronics operating at 25Gb/s in the same module as today's 100Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work examines a number of state-of-the-art technologies, investigates their performance constraints, and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65nm and 28nm) CMOS technology are explored, while maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques. These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format increases the throughput per individual channel, which helps overcome the challenges above in realising 400Gb/s to 1Tb/s transceivers.
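To illustrate why a multilevel format raises per-channel throughput, here is a minimal sketch of PAM-4 symbol mapping: two bits per symbol, so a link carries twice the bits of NRZ at the same symbol rate. The Gray-coded level assignment is a common convention, not necessarily the mapping used in this work.

```python
import numpy as np

# Gray-coded PAM-4: adjacent levels differ in one bit, limiting symbol-error
# impact; two bits per symbol means half the baud rate for a given bit rate.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}  # normalised levels

def pam4_modulate(bits):
    """Map a bit stream onto PAM-4 amplitude levels (2 bits/symbol)."""
    assert len(bits) % 2 == 0
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([GRAY_PAM4[p] for p in pairs])

bits = np.random.default_rng(1).integers(0, 2, size=16)
symbols = pam4_modulate(tuple(bits))
print(symbols)  # 8 symbols carry 16 bits: same baud rate, double the throughput
```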

Relevance:

10.00%

Publisher:

Abstract:

Wind energy is predominantly a nonsynchronous generation source. Large-scale integration of wind generation with existing electricity systems, therefore, presents challenges in maintaining system frequency stability and local voltage stability. Transmission system operators have implemented system operational constraints (SOCs) in order to maintain stability with high wind generation, but imposition of these constraints results in higher operating costs. A mixed integer programming tool was used to simulate generator dispatch in order to assess the impact of various SOCs on generation costs. Interleaved day-ahead scheduling and real-time dispatch models were developed to allow accurate representation of forced outages and wind forecast errors, and were applied to the proposed Irish power system of 2020 with a wind penetration of 32%. Savings of at least 7.8% in generation costs and reductions in wind curtailment of 50% were identified when the most influential SOCs were relaxed. The results also illustrate the need to relax local SOCs together with the system-wide nonsynchronous penetration limit SOC, as savings from increasing the nonsynchronous limit beyond 70% were restricted without relaxation of local SOCs. The methodology and results allow for quantification of the costs of SOCs, allowing the optimal upgrade path for generation and transmission infrastructure to be determined.
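A toy illustration of how a system nonsynchronous penetration (SNSP) limit constrains dispatch, sketched as a single-period linear program with PuLP. The unit data, demand figure, and the 70% limit are illustrative assumptions; the paper's model is a full mixed integer day-ahead and real-time scheduling tool with many more SOCs.

```python
import pulp

demand = 4000.0  # MW (illustrative)
units = {  # name: (marginal cost EUR/MWh, max output MW, synchronous?)
    "coal": (45.0, 1500.0, True),
    "ccgt": (60.0, 2500.0, True),
    "wind": (0.0, 3000.0, False),
}
snsp_limit = 0.70  # max share of demand served by nonsynchronous generation

prob = pulp.LpProblem("dispatch", pulp.LpMinimize)
p = {u: pulp.LpVariable(u, 0, cap) for u, (_, cap, _) in units.items()}

prob += pulp.lpSum(cost * p[u] for u, (cost, _, _) in units.items())
prob += pulp.lpSum(p.values()) == demand
prob += pulp.lpSum(p[u] for u, (_, _, sync) in units.items() if not sync) \
        <= snsp_limit * demand

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for u in units:
    print(u, p[u].value())  # wind is curtailed to 2800 MW by the SNSP constraint
```

Relaxing snsp_limit in this sketch immediately reduces both curtailment and cost, which is the mechanism behind the savings the paper quantifies at system scale.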

Relevance:

10.00%

Publisher:

Abstract:

The cost of electricity, a major operating cost of municipal wastewater treatment plants, is related to influent flow rate, power price, and power load. With knowledge of inflow and price patterns, plant operators can manage processes to reduce electricity costs. Records of influent flow, power price, and load are evaluated for the Blue Plains Advanced Wastewater Treatment Plant. Diurnal and seasonal trends are analyzed. Power usage is broken down among treatment processes. A simulation model of influent pumping, a large power user, is developed. It predicts pump discharge and power usage based on wet-well level. Individual pump characteristics are tested in the plant. The model accurately simulates plant inflow and power use for two pumping stations (R² = 0.68 and 0.93 for inflow; R² = 0.94 and 0.91 for power). The wet-well stage-storage relationship is estimated from data. Time-varying wet-well level is added to the model. A synthetic example demonstrates application in managing pumps to reduce electricity cost.
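A minimal sketch of the kind of pump model described, predicting discharge and power from wet-well level via a linearised pump curve and a hydraulic power balance. The curve coefficients, static lift, and efficiency are hypothetical placeholders, not Blue Plains data.

```python
RHO_G = 9810.0  # specific weight of water, N/m^3

def pump_operating_point(wet_well_level_m, a=2.0, b=0.04, static_lift_m=20.0,
                         efficiency=0.75):
    """Return (discharge m3/s, power kW) for one pump at a given wet-well level.

    The net head falls as the wet well rises; discharge follows a linear
    approximation of the pump curve, Q = a - b * H (hypothetical coefficients).
    """
    head = static_lift_m - wet_well_level_m      # net lift, m
    discharge = max(0.0, a - b * head)           # simplified pump curve, m3/s
    power_kw = RHO_G * discharge * head / efficiency / 1000.0
    return discharge, power_kw

for level in (0.5, 1.5, 3.0):
    q, p = pump_operating_point(level)
    print(f"level {level} m -> Q = {q:.2f} m3/s, P = {p:.0f} kW")
```

Coupling such a relation to a stage-storage curve and a power price series is what lets the model trade wet-well level against electricity cost over the day.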

Relevance:

10.00%

Publisher:

Abstract:

To maintain a strict balance between demand and supply in US power systems, the Independent System Operators (ISOs) schedule power plants and determine electricity prices using a market clearing model. This model determines, for each time period and power plant, startup and shutdown times, the amount of power produced, the provisioning of spinning and non-spinning power generation reserves, etc. Such a deterministic optimization model takes as input the characteristics of all generating units, such as their installed power generation capacity, ramp rates, minimum up- and down-time requirements, and marginal production costs, as well as forecasts of intermittent energy such as wind and solar, along with the minimum reserve requirement of the whole system. This reserve requirement is determined based on the likelihood of outages on the supply side and on the forecast error levels in demand and intermittent generation. With increased installed capacity of intermittent renewable energy, determining the appropriate level of reserve requirements has become harder. Stochastic market clearing models have been proposed as an alternative to deterministic market clearing models. Rather than taking fixed reserve targets as an input, stochastic market clearing models consider different scenarios of wind power and determine the reserve schedule as an output. Using a scaled version of the power generation system of PJM, a regional transmission organization (RTO) that coordinates the movement of wholesale electricity in all or parts of 13 states and the District of Columbia, and wind scenarios generated from BPA (Bonneville Power Administration) data, this paper compares the performance of stochastic and deterministic models in market clearing. The two models are compared in their ability to contribute to the affordability, reliability, and sustainability of the electricity system, measured in terms of total operational costs, load shedding, and air emissions. The process of building and testing the models indicates that a fair comparison is difficult to obtain, owing to the multi-dimensional performance metrics considered here and the difficulty of setting up the model parameters in a way that does not advantage or disadvantage either modelling framework. Along these lines, this study explores the effect that model assumptions such as reserve requirements, value of lost load (VOLL), and wind spillage costs have on the comparison of the performance of stochastic vs. deterministic market clearing models.
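A compact sketch of the structural difference discussed: a two-stage stochastic clearing in which the thermal schedule is fixed before wind is revealed, and load shedding is priced at VOLL in each scenario. The scenario data, costs, and VOLL are illustrative assumptions, not the PJM/BPA case study values.

```python
import pulp

demand, voll = 1000.0, 3000.0                            # MW, EUR/MWh
wind_scen = {"low": 100.0, "mid": 300.0, "high": 500.0}  # MW per scenario
prob_w = {"low": 0.25, "mid": 0.5, "high": 0.25}
thermal_cost, thermal_cap = 50.0, 900.0

m = pulp.LpProblem("stochastic_clearing", pulp.LpMinimize)
g = pulp.LpVariable("thermal", 0, thermal_cap)           # first-stage decision
shed = {s: pulp.LpVariable(f"shed_{s}", 0) for s in wind_scen}
spill = {s: pulp.LpVariable(f"spill_{s}", 0) for s in wind_scen}

# Minimise thermal cost plus the probability-weighted cost of lost load.
m += thermal_cost * g + pulp.lpSum(prob_w[s] * voll * shed[s] for s in wind_scen)
for s, w in wind_scen.items():        # the balance must hold in every scenario
    m += g + w - spill[s] + shed[s] == demand

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("thermal schedule:", g.value())
for s in wind_scen:
    print(s, "shed:", shed[s].value(), "spill:", spill[s].value())
```

Note how the implicit reserve emerges from the scenarios rather than being a fixed input, and how the chosen VOLL and any spillage cost directly shape the solution, which is why these assumptions complicate a fair stochastic-vs-deterministic comparison.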

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a knowledge-based approach is proposed for the management of temporal information in process control. A common-sense theory of temporal constraints over processes/events, allowing relative temporal knowledge, is employed here as the temporal basis for the system. This theory supports duration reasoning and consistency checking, and accepts relative temporal knowledge which is in a form normally used by human operators. An architecture for process control is proposed which centres on an historical database consisting of events and processes, together with the qualitative temporal relationships between their occurrences. The dynamics of the system is expressed by means of three types of rule: database updating rules, process control rules, and data deletion rules. An example is provided in the form of a life scheduler, to illustrate the database and the rule sets. The example demonstrates the transitions of the database over time, and identifies the procedure in terms of a state transition model for the application. The dividing instant problem for logical inference is discussed with reference to this process control example, and it is shown how the temporal theory employed can be used to deal with the problem.
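A minimal Python sketch of the proposed architecture's three rule types acting on a historical database of occurrences with qualitative temporal relations. The event names, relation vocabulary, and rules themselves are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Occurrence:
    name: str
    relations: dict = field(default_factory=dict)  # e.g. {"pump_on": "after"}

database: list[Occurrence] = []

def update_rule(event_name, relations):
    """Database updating rule: record a new occurrence and its relations."""
    database.append(Occurrence(event_name, relations))

def control_rule():
    """Process control rule: act if 'overheat' is known to occur after 'pump_on'."""
    for occ in database:
        if occ.name == "overheat" and occ.relations.get("pump_on") == "after":
            return "shut_down_heater"
    return None

def deletion_rule(max_size=100):
    """Data deletion rule: discard the oldest occurrences beyond a window."""
    del database[:-max_size]

update_rule("pump_on", {})
update_rule("overheat", {"pump_on": "after"})
deletion_rule()
print(control_rule())  # -> shut_down_heater
```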

Relevance:

10.00%

Publisher:

Abstract:

Logic-based models are thriving within artificial intelligence. A great number of new logics have been defined, and their theory investigated. Epistemic logics introduce modal operators for knowledge or belief; deontic logics are about norms, and introduce operators of deontic necessity and possibility (i.e., obligation or prohibition). And then there is a much-investigated class, temporal logics, to whose application to engineering this special issue is devoted. This kind of formalism deserves wider recognition and application in engineering, a domain where other kinds of temporal models (e.g., Petri nets) are by now a fairly standard part of the modelling toolbox.
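As a concrete instance of the temporal operators mentioned, here is a standard linear temporal logic responsiveness property (an illustrative textbook example, not drawn from the issue's papers):

```latex
\[
\mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\,\mathit{grant})
\]
```

Reading G as "always" and F as "eventually", the formula asserts that every request is eventually granted, the kind of requirement that is natural in temporal logic and awkward to state in untimed formalisms.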

Relevance:

10.00%

Publisher:

Abstract:

There are three main approaches to the representation of temporal information in the AI literature: the so-called method of temporal arguments, which simply extends the functions and predicates of a first-order language to include time as an additional argument; modal temporal logics, which are extensions of the propositional or predicate calculus with modal temporal operators; and reified temporal logics, which reify standard propositions of some initial language (e.g., classical first-order or modal logic) as objects denoting propositional terms. The objective of this paper is to provide an overview of the temporal reified approach by looking closely at some representative existing systems featuring reified propositions, including those of Allen, McDermott, Shoham, Reichgelt, Galton, and Ma and Knight. We shall demonstrate that, although reified logics might be more complicated for expressing assertions about given objects with respect to different times, they accord a special status to time and therefore have several distinct advantages in addressing important issues that would be difficult (if not impossible) to express in other approaches.
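Schematic renderings of the three approaches applied to a single assertion ("valve v1 is open at time t"); these are generic forms for illustration, not the exact syntax of any of the cited systems:

```latex
\begin{align*}
&\text{Temporal arguments:}   && \mathit{Open}(v_1,\, t) \\
&\text{Modal temporal logic:} && \Diamond\, \mathit{Open}(v_1)
    \quad\text{(``at some time, $v_1$ is open'')} \\
&\text{Reified logic:}        && \mathit{HOLDS}(\mathit{open}(v_1),\, t)
\end{align*}
```

In the reified form the proposition itself is a term, so time is a first-class object one can quantify over and relate, which is the special status the paper argues for.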

Relevance:

10.00%

Publisher:

Abstract:

This paper describes new crossover operators and mutation strategies for the FUELGEN system, a genetic algorithm which designs fuel loading patterns for nuclear power reactors. The new components are applications of new ideas from recent research in genetic algorithms. They are designed to improve the performance of FUELGEN by using information in the problem as yet not made explicit in the genetic algorithm's representation. The paper introduces new developments in genetic algorithm design and explains how they motivate the proposed new components.
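As a generic illustration of the kind of operator the paper designs: when loading patterns are encoded as permutations of fuel assemblies, crossover must preserve permutation validity, as the textbook order crossover (OX) below does. This is a standard operator sketched for context, not FUELGEN's actual crossover.

```python
import random

def order_crossover(parent_a, parent_b, rng=random):
    """Copy a slice from parent_a, fill the remaining genes in parent_b's order."""
    n = len(parent_a)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = parent_a[i:j]                           # inherited slice
    remaining = [g for g in parent_b if g not in child]  # keep b's ordering
    for k in range(n):
        if child[k] is None:
            child[k] = remaining.pop(0)
    return child

rng = random.Random(7)
a = list(range(8))           # assembly positions 0..7
b = rng.sample(a, len(a))
print(order_crossover(a, b, rng))  # a valid permutation mixing both parents
```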

Relevance:

10.00%

Publisher:

Abstract:

Very Large Transport Aircraft (VLTA) pose considerable challenges to designers, operators and certification authorities. Questions concerning seating arrangement, nature and design of recreational space, the number, design and location of internal staircases, the number of cabin crew required and the nature of the cabin crew emergency procedures are just some of the issues that need to be addressed. Other more radical concepts such as blended wing body (BWB) design, involving one or two decks with possibly four or more aisles offer even greater challenges. Can the largest exits currently available cope with passenger flow arising from four or five aisles? Do we need to consider new concepts in exit design? Should the main aisles be made wider to accommodate more passengers? In this paper we demonstrate how computer based evacuation models can be used to investigate these issues through examination of staircase evacuation procedures for VLTA and aisle/exit configuration for BWB cabin layouts.

Relevance:

10.00%

Publisher:

Abstract:

Computer-based mathematical models describing the aircraft evacuation process have a vital role to play in the design and development of safer aircraft, the implementation of safer and more rigorous certification criteria, cabin crew training, and post-mortem accident investigation. As the risk of personal injury and the costs involved in performing full-scale certification trials are high, the development and use of these evacuation modelling tools are essential. Furthermore, evacuation models provide insight into the evacuation process that is impossible to derive from a single certification trial. The airEXODUS evacuation model has been under development since 1989 with support from the UK CAA and the aviation industry. In addition to describing the capabilities of the airEXODUS evacuation model, this paper describes the findings of a recent CAA project aimed at investigating model accuracy in predicting past certification trials. Furthermore, airEXODUS is used to examine issues related to Blended Wing Body (BWB) and Very Large Transport Aircraft (VLTA). These radical new aircraft concepts pose considerable challenges to designers, operators and certification authorities. BWB concepts involving one or two decks with possibly four or more aisles offer even greater challenges. Can the largest exits currently available cope with passenger flow arising from four or five aisles? Do we need to consider new concepts in exit design? Should the main aisle be made wider to accommodate more passengers? In this paper we discuss various evacuation issues associated with VLTA and BWB aircraft, and demonstrate how computer-based evacuation models can be used to investigate these issues through examination of aisle/exit configurations for BWB cabin layouts.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a coupled mechanical-acoustic system of equations is solved to determine the relationship between emitted sound and damage mechanisms in paper under controlled stress conditions. The simple classical expression relating the frequency of a plucked string to its material properties is used to generate a numerical representation of the microscopic structure of the paper, and the resulting numerical model is then used to simulate the vibration of a range of simple fibre structures undergoing two distinct types of damage mechanism: (a) fibre/fibre bond failure, and (b) fibre failure. The numerical results are analysed to determine whether there is any detectable systematic difference between the resulting acoustic emissions of the two damage processes. Fourier techniques are then used to compare the computed results against experimental measurements. Distinct frequency components identifying each type of damage are shown to exist, and in this respect theory and experiment show good correspondence. Hence it is shown that, although the mathematical model represents a grossly simplified view of the complex structure of paper, it nevertheless provides a good understanding of the underlying micro-mechanisms characterising its properties as a stress-resisting structure. Use of the model and accompanying software will enable operators to identify approaching failure conditions in the continuous production of paper from emitted sound signals and take preventative action.
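For reference, the classical plucked-string relation mentioned, stated under the usual ideal-string assumptions (standard physics, not a detail taken from the paper's model):

```latex
\[
f_n = \frac{n}{2L}\sqrt{\frac{T}{\mu}}, \qquad n = 1, 2, \dots
\]
```

Here L is the vibrating length, T the tension, and μ the linear mass density, so longer, slacker, or heavier fibres ring at lower frequencies, which is why bond failures and fibre failures can leave distinct signatures in the emitted spectrum.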