997 results for Traffic Breakdown


Relevance:

20.00%

Publisher:

Abstract:

A LIBS setup was built at the Institute of Modern Physics. In our experiments, LIBS spectra produced by the infrared radiation of an Nd:YAG nanosecond laser with pulse energies of 100 and 150 mJ, respectively, were measured with a fiber-optic spectrometer in the ranges of 230-430 nm and 430-1080 nm, with a delay time of 1.7 and a gate width of 2 ms, for potato and lily samples prepared by the vacuum freeze-drying technique. Lines from different metal elements such as K, Ca, Na, Mg, Fe, Al, Mn, and Ti, from nonmetal elements such as C, N, O, and H, and some molecular spectra from C2, CaO, and CN were identified according to their wavelengths. The relative contents of six microelements (Ca, Na, K, Fe, Al, and Mg) in the samples were analyzed according to their representative line intensities. By comparison we found that the relative contents of Ca and Na are higher in the lily samples, while the relative content of Mg is higher in the potato samples. The experimental results show that the LIBS technique is a fast and effective means of measuring and comparing the contents of microelements in plant samples.
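
As a rough illustration of the comparison step described above, the sketch below ranks relative element content from the intensities of representative lines. The intensity values are hypothetical placeholders, not the measured data from this experiment.

    # Hypothetical background-corrected intensities of one representative line
    # per element, averaged over repeated laser shots (arbitrary units).
    intensity = {
        "potato": {"Ca": 1.2e4, "Na": 3.1e3, "K": 4.5e4, "Fe": 2.0e3, "Al": 1.1e3, "Mg": 9.8e3},
        "lily":   {"Ca": 2.9e4, "Na": 7.4e3, "K": 4.1e4, "Fe": 1.8e3, "Al": 1.0e3, "Mg": 4.2e3},
    }

    # Normalize each spectrum by its total line intensity so the two samples
    # can be compared on a relative (not absolute) basis.
    for sample, spec in intensity.items():
        total = sum(spec.values())
        rel = {element: value / total for element, value in spec.items()}
        print(sample, {element: round(r, 3) for element, r in rel.items()})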

Relevance:

20.00%

Publisher:

Abstract:

The RFQ cooler and buncher RFQ1L is one of the key components of the super-heavy nuclide research spectrometer currently under construction. In order to understand the high-voltage breakdown phenomenon, the voltages between electrodes have been measured. In addition, more extensive simulations have been performed to better understand and optimize the RFQ1L working points.

Relevance:

20.00%

Publisher:

Abstract:

A laser-induced plasma was produced by ablating a standard wrought aluminum alloy sample in air with the 532 nm beam of an Nd:YAG laser. The measured spectra in the 230-440 nm wavelength range were line-identified, and the sample composition was quantitatively analyzed with the calibration-free method to determine the elemental contents of the sample. The results are in good agreement with the standard values.

Relevance:

20.00%

Publisher:

Abstract:

A laser-induced breakdown plasma is produced by a Q-switched Nd:YAG laser operating at 532 nm, which interacts with an Al alloy sample target in air. The spectral lines in the 230-440 nm wavelength range have been identified, and based on the calibration-free method the mass concentrations of the elements in the Al alloy are obtained, which are in good agreement with the standard values of the sample.
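
For context, the calibration-free approach referred to here is usually formulated in terms of Boltzmann-distributed line intensities plus a closure relation over all detected species; the following is a generic sketch of that formulation, not necessarily the exact expressions used in this work.

    \[
      \bar{I}^{\,ki}_{\lambda} \;=\; F\, C_s\, \frac{A_{ki}\, g_k}{U_s(T)}\, e^{-E_k/(k_B T)},
      \qquad
      \sum_s C_s = 1,
    \]
    where $\bar{I}^{\,ki}_{\lambda}$ is the measured integrated line intensity, $C_s$ the concentration of the emitting species $s$, $A_{ki}$ and $g_k$ the transition probability and upper-level degeneracy, $U_s(T)$ the partition function, $E_k$ the upper-level energy, $T$ the plasma temperature, and $F$ an experimental factor fixed by the normalization condition.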

Relevance:

20.00%

Publisher:

Abstract:

In this paper, the capabilities of laser-induced breakdown spectroscopy (LIBS) for rapid analysis of a multi-component plant sample are illustrated using a 1064 nm laser focused onto the surface of folium lycii. Based on the assumption of a homogeneous plasma, nine essential micronutrients in folium lycii are identified. Using the Saha equation and the Boltzmann plot method, the electron density and plasma temperature are obtained, and the relative concentrations of Ca, Mg, Al, Si, Ti, Na, K, Li, and Sr are determined with a semi-quantitative method.
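
As a reminder of the two relations named in this abstract, the Boltzmann plot extracts the plasma temperature from the slope of a line-intensity plot, and the Saha equation links the populations of successive ionization stages to the electron density. The generic forms are sketched below; they are standard textbook expressions, not quoted from the paper.

    \[
      \ln\!\left(\frac{I_{ki}\,\lambda_{ki}}{g_k\,A_{ki}}\right)
        = -\frac{E_k}{k_B T} + \text{const},
      \qquad
      \frac{n_e\, N_{\mathrm{II}}}{N_{\mathrm{I}}}
        = \frac{2\,U_{\mathrm{II}}(T)}{U_{\mathrm{I}}(T)}
          \left(\frac{2\pi m_e k_B T}{h^2}\right)^{3/2}
          e^{-E_{\mathrm{ion}}/(k_B T)},
    \]
    so a linear fit of $\ln(I\lambda/gA)$ against the upper-level energy $E_k$ yields $T$, and the intensity ratio of lines from neutral and singly ionized species, combined with the Saha relation, yields the electron density $n_e$.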

Relevance:

20.00%

Publisher:

Abstract:

Laser-induced breakdown spectroscopy (LIBS), as a powerful analytical technique, is applied to analyze trace elements in fresh plant samples. We investigate the LIBS spectra of fresh holly leaves and observe more than 430 lines emitted from 25 elements and molecules in the 230-438 nm region. The influence of the laser wavelength on LIBS-based semi-quantitative analysis of trace-element contents in plant samples is studied. The results show that the UV laser yields lower relative standard deviations and better repeatability for semi-quantitative analysis of trace-element contents in plant samples. This work may be helpful for improving the quantitative analysis power of LIBS applied to plant samples.
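
The repeatability comparison mentioned here is typically quantified with the relative standard deviation (RSD) of a line intensity over repeated shots. A minimal sketch follows; the intensity arrays are made-up stand-ins for the UV- and IR-excited measurements, not data from the study.

    import numpy as np

    def rsd(intensities):
        """Relative standard deviation (%) of repeated line-intensity measurements."""
        x = np.asarray(intensities, dtype=float)
        return 100.0 * x.std(ddof=1) / x.mean()

    # Hypothetical repeated measurements of one emission line under UV and IR excitation.
    uv_shots = [1020, 995, 1010, 1003, 990]
    ir_shots = [980, 1100, 930, 1060, 890]
    print(f"UV RSD: {rsd(uv_shots):.1f}%   IR RSD: {rsd(ir_shots):.1f}%")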

Relevance:

20.00%

Publisher:

Abstract:

National Chiao Tung University, Department of Computer Science

Relevance:

20.00%

Publisher:

Abstract:

Accurate knowledge of traffic demands in a communication network enables or enhances a variety of traffic engineering and network management tasks of paramount importance for operational networks. Directly measuring a complete set of these demands is prohibitively expensive because of the huge amounts of data that must be collected and the performance impact that such measurements would impose on the regular behavior of the network. As a consequence, we must rely on statistical techniques to produce estimates of actual traffic demands from partial information. The performance of such techniques is, however, limited by their reliance on partial information and by the heavy computation they require, which constrains their convergence behavior. In this paper we study strategies to improve the convergence of a powerful statistical technique based on an Expectation-Maximization (EM) iterative algorithm. First, we analyze modeling approaches to generating starting points. We call these starting points informed priors, since they are obtained using actual network information such as packet traces and SNMP link counts. Second, we provide a very fast variant of the EM algorithm which extends its computation range, increasing its accuracy and decreasing its dependence on the quality of the starting point. Finally, we study the convergence characteristics of our EM algorithm and compare it against a recently proposed Weighted Least Squares approach.
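
To make the setting concrete: link counts y relate to the unobserved origin-destination demands x through the routing matrix A (y = A x), and an EM iteration under a Poisson traffic model reduces to a multiplicative update started from a prior. The sketch below shows that generic update; it is a simplified illustration under these assumptions, not the authors' exact algorithm or data.

    import numpy as np

    def em_traffic_estimate(A, y, x0, iters=200):
        """Generic multiplicative EM update for a Poisson model y ~ Poisson(A x).

        A  : (links x OD-pairs) 0/1 routing matrix
        y  : observed link byte counts
        x0 : starting point, e.g. an informed prior built from packet traces
        """
        x = np.asarray(x0, dtype=float).copy()
        col_sums = A.sum(axis=0)                  # number of links each OD pair traverses
        for _ in range(iters):
            y_hat = A @ x                         # predicted link loads
            ratio = y / np.maximum(y_hat, 1e-12)  # observed / predicted, per link
            x *= (A.T @ ratio) / np.maximum(col_sums, 1e-12)
        return x

    # Tiny example: 3 links, 2 OD pairs sharing one link.
    A = np.array([[1, 0], [0, 1], [1, 1]], dtype=float)
    x_true = np.array([40.0, 60.0])
    y = A @ x_true
    print(em_traffic_estimate(A, y, x0=np.array([50.0, 50.0])))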

Relevance:

20.00%

Publisher:

Abstract:

Network traffic arises from the superposition of Origin-Destination (OD) flows. Hence, a thorough understanding of OD flows is essential for modeling network traffic, and for addressing a wide variety of problems including traffic engineering, traffic matrix estimation, capacity planning, forecasting and anomaly detection. However, to date, OD flows have not been closely studied, and there is very little known about their properties. We present the first analysis of complete sets of OD flow timeseries, taken from two different backbone networks (Abilene and Sprint-Europe). Using Principal Component Analysis (PCA), we find that the set of OD flows has small intrinsic dimension. In fact, even in a network with over a hundred OD flows, these flows can be accurately modeled in time using a small number (10 or less) of independent components or dimensions. We also show how to use PCA to systematically decompose the structure of OD flow timeseries into three main constituents: common periodic trends, short-lived bursts, and noise. We provide insight into how the various constituents contribute to the overall structure of OD flows and explore the extent to which this decomposition varies over time.
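 
A minimal version of the PCA step described here can be written as an SVD of the (time x OD-flow) matrix: the singular value spectrum reveals the intrinsic dimension, and reconstruction from the top components separates common trends from bursts and noise. The sketch below uses synthetic data in place of the Abilene and Sprint-Europe measurements.

    import numpy as np

    rng = np.random.default_rng(0)
    T, F, K = 1008, 120, 5                      # timesteps, OD flows, latent dimensions
    latent = rng.normal(size=(T, K))            # stand-in for shared diurnal trends
    mixing = rng.normal(size=(K, F))
    X = latent @ mixing + 0.1 * rng.normal(size=(T, F))   # synthetic OD-flow timeseries

    Xc = X - X.mean(axis=0)                     # center each flow
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    print("fraction of variance in top 10 components:", round(energy[9], 4))

    k = 10
    X_trend = (U[:, :k] * s[:k]) @ Vt[:k] + X.mean(axis=0)   # low-dimensional common structure
    residual = X - X_trend                                    # short-lived bursts plus noise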

Relevance:

20.00%

Publisher:

Abstract:

Internet Traffic Managers (ITMs) are special machines deployed at strategic points in the Internet. itmBench is an interface that allows users (e.g., network managers, service providers, or experimental researchers) to register different traffic control functionalities to run on one ITM or an overlay of ITMs. Thus itmBench offers a tool that is extensible and powerful yet easy to maintain. ITM traffic control applications can be developed either using a kernel API, so they run in kernel space, or using a user-space API, so they run in user space. We demonstrate the flexibility of itmBench by showing the implementation of both a kernel module that provides a differentiated network service and a user-space module that provides an overlay routing service. Our Linux-based itmBench prototype is free software and can be obtained from http://www.cs.bu.edu/groups/itm/.

Relevance:

20.00%

Publisher:

Abstract:

Anomalies are unusual and significant changes in a network's traffic levels, which can often involve multiple links. Diagnosing anomalies is critical for both network operators and end users. It is a difficult problem because one must extract and interpret anomalous patterns from large amounts of high-dimensional, noisy data. In this paper we propose a general method to diagnose anomalies. This method is based on a separation of the high-dimensional space occupied by a set of network traffic measurements into disjoint subspaces corresponding to normal and anomalous network conditions. We show that this separation can be performed effectively using Principal Component Analysis. Using only simple traffic measurements from links, we study volume anomalies and show that the method can: (1) accurately detect when a volume anomaly is occurring; (2) correctly identify the underlying origin-destination (OD) flow which is the source of the anomaly; and (3) accurately estimate the amount of traffic involved in the anomalous OD flow. We evaluate the method's ability to diagnose (i.e., detect, identify, and quantify) both existing and synthetically injected volume anomalies in real traffic from two backbone networks. Our method consistently diagnoses the largest volume anomalies, and does so with a very low false alarm rate.
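
A compact rendering of the subspace separation described above: fit PCA to the link-traffic matrix, treat the top-k principal axes as the normal subspace, and flag timesteps whose residual energy (squared prediction error) is large. The synthetic data, the choice of k, and the toy anomaly injection below are placeholders, not those used in the paper.

    import numpy as np

    def subspace_anomaly_scores(Y, k):
        """Y: (timesteps x links) traffic matrix; k: dimension of the normal subspace."""
        Yc = Y - Y.mean(axis=0)
        _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
        P = Vt[:k].T @ Vt[:k]                  # projector onto the normal subspace
        residual = Yc - Yc @ P                 # component in the anomalous subspace
        return np.sum(residual**2, axis=1)     # squared prediction error per timestep

    rng = np.random.default_rng(1)
    base = 10.0 * rng.normal(size=(500, 3)) @ rng.normal(size=(3, 40))   # low-dimensional "normal" traffic
    Y = base + rng.normal(size=(500, 40))
    Y[250, :8] += 40.0                          # toy volume anomaly on a few links
    spe = subspace_anomaly_scores(Y, k=3)
    print("most anomalous timestep:", int(np.argmax(spe)))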

Relevance:

20.00%

Publisher:

Abstract:

Accurate knowledge of traffic demands in a communication network enables or enhances a variety of traffic engineering and network management tasks of paramount importance for operational networks. Directly measuring a complete set of these demands is prohibitively expensive because of the huge amounts of data that must be collected and the performance impact that such measurements would impose on the regular behavior of the network. As a consequence, we must rely on statistical techniques to produce estimates of actual traffic demands from partial information. The performance of such techniques is, however, limited by their reliance on partial information and by the heavy computation they require, which constrains their convergence behavior. In this paper we study a two-step approach for inferring network traffic demands. First, we elaborate and evaluate a modeling approach for generating good starting points to be fed to iterative statistical inference techniques. We call these starting points informed priors, since they are obtained using actual network information such as packet traces and SNMP link counts. Second, we provide a very fast variant of the EM algorithm which extends its computation range, increasing its accuracy and decreasing its dependence on the quality of the starting point. Finally, we evaluate and compare alternative mechanisms for generating starting points and the convergence characteristics of our EM algorithm against a recently proposed Weighted Least Squares approach.
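
For comparison with the EM sketch earlier in this listing, a least-squares estimate of the same demands can be obtained in closed form by regularizing toward the informed prior. The weighting below is an illustrative choice, not necessarily the one used in the cited Weighted Least Squares approach.

    import numpy as np

    def wls_traffic_estimate(A, y, prior, weight):
        """Least squares fit of OD demands x to link counts y = A x,
        pulled toward an informed prior; `weight` trades off the two terms."""
        n = A.shape[1]
        # Stack the link-count equations and the prior equations, then solve jointly.
        A_aug = np.vstack([A, np.sqrt(weight) * np.eye(n)])
        y_aug = np.concatenate([y, np.sqrt(weight) * prior])
        x, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
        return np.maximum(x, 0.0)               # traffic demands cannot be negative

    A = np.array([[1, 0], [0, 1], [1, 1]], dtype=float)
    y = np.array([40.0, 60.0, 100.0])
    print(wls_traffic_estimate(A, y, prior=np.array([50.0, 50.0]), weight=0.1))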

Relevance:

20.00%

Publisher:

Abstract:

Detecting and understanding anomalies in IP networks is an open and ill-defined problem. Toward this end, we have recently proposed the subspace method for anomaly diagnosis. In this paper we present the first large-scale exploration of the power of the subspace method when applied to flow traffic. An important aspect of this approach is that it fuses information from flow measurements taken throughout a network. We apply the subspace method to three different types of sampled flow traffic in a large academic network: multivariate timeseries of byte counts, packet counts, and IP-flow counts. We show that each traffic type brings into focus a different set of anomalies via the subspace method. We illustrate and classify the set of anomalies detected. We find that almost all of the anomalies detected represent events of interest to network operators. Furthermore, the anomalies span a remarkably wide spectrum of event types, including denial of service attacks (single-source and distributed), flash crowds, port scanning, downstream traffic engineering, high-rate flows, worm propagation, and network outage.

Relevance:

20.00%

Publisher:

Abstract:

Recently the notion of self-similarity has been shown to apply to wide-area and local-area network traffic. In this paper we examine the mechanisms that give rise to self-similar network traffic. We present an explanation for traffic self-similarity by using a particular subset of wide area traffic: traffic due to the World Wide Web (WWW). Using an extensive set of traces of actual user executions of NCSA Mosaic, reflecting over half a million requests for WWW documents, we show evidence that WWW traffic is self-similar. Then we show that the self-similarity in such traffic can be explained based on the underlying distributions of WWW document sizes, the effects of caching and user preference in file transfer, the effect of user "think time", and the superimposition of many such transfers in a local area network. To do this we rely on empirically measured distributions both from our traces and from data independently collected at over thirty WWW sites.
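
A standard way to check the kind of self-similarity discussed here is the aggregated-variance method: for a self-similar series, the variance of the m-aggregated series decays as m^(2H-2), so the slope of a log-log fit estimates the Hurst parameter H. A minimal sketch on synthetic data follows; in practice the measured WWW traffic traces would be used in place of the random series.

    import numpy as np

    def hurst_aggregated_variance(x, block_sizes):
        """Estimate H from the scaling Var(X^(m)) ~ m^(2H - 2)."""
        x = np.asarray(x, dtype=float)
        variances = []
        for m in block_sizes:
            n_blocks = len(x) // m
            blocks = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
            variances.append(blocks.var(ddof=1))
        slope, _ = np.polyfit(np.log(block_sizes), np.log(variances), 1)
        return 1.0 + slope / 2.0

    # Independent noise gives H near 0.5; long-range-dependent traffic gives H between 0.5 and 1.
    rng = np.random.default_rng(2)
    print(hurst_aggregated_variance(rng.normal(size=100_000),
                                    block_sizes=[1, 2, 4, 8, 16, 32, 64, 128]))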

Relevance:

20.00%

Publisher:

Abstract:

Recent measurements of local-area and wide-area traffic have shown that network traffic exhibits variability at a wide range of scales: self-similarity. In this paper, we examine a mechanism that gives rise to self-similar network traffic and present some of its performance implications. The mechanism we study is the transfer of files or messages whose sizes are drawn from a heavy-tailed distribution. We examine its effects through detailed transport-level simulations of multiple TCP streams in an internetwork. First, we show that in a "realistic" client/server network environment (i.e., one with bounded resources and coupling among traffic sources competing for resources), the degree to which file sizes are heavy-tailed can directly determine the degree of traffic self-similarity at the link level. We show that this causal relationship is not significantly affected by changes in network resources (bottleneck bandwidth and buffer capacity), network topology, the influence of cross-traffic, or the distribution of interarrival times. Second, we show that properties of the transport layer play an important role in preserving and modulating this relationship. In particular, the reliable transmission and flow control mechanisms of TCP (Reno, Tahoe, or Vegas) serve to maintain the long-range dependence structure induced by heavy-tailed file size distributions. In contrast, if a non-flow-controlled and unreliable (UDP-based) transport protocol is used, the resulting traffic shows little self-similarity: although still bursty at short time scales, it has little long-range dependence. If flow-controlled, unreliable transport is employed, the degree of traffic self-similarity is positively correlated with the degree of throttling at the source. Third, in exploring the relationship between file sizes, transport protocols, and self-similarity, we are also able to show some of the performance implications of self-similarity. We present data on the relationship between traffic self-similarity and network performance, as captured by performance measures including packet loss rate, retransmission rate, and queueing delay. Increased self-similarity, as expected, results in degraded performance. Queueing delay, in particular, exhibits a drastic increase with increasing self-similarity. Throughput-related measures such as packet loss and retransmission rate, however, increase only gradually with increasing traffic self-similarity as long as a reliable, flow-controlled transport protocol is used.
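
The quantitative link invoked in this line of work between heavy-tailed transfer sizes and link-level self-similarity is usually stated via the ON/OFF superposition result: if the ON-period (file size) distribution is heavy-tailed with tail index alpha, then under the usual assumptions the aggregate traffic is asymptotically self-similar with Hurst parameter H = (3 - alpha)/2. A sketch of the statement:

    \[
      P[X > x] \sim c\, x^{-\alpha}, \quad 1 < \alpha < 2
      \;\Longrightarrow\;
      H = \frac{3 - \alpha}{2} \in \left(\tfrac{1}{2}, 1\right),
    \]
    so a heavier file-size tail (smaller $\alpha$) yields a higher degree of self-similarity, consistent with the causal relationship examined above.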