999 results for Traffic Forecasting


Relevance:

20.00%

Publisher:

Abstract:

Web servers are usually located in well-organized data centers, where they connect to the outside Internet directly through backbones. Meanwhile, application-layer distributed denial of service (AL-DDoS) attacks are a critical threat to the Internet, particularly to business web servers. Several methods have been designed to handle AL-DDoS attacks, but most of them cannot be used on heavy backbones. In this paper, we propose a new method to detect AL-DDoS attacks. Our work distinguishes itself from previous methods by considering AL-DDoS attack detection in heavy backbone traffic. In addition, AL-DDoS detection is easily misled by flash crowd traffic. To overcome this problem, the proposed method constructs a Real-time Frequency Vector (RFV) and characterizes the traffic in real time as a set of models. By examining the entropy of AL-DDoS attacks and flash crowds, these models can be used to recognize genuine AL-DDoS attacks. We integrate these detection principles into a modularized defense architecture consisting of a head-end sensor, a detection module, and a traffic filter. With a swift AL-DDoS detection speed, the filter lets legitimate requests through while stopping attack traffic. In the experiments, we use episodes of real traffic from Sina and Taobao to evaluate our AL-DDoS detection method and architecture. Compared with previous methods, the results show that our approach is highly effective in defending against AL-DDoS attacks at backbones. © 2013 Elsevier B.V. All rights reserved.
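
A minimal sketch of the entropy idea mentioned in the abstract, not the paper's RFV construction: it computes the Shannon entropy of a per-source request-frequency vector and flags low-entropy traffic as attack-like. The `request_counts` input, the threshold value, and the decision rule's direction are illustrative assumptions.

```python
# Sketch only: Shannon entropy of per-source request counts as an attack indicator.
import numpy as np

def request_entropy(request_counts):
    """Shannon entropy (bits) of a vector of per-source request counts."""
    counts = np.asarray(request_counts, dtype=float)
    probs = counts / counts.sum()
    probs = probs[probs > 0]                # ignore zero-count sources
    return float(-(probs * np.log2(probs)).sum())

ENTROPY_THRESHOLD = 2.0                     # illustrative cut-off, tuned per deployment

def looks_like_al_ddos(request_counts):
    # Assumption: bot floods repeat a few sources/URLs (low entropy), while
    # flash crowds spread requests more evenly (high entropy).
    return request_entropy(request_counts) < ENTROPY_THRESHOLD

if __name__ == "__main__":
    print(looks_like_al_ddos([5000, 4800, 10, 3]))             # concentrated -> True
    print(looks_like_al_ddos([120, 110, 130, 95, 105, 140]))   # spread out -> False
```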

Relevance:

20.00%

Publisher:

Abstract:

This paper aims at optimally adjusting a set of green times for the traffic lights at a single intersection, with the purpose of minimizing travel delay time and traffic congestion. A neural network (NN) and a fuzzy logic system (FLS) are the two methods applied to develop an intelligent traffic timing controller. For this purpose, an intersection is modeled and simulated as an intelligent agent that learns how to set green times in each cycle based on the traffic information. The training approach and data are the same for both learning methods, and both use a genetic algorithm to tune their parameters during learning. Finally, the performance of the two intelligent learning methods is compared with that of a simple fixed-time method. Simulation results indicate that both intelligent methods significantly reduce the total delay in the network compared to the fixed-time method.
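
A compact sketch of the kind of genetic-algorithm tuning loop described above. The real controllers tune NN/FLS parameters against a traffic simulator; the quadratic delay function, the arrival rates, and all GA settings below are toy placeholders, not the paper's setup.

```python
# Sketch: a genetic algorithm tuning two green times against a toy delay model.
import random

def delay(green_times, arrivals=(20, 35)):
    """Toy per-cycle delay: penalize green times mismatched to arrival volumes."""
    return sum((g - 2.0 * a) ** 2 for g, a in zip(green_times, arrivals))

def genetic_tune(n_params=2, pop_size=30, generations=100,
                 bounds=(10.0, 90.0), mutation_rate=0.2):
    rnd = random.Random(0)
    pop = [[rnd.uniform(*bounds) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=delay)                      # lower delay = fitter
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rnd.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            if rnd.random() < mutation_rate:              # Gaussian mutation
                i = rnd.randrange(n_params)
                child[i] = min(max(child[i] + rnd.gauss(0, 5), bounds[0]), bounds[1])
            children.append(child)
        pop = parents + children
    return min(pop, key=delay)

print(genetic_tune())   # green times approaching (40, 70) for the toy arrivals
```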

Relevance:

20.00%

Publisher:

Abstract:

This paper aims at optimally adjusting a set of green times for the traffic lights at a single intersection, with the purpose of minimizing travel delay time and traffic congestion. A fuzzy logic system (FLS) is the method applied to develop the intelligent traffic timing controller. For this purpose, an intersection is modeled and simulated as an intelligent agent that learns how to set green times in each cycle based on the traffic information. The FLS controller (FLC) uses a genetic algorithm to tune its parameters during the learning phase. Finally, the performance of the intelligent FLC is compared with the performance of an FLC with predefined parameters and three simple fixed-time controllers. Simulation results indicate that the intelligent FLC significantly reduces the total delay in the network compared to the fixed-time controllers and the FLC with manually set parameters.

Relevance:

20.00%

Publisher:

Abstract:

Statistics-based Internet traffic classification using machine learning techniques has attracted extensive research interest lately, because of the increasing ineffectiveness of traditional port-based and payload-based approaches. In particular, unsupervised learning, that is, traffic clustering, is very important in real-life applications, where labeled training data are difficult to obtain and new patterns keep emerging. Although previous studies have applied classic clustering algorithms such as K-Means and EM to the task, the quality of the resultant traffic clusters was far from satisfactory. In order to improve the accuracy of traffic clustering, we propose a constrained clustering scheme that makes decisions by taking background information into account in addition to the observed traffic statistics. Specifically, we make use of equivalence set constraints indicating that particular sets of flows are using the same application-layer protocol, which can be efficiently inferred from packet headers according to background knowledge of TCP/IP networking. We model the observed data and constraints using a Gaussian mixture density and adapt an approximate algorithm for the maximum likelihood estimation of the model parameters. Moreover, we study the effects of unsupervised feature discretization on traffic clustering using a fundamental binning method. A number of real-world Internet traffic traces have been used in our evaluation, and the results show that the proposed approach not only improves the quality of traffic clusters in terms of overall accuracy and per-class metrics, but also speeds up convergence.
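
A simplified sketch of the set-constraint idea, not the authors' constrained EM: a standard Gaussian mixture is fitted to flow features, and then every flow in an equivalence set (flows known from TCP/IP headers to share an application-layer protocol) is assigned to the single jointly most likely component. The hard set-level assignment and the synthetic data are my assumptions.

```python
# Sketch: set-constrained assignment on top of an unconstrained GMM fit.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_with_equivalence_sets(flow_features, equivalence_sets, n_clusters=2, seed=0):
    """equivalence_sets: list of index lists; each list covers flows known to
    share an application-layer protocol (singleton sets are allowed)."""
    gmm = GaussianMixture(n_components=n_clusters, random_state=seed).fit(flow_features)
    log_post = np.log(gmm.predict_proba(flow_features) + 1e-12)
    labels = np.empty(len(flow_features), dtype=int)
    for idx in equivalence_sets:
        # Constrain: every flow in the set gets the jointly most likely component.
        labels[idx] = int(np.argmax(log_post[idx].sum(axis=0)))
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))])
    sets = [list(range(0, 50)), list(range(50, 100))]   # two known-same-protocol sets
    print(cluster_with_equivalence_sets(X, sets, n_clusters=2))
```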

Relevance:

20.00%

Publisher:

Abstract:

In order to alleviate traffic congestion and reduce the complexity of traffic control and management, it is necessary to divide the road network into traffic sub-areas, which is effective for traffic planning. Some researchers have applied the K-Means algorithm to divide traffic sub-areas based on taxi trajectories. However, traditional K-Means algorithms face difficulties in processing large-scale Global Positioning System (GPS) trajectories of taxicabs because of memory, I/O, and computing-performance restrictions. This paper proposes a Parallel Traffic Sub-Areas Division (PTSD) method, based on the Parallel K-Means (PKM) algorithm, which consists of two stages. In the first stage, we cluster traffic sub-areas using the PKM algorithm; in the second stage, we identify the boundaries of the traffic sub-areas on the basis of the clustering result. Using this method, we divide the traffic sub-areas of Beijing from real-world GPS trajectories of taxicabs. The experiment and discussion show that the method is effective in dividing traffic sub-areas.
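
A single-machine sketch of the two-stage idea (cluster, then extract boundaries), not the paper's PKM implementation: MiniBatchKMeans stands in for the scalable parallel clustering stage, and a convex hull per cluster stands in for boundary identification. The uniform point cloud over a Beijing-sized bounding box is a synthetic placeholder for real taxi GPS data.

```python
# Sketch: cluster GPS points into sub-areas, then take each cluster's hull as its boundary.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from scipy.spatial import ConvexHull

def divide_traffic_subareas(gps_points, n_subareas=8, seed=0):
    """gps_points: array of (longitude, latitude) pick-up/drop-off coordinates."""
    km = MiniBatchKMeans(n_clusters=n_subareas, random_state=seed, n_init=10)
    labels = km.fit_predict(gps_points)
    boundaries = {}
    for k in range(n_subareas):
        pts = gps_points[labels == k]
        if len(pts) >= 3:                                   # a hull needs 3+ points
            boundaries[k] = pts[ConvexHull(pts).vertices]   # boundary polygon vertices
    return labels, boundaries

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform([116.2, 39.8], [116.6, 40.1], size=(10000, 2))  # synthetic box
    labels, boundaries = divide_traffic_subareas(pts)
    print({k: len(v) for k, v in boundaries.items()})
```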

Relevance:

20.00%

Publisher:

Abstract:

With the arrival of the big data era, Internet traffic is growing exponentially. A wide variety of applications arise on the Internet, and traffic classification is introduced to help manage these applications for security monitoring and quality-of-service purposes. A large number of Machine Learning (ML) algorithms have been introduced to deal with traffic classification. A significant challenge to classification performance comes from the imbalanced distribution of data in traffic classification systems. In this paper, we propose an Optimised Distance-based Nearest Neighbor (ODNN) approach, which is capable of improving the classification performance on imbalanced traffic data. We analyze the proposed ODNN approach and its performance benefit from both theoretical and empirical perspectives. A large number of experiments were carried out on a real-world traffic dataset. The results show that the performance on “small classes” can be improved significantly, even with only a small amount of training data, while the performance on “large classes” remains stable.
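
A baseline in the same family as the idea above, not the authors' optimised ODNN scheme: a plain distance-weighted k-NN, where inverse-distance weighting lets sparse "small class" neighbors contribute more per vote. The synthetic imbalanced dataset stands in for real traffic features.

```python
# Sketch: distance-weighted k-NN on an imbalanced synthetic "traffic" dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

# 95% "large class" vs 5% "small class" samples.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# weights="distance" gives nearer neighbors more influence than a plain majority vote.
clf = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```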

Relevance:

20.00%

Publisher:

Abstract:

Internet traffic classification is a critical and essential functionality for network management and security systems. Due to the limitations of traditional port-based and payload-based classification approaches, the past several years have seen extensive research on utilizing machine learning techniques to classify Internet traffic based on packet and flow level characteristics. For the purpose of learning from unlabeled traffic data, some classic clustering methods have been applied in previous studies, but the reported accuracy results were unsatisfactory. In this paper, we propose a semi-supervised approach for accurate Internet traffic clustering, which is motivated by the observation of widely existing partial equivalence relationships among Internet traffic flows. In particular, we formulate the problem using a Gaussian Mixture Model (GMM) with set-based equivalence constraints and propose a constrained Expectation Maximization (EM) algorithm for clustering. Experiments with real-world packet traces show that the proposed approach can significantly improve the quality of the resultant traffic clusters. © 2014 Elsevier Inc.

Relevance:

20.00%

Publisher:

Abstract:

The value of accurate weather forecast information is substantial. In this paper we examine competition among forecast providers and its implications for the quality of forecasts. A simple economic model shows that an economic bias, namely geographical inequality in forecast accuracy, arises from the extent of the market. Using unique data on daily high temperature forecasts for 704 U.S. cities, we find that forecast accuracy increases with population and income. Furthermore, the economic bias becomes larger as the day of forecasting approaches the target day, i.e., when people are more concerned about the quality of forecasts. The results hold even after we control for location-specific heterogeneity and the difficulty of forecasting.

Relevance:

20.00%

Publisher:

Abstract:

Urban traffic, one of the most important challenges of modern city life, needs practical, effective, and efficient solutions. Artificial intelligence methods have gained popularity for optimal traffic light control. In this paper, a review of the most important works in the field of traffic signal timing control, in particular studies focusing on Q-learning, neural networks, and fuzzy logic systems, is presented. According to the existing literature, the intelligent methods show higher performance than traditional control methods. However, a study that compares the performance of these different learning methods has not yet been published. In this paper, the aforementioned computational intelligence methods and a fixed-time method are implemented to set signal times and minimize total delay for an isolated intersection. These methods are developed and compared on the same platform. The intersection is treated as an intelligent agent that learns to propose an appropriate green time for each phase. The appropriate green time for each intelligent controller is estimated based on the received traffic information. A comprehensive comparison is made between the performance of the Q-learning, neural network, and fuzzy logic system controllers for two different scenarios. The three intelligent learning controllers show similar performance across multiple replications of the two scenarios. On average, Q-learning performs 66%, the neural network 71%, and the fuzzy logic system 74% better than the fixed-time controller.
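
A minimal tabular Q-learning sketch for the signal-timing setting described above: states are coarsely binned queue lengths, actions are candidate green times, and the reward is the negative number of vehicles left queued. The one-lane queue dynamics, binning, and hyperparameters are toy stand-ins for the traffic simulator used in such studies.

```python
# Sketch: tabular Q-learning choosing a green time from a toy queue model.
import random
from collections import defaultdict

ACTIONS = [20, 35, 50]                 # candidate green times (seconds)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(queue, green, arrivals_per_s=0.5, service_per_s=1.0):
    served = min(queue, int(green * service_per_s))
    queue = queue - served + int(green * arrivals_per_s)
    return queue, -queue               # next queue length, reward

Q = defaultdict(lambda: [0.0] * len(ACTIONS))
rnd = random.Random(0)
queue = 10
for _ in range(5000):
    state = min(queue // 5, 10)        # bin queue length into a discrete state
    a = rnd.randrange(len(ACTIONS)) if rnd.random() < EPSILON \
        else max(range(len(ACTIONS)), key=lambda i: Q[state][i])
    queue, reward = step(queue, ACTIONS[a])
    next_state = min(queue // 5, 10)
    Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])

# Learned green time per queue-length bin.
print({s: ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[i])] for s, q in Q.items()})
```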

Relevance:

20.00%

Publisher:

Abstract:

Traffic congestion on urban roads is one of the biggest challenges of the 21st century. Despite a myriad of research works over the last two decades, network-level optimization of traffic signals is still an open research problem. This paper, for the first time, employs the advanced cuckoo search optimization algorithm for optimally tuning the parameters of intelligent controllers. A neural network (NN) and an adaptive neuro-fuzzy inference system (ANFIS) are the two intelligent controllers implemented in this study. For the sake of comparison, we also implement Q-learning and fixed-time controllers as benchmarks. Comprehensive simulation scenarios are designed and executed for a traffic network composed of nine four-way intersections. The results obtained for a few scenarios demonstrate the optimality of the intelligent controllers trained with the cuckoo search method. The average performance improvements of the NN, ANFIS, and Q-learning controllers over the fixed-time controller are 44%, 39%, and 35%, respectively.
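
A compact cuckoo search sketch (Lévy flights via Mantegna's algorithm) tuning a generic parameter vector against an objective. The sphere function below is a placeholder for the intersection-delay objective, and the bounds and algorithm settings are illustrative assumptions, not the paper's configuration.

```python
# Sketch: cuckoo search with Lévy-flight steps minimizing a toy objective.
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    # Mantegna's algorithm for heavy-tailed Lévy-distributed step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(objective, dim, n_nests=15, iters=200, pa=0.25, alpha=0.01, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fitness = np.array([objective(x) for x in nests])
    best = nests[np.argmin(fitness)].copy()
    for _ in range(iters):
        # Generate new solutions by Lévy flights biased toward the current best.
        for i in range(n_nests):
            candidate = nests[i] + alpha * levy_step(dim, rng=rng) * (nests[i] - best)
            f = objective(candidate)
            j = rng.integers(n_nests)
            if f < fitness[j]:                 # replace a randomly chosen nest if better
                nests[j], fitness[j] = candidate, f
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        worst = np.argsort(fitness)[-int(pa * n_nests):]
        nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
        fitness[worst] = [objective(x) for x in nests[worst]]
        best = nests[np.argmin(fitness)].copy()
    return best, float(fitness.min())

print(cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=4))   # toy objective
```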

Relevance:

20.00%

Publisher:

Abstract:

The aim of this research is to examine the efficiency of different aggregation algorithms applied to the forecasts obtained from the individual neural network (NN) models in an ensemble. In this study an ensemble of 100 NN models with heterogeneous architectures is constructed. The outputs of the NN models are combined by three different aggregation algorithms: a simple average, a trimmed mean, and Bayesian model averaging. These methods are utilized with certain modifications and are applied to the forecasts obtained from all individual NN models. The output of the aggregation algorithms is analyzed and compared with the individual NN models used in the ensemble and with a naive approach. Thirty-minute interval electricity demand data from the Australian Energy Market Operator (AEMO) and the New York Independent System Operator (NYISO) website are used in the empirical analysis. It is observed that the aggregation algorithms perform better than many of the individual NN models. In comparison with the naive approach, the aggregation algorithms exhibit somewhat better forecasting performance.
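
The three aggregation rules in spirit: simple average, trimmed mean, and a weighted average whose weights come from validation errors, the latter being a crude stand-in for the Bayesian model averaging used in the study. The forecast matrix and error values below are synthetic, not AEMO or NYISO data.

```python
# Sketch: combining an ensemble of member forecasts with three aggregation rules.
import numpy as np
from scipy.stats import trim_mean

def simple_average(member_forecasts):
    # member_forecasts: array of shape (n_models, horizon)
    return member_forecasts.mean(axis=0)

def trimmed_mean(member_forecasts, proportion=0.1):
    # Drop the most extreme 10% of member forecasts at each horizon step.
    return trim_mean(member_forecasts, proportiontocut=proportion, axis=0)

def error_weighted_average(member_forecasts, validation_errors):
    # Smaller validation error -> larger weight (inverse-error weighting).
    weights = 1.0 / np.asarray(validation_errors)
    weights /= weights.sum()
    return weights @ member_forecasts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    forecasts = rng.normal(1000, 50, size=(100, 48))   # 100 NN models, 48 half-hours
    errors = rng.uniform(10, 100, size=100)            # hypothetical validation MSEs
    print(simple_average(forecasts)[:3], trimmed_mean(forecasts)[:3],
          error_weighted_average(forecasts, errors)[:3])
```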

Relevance:

20.00%

Publisher:

Abstract:

The bulk of existing work on the statistical forecasting of air quality is based on either neural networks or linear regressions, both of which are subject to important drawbacks. In particular, while neural networks are complicated and prone to in-sample overfitting, linear regressions are highly dependent on the specification of the regression function. The present paper shows how combining linear regression forecasts can be used to circumvent these problems. The usefulness of the proposed combination approach is verified using both Monte Carlo simulation and an extensive application to air quality in Bogota, one of the largest and most polluted cities in Latin America. © 2014 Elsevier Ltd.
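
A minimal forecast-combination sketch: several linear regressions differing only in their regressor sets are fitted, and their out-of-sample predictions are averaged. Equal weights stand in for the combination schemes examined in the paper, and the data below are synthetic, not the Bogota air-quality series.

```python
# Sketch: equal-weight combination of linear-regression forecasts with different regressors.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 4))                        # e.g. lagged pollution, weather covariates
y = X[:, 0] * 2 - X[:, 2] + rng.normal(scale=0.5, size=n)

train, test = slice(0, 250), slice(250, n)
subsets = [[0, 1], [0, 2], [2, 3], [0, 1, 2, 3]]   # candidate regression specifications
preds = [LinearRegression().fit(X[train][:, s], y[train]).predict(X[test][:, s])
         for s in subsets]
combined = np.mean(preds, axis=0)                  # equal-weight forecast combination
print(np.mean((combined - y[test]) ** 2))          # out-of-sample MSE of the combination
```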

Relevance:

20.00%

Publisher:

Abstract:

Road-killed animals are easy and inexpensive to survey, and may provide information about species distributions, abundances, and mortality rates. As with any sampling method, however, we need to explore the methodological biases in such data. First, how does an animal's behavior (e.g., use of the center vs. the periphery of the road) influence its vulnerability to vehicular traffic? Second, how rapidly do post-mortem processes (scavenging by other animals, destruction or displacement by subsequent vehicles) change the numbers and locations of roadkills? Our surveys of anurans on a highway in tropical Australia show that different anuran species are distributed in different ways across the width of the road, and that the locations of live versus dead animals sometimes differ within a species. Experimental trials show that location on the road affects the probability of being hit by a vehicle, with anurans in the middle of the road being hit 35% more often than anurans on the edges; thus, center-using species are more likely to be hit than edge-using taxa. The magnitude of post-mortem displacement and destruction by subsequent vehicles depended on anuran species and body size. The mean parallel displacement distance was 122.7 cm, and carcasses of thin-skinned species exhibited greater post-mortem destruction. Scavenging raptors removed 73% of carcasses, most within a few hours of sunrise. Removal rates were biased with respect to size and species. Overall, our studies suggest that investigators should carefully evaluate potential biases before using roadkill counts to estimate underlying animal abundances or mortality rates.