35 results for Intrusion detection

in Deakin Research Online - Australia


Relevance: 100.00%

Publisher:

Abstract:

A good intrusion detection system gives accurate and efficient classification results, an ability that is essential when building such a system. In this paper, we focus on using various training functions together with feature selection to achieve highly accurate results. The data used in our experiments is the NSL-KDD dataset; however, the training and testing time needed to build the model is very high. To address this, we propose feature selection based on information gain, which can detect several attack types with high accuracy and a low false rate. Moreover, we ran experiments to categorise each of the five classes (probe, denial of service (DoS), user to super-user (U2R), remote to local (R2L), and normal). Our proposed approach outperforms other state-of-the-art methods.
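The abstract gives no implementation details, so the following is only a minimal sketch of information-gain-style feature selection for NSL-KDD-like data, using scikit-learn's mutual information estimator as a stand-in for information gain; the label column name and the number of retained features are illustrative assumptions.

```python
# Illustrative sketch only: information-gain-style feature selection for an
# NSL-KDD-like table, ranking features by mutual information with the label.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

def select_features_by_information_gain(df: pd.DataFrame, label_col: str, k: int = 20):
    """Rank features by mutual information with the label and keep the top k.

    Assumes all feature columns are already numerically encoded.
    """
    features = df.drop(columns=[label_col])
    scores = mutual_info_classif(features.to_numpy(), df[label_col].to_numpy(), random_state=0)
    ranked = sorted(zip(features.columns, scores), key=lambda t: t[1], reverse=True)
    return [name for name, _ in ranked[:k]]

# Example (hypothetical column name):
# selected = select_features_by_information_gain(nsl_kdd_df, label_col="attack_class", k=20)
```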

Relevance: 100.00%

Publisher:

Abstract:

The intrusion detection system is one of the security defence tools for computer networks. In recent years, this research area has lacked direction and focus. In this paper, we present a survey of the recent progression of multiagent intrusion detection systems. We survey the existing types, techniques and architectures of intrusion detection systems in the literature. Finally, we outline the present research challenges and issues.

Relevance: 100.00%

Publisher:

Abstract:

Findings: After evaluating the new system, better results were obtained in terms of detection efficiency and the false alarm rate. This demonstrates the value of direct response actions in an intrusion detection system.

Relevance: 80.00%

Publisher:

Abstract:

Anomaly detection, as a kind of intrusion detection, is good at detecting unknown or new attacks, and it has attracted much attention in recent years. In this paper, a new hierarchical anomaly intrusion detection model is proposed that combines fuzzy c-means (FCM) clustering, optimised by a genetic algorithm, with a support vector machine (SVM). During detection, a membership function and a fuzzy interval are applied, extending the process from the previous hard classification to soft classification. A fuzzy error-correction sub-interval is then introduced: when the detection result for a data instance falls within this range, the instance is re-detected in order to improve the effectiveness of intrusion detection. Experimental results show that the proposed model can effectively detect the vast majority of network attack types, which provides a feasible solution to the false-alarm-rate and detection-rate problems of anomaly intrusion detection models.
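The paper's GA-optimised FCM and SVM combination is not spelled out in the abstract; the sketch below only illustrates the two-stage idea of soft classification followed by re-detection of uncertain instances. Per-class centroids stand in for the FCM centres, and the fuzzy error-correction band (0.4-0.6) is a made-up example value.

```python
# Illustrative sketch only: a two-stage soft-classification pipeline in the
# spirit of the abstract. Per-class centroids stand in for GA-optimised fuzzy
# c-means centres; the correction band values are not from the paper.
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, centers, m=2.0):
    """FCM-style membership degrees computed from distances to centres."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def hierarchical_detect(X_train, y_train, X_test, low=0.4, high=0.6):
    classes = np.unique(y_train)
    centers = np.vstack([X_train[y_train == c].mean(axis=0) for c in classes])
    svm = SVC().fit(X_train, y_train)                   # stage-2 re-detector
    u = fuzzy_memberships(X_test, centers)              # soft classification
    labels = classes[u.argmax(axis=1)]
    top = u.max(axis=1)
    uncertain = (top >= low) & (top <= high)            # fuzzy correction band
    if uncertain.any():
        labels[uncertain] = svm.predict(X_test[uncertain])  # re-detect uncertain samples
    return labels
```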

Relevance: 70.00%

Publisher:

Abstract:

In this paper, we present some practical experience in implementing an alert fusion mechanism from our project. After investigating most of the existing alert fusion systems, we found that the current body of work is either weighed down by insecure design or rarely deployed because of its complexity. As confirmed by our experimental analysis, unsuitable mechanisms can easily be submerged by an abundance of useless alerts, and even methods that achieve a high fusion rate and low false positives remain open to attack. To find a solution, we analysed series of alerts generated from well-known datasets as well as realistic alerts from the Australian Honey-Pot. One important finding is that an alert has a more than 85% chance of being fused with one of the following five alerts. Of particular importance is our design of a novel lightweight Cache-based Alert Fusion Scheme, called CAFS. CAFS can not only reduce the quantity of useless alerts generated by an IDS (intrusion detection system), but also enhance the accuracy of alerts, thereby greatly reducing the cost of fusion processing. We also present reasonable and practical specifications for a target-oriented fusion policy that provides a quality guarantee on alert fusion and, as a result, seamlessly supports the process of successive correlation. Our experimental results show that CAFS easily attains the desired level of survivable, inescapable alert fusion design. Furthermore, as a lightweight scheme, CAFS can easily be deployed and can handle a large volume of alert fusion, which improves the usability of system resources. To the best of our knowledge, our work is a novel exploration of these problems from a survivable, inescapable and deployable point of view.
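CAFS internals are not described in the abstract; the following is a minimal sketch of the general cache-based fusion idea, keeping a small window of recent alerts (motivated by the reported finding that most alerts fuse within the next five) and merging an incoming alert with a cached match. The alert fields and matching key are assumptions for illustration.

```python
# Illustrative sketch only: a small fixed-size cache that fuses an incoming
# alert with a recent matching one. The alert key (signature + endpoints) and
# the cache size are illustrative assumptions, not details from the paper.
from collections import OrderedDict

CACHE_SIZE = 5  # motivated by the reported "fused within the next 5 alerts" finding

class AlertFusionCache:
    def __init__(self, size=CACHE_SIZE):
        self.size = size
        self.cache = OrderedDict()   # key -> fused alert record

    @staticmethod
    def key(alert):
        return (alert["signature"], alert["src_ip"], alert["dst_ip"])

    def offer(self, alert):
        """Return the fused record if the alert matched a cached one, else None."""
        k = self.key(alert)
        if k in self.cache:
            fused = self.cache[k]
            fused["count"] += 1
            fused["last_seen"] = alert["timestamp"]
            self.cache.move_to_end(k)       # keep recently fused entries warm
            return fused
        self.cache[k] = {**alert, "count": 1, "last_seen": alert["timestamp"]}
        if len(self.cache) > self.size:
            self.cache.popitem(last=False)  # evict the oldest entry
        return None
```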

Relevance: 70.00%

Publisher:

Abstract:

In this paper, we present some practical experience in implementing an alert fusion mechanism from our project. After investigating most of the existing alert fusion systems, we found that the current body of work is either weighed down by insecure design or rarely deployed because of its complexity. As confirmed by our experimental analysis, unsuitable mechanisms can easily be submerged by an abundance of useless alerts, and even methods that achieve a high fusion rate and low false positives remain open to attack. To find a solution, we analysed series of alerts generated from well-known datasets as well as realistic alerts from the Australian Honey-Pot. One important finding is that an alert has a more than 85% chance of being fused with one of the following five alerts. Of particular importance is our design of a novel lightweight Cache-based Alert Fusion Scheme, called CAFS. CAFS can not only reduce the quantity of useless alerts generated by an intrusion detection system, but also enhance the accuracy of alerts, thereby greatly reducing the cost of fusion processing. We also present reasonable and practical specifications for a target-oriented fusion policy that provides a quality guarantee on alert fusion and, as a result, seamlessly supports the process of successive correlation. Our experiments compared CAFS with traditional centralised fusion; the results showed that CAFS easily attained the desired level of simple, counter-escapable alert fusion design. Furthermore, as a lightweight scheme, CAFS can easily be deployed and can handle a large volume of alert fusion, which improves the usability of system resources. To the best of our knowledge, our work is a practical exploration in addressing these problems from the academic point of view. Copyright © 2011 John Wiley & Sons, Ltd.

Relevance: 70.00%

Publisher:

Abstract:

Zero-day or unknown malware is created using code obfuscation techniques that can modify the parent code to produce offspring copies with the same functionality but different signatures. Current techniques reported in the literature lack the capability to detect zero-day malware with the required accuracy and efficiency. In this paper, we propose and evaluate a novel method that employs several data mining techniques to detect and classify zero-day malware with high levels of accuracy and efficiency, based on the frequency of Windows API calls. This paper describes the methodology employed to collect the large data sets used to train the classifiers, and analyses the performance results of the various data mining algorithms adopted for the study using a fully automated tool developed in this research to conduct the experimental investigations and evaluation. Through the performance results from our experimental analysis, we evaluate and discuss the advantages of one data mining algorithm over another for accurately detecting zero-day malware. The data mining framework employed in this research learns by analysing the behaviour of existing malicious and benign code in large datasets. We employed robust classifiers, namely the Naïve Bayes (NB) algorithm, the k-Nearest Neighbour (kNN) algorithm, the Sequential Minimal Optimization (SMO) algorithm with four different kernels (normalized polynomial, polynomial, Puk, and radial basis function (RBF)), the backpropagation neural network algorithm, and the J48 decision tree, and evaluated their performance. Overall, the automated data mining system implemented for this study achieved a high true positive (TP) rate of more than 98.5% and a low false positive (FP) rate of less than 0.025, which has not been achieved in the literature so far. This is much higher than the required commercial acceptance level, indicating that our novel technique is a major leap forward in detecting zero-day malware. This paper also offers future directions for researchers exploring the different aspects of obfuscation that are affecting the IT world today.
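As a rough sketch of the feature representation the abstract describes, the snippet below turns a Windows API call trace into a frequency vector and trains one of the named classifiers (Naïve Bayes). The API vocabulary and data format are illustrative assumptions, not the study's actual feature set.

```python
# Illustrative sketch only: Windows API call traces -> frequency feature
# vectors -> Naive Bayes classifier (one of the learners named in the abstract).
from collections import Counter
import numpy as np
from sklearn.naive_bayes import MultinomialNB

API_VOCAB = ["CreateFileW", "WriteProcessMemory", "RegSetValueExW",
             "VirtualAlloc", "InternetOpenA"]  # illustrative subset

def frequency_vector(api_calls):
    """Count how often each known API call appears in one sample's trace."""
    counts = Counter(api_calls)
    return np.array([counts[name] for name in API_VOCAB], dtype=float)

def train_detector(traces, labels):
    """traces: list of API-call lists; labels: 1 = malicious, 0 = benign."""
    X = np.vstack([frequency_vector(t) for t in traces])
    return MultinomialNB().fit(X, np.asarray(labels))

# Example:
# clf = train_detector(training_traces, training_labels)
# is_malicious = clf.predict(frequency_vector(new_trace).reshape(1, -1))
```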

Relevance: 70.00%

Publisher:

Abstract:

The continuously rising number of Internet attacks poses severe challenges to developing an effective Intrusion Detection System (IDS) that can detect known and unknown malicious attacks. In order to address the problem of detecting known and unknown attacks and identifying the group an attack belongs to, the authors provide new multi-stage rules for detecting anomalies. The authors used RIPPER for rule generation, which is able to create rule sets quickly and can determine the attack types with a smaller number of rules. These rules are efficient to apply in both Signature Intrusion Detection Systems (SIDS) and Anomaly Intrusion Detection Systems (AIDS).
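The generated rules themselves are not shown in the abstract; the sketch below only illustrates the general shape of applying conjunctive if-then rules, of the kind a rule learner such as RIPPER produces, to connection records. The attributes, thresholds and attack labels are made up for illustration.

```python
# Illustrative sketch only: applying conjunctive if-then rules to connection
# records. Attribute names, thresholds and labels are illustrative.
RULES = [
    # (conditions, predicted class) -- each condition is (attribute, op, value)
    ([("protocol", "==", "icmp"), ("count", ">=", 200)], "DoS"),
    ([("num_failed_logins", ">=", 3), ("service", "==", "telnet")], "R2L"),
]

OPS = {"==": lambda a, b: a == b, ">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}

def classify(record, rules=RULES, default="normal"):
    """Return the class of the first rule whose conditions all hold."""
    for conditions, label in rules:
        if all(OPS[op](record[attr], value) for attr, op, value in conditions):
            return label
    return default

# Example:
# classify({"protocol": "icmp", "count": 512, "num_failed_logins": 0, "service": "ecr_i"})
# -> "DoS"
```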

Relevance: 70.00%

Publisher:

Abstract:

A hierarchical intrusion detection model is proposed to detect both anomaly and misuse attacks. In order to further speed up training and testing, a PCA-based feature extraction algorithm is used to reduce the dimensionality of the data, and a PCA-based algorithm is used to filter normal data out at the upper level. The experimental results show that PCA can reduce noise in the original data set and that the PCA-based algorithm can reach the desired performance.
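How the upper level filters normal data out is not specified in the abstract; the sketch below uses PCA reconstruction error as one plausible criterion, which is an assumption made here rather than necessarily the paper's method. The component count and percentile threshold are illustrative.

```python
# Illustrative sketch only: PCA for dimensionality reduction plus a simple
# upper-level filter that treats low-reconstruction-error records as normal.
# Using reconstruction error as the criterion is an assumption for illustration.
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_filter(X_normal, n_components=10):
    pca = PCA(n_components=n_components).fit(X_normal)
    recon = pca.inverse_transform(pca.transform(X_normal))
    errors = np.linalg.norm(X_normal - recon, axis=1)
    threshold = np.percentile(errors, 95)   # illustrative cut-off
    return pca, threshold

def filter_suspicious(pca, threshold, X):
    """Return only the records whose reconstruction error exceeds the threshold."""
    recon = pca.inverse_transform(pca.transform(X))
    errors = np.linalg.norm(X - recon, axis=1)
    return X[errors > threshold]            # pass these on to the lower level
```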

Relevance: 70.00%

Publisher:

Abstract:

Anomaly detection techniques find anomalous activity in a network by comparing traffic activity against a "normal" baseline. Although this has several advantages, including the detection of "zero-day" attacks, the question of how to define a system's deviation from its "normal" behaviour is important for reducing the number of false positives. This study proposes a novel multi-agent network-based framework, the Statistical model for Correlation and Detection (SCoDe), an anomaly detection framework that looks for time-correlated anomalies by leveraging the statistical properties of a large network and monitoring the rate of event occurrence based on intensity. SCoDe is an instantaneous learning-based anomaly detector, shifting away from the conventional technique of having a training phase prior to detection. It acquires its training using the improved extension of the Exponentially Weighted Moving Average (EWMA) proposed in this study. SCoDe does not require prior knowledge of the network traffic or a reference window chosen by the network administrator as normal; instead, it builds on the statistical properties of different attributes of the network traffic and correlates undesirable deviations in order to identify abnormal patterns. The approach is generic, since it can easily be modified to fit particular types of problems with a predefined attribute, and it is highly robust because of the proposed statistical approach. The framework targets attacks that increase the number of activities on the network server, such as Distributed Denial of Service (DDoS), flood and flash-crowd events. This paper provides a mathematical foundation for SCoDe and describes the specific implementation and testing of the approach based on a network log file generated from the cyber range simulation experiment of the project's industrial partner.
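The improved EWMA extension itself is not given in the abstract; the following shows only the textbook EWMA idea of monitoring a per-interval event rate and flagging large deviations from the smoothed baseline. The smoothing factor and threshold multiplier are illustrative.

```python
# Illustrative sketch only: a standard EWMA-based rate monitor flagging event
# counts that deviate too far from the smoothed baseline. This is the textbook
# idea, not the improved extension proposed in the paper.
class EwmaRateMonitor:
    def __init__(self, alpha=0.1, k=3.0):
        self.alpha = alpha        # smoothing factor
        self.k = k                # deviation threshold, in multiples of the smoothed deviation
        self.mean = None
        self.dev = 0.0

    def observe(self, count):
        """Feed one per-interval event count; return True if it looks anomalous."""
        if self.mean is None:     # first observation initialises the baseline
            self.mean = float(count)
            return False
        anomalous = abs(count - self.mean) > self.k * max(self.dev, 1.0)
        # update smoothed mean and smoothed absolute deviation
        self.dev = (1 - self.alpha) * self.dev + self.alpha * abs(count - self.mean)
        self.mean = (1 - self.alpha) * self.mean + self.alpha * count
        return anomalous

# Example: feed per-second request counts from a server log
# monitor = EwmaRateMonitor()
# alerts = [t for t, c in enumerate(counts) if monitor.observe(c)]
```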

Relevance: 70.00%

Publisher:

Abstract:

Anomaly detection in resource-constrained wireless networks is an important challenge for tasks such as intrusion detection, quality assurance and event monitoring. The challenge is to detect these interesting events or anomalies in a timely manner while minimising energy consumption in the network. We propose a distributed anomaly detection architecture that uses multiple hyperellipsoidal clusters to model the data at each sensor node and identifies global and local anomalies in the network. In particular, a novel anomaly scoring method is proposed that provides a score for each hyperellipsoidal model based on how remote the ellipsoid is relative to its neighbours. We demonstrate, using several synthetic and real datasets, that our proposed scheme achieves higher detection performance with a significant reduction in communication overhead compared to centralised and existing schemes. © 2014 Elsevier Ltd.
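The paper's exact scoring formula is not reproduced in the abstract; the sketch below is one plausible reading, summarising each node's data as a mean and covariance ("hyperellipsoid") and scoring an ellipsoid by the average Mahalanobis distance of its centre from its neighbours' ellipsoids. Treat it as an assumption-laden illustration, not the published method.

```python
# Illustrative sketch only: per-node hyperellipsoid summaries (mean, covariance)
# and a remoteness score relative to neighbour ellipsoids.
import numpy as np

def ellipsoid_summary(X):
    """Per-node summary: centre and (regularised) covariance of local data."""
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mean, cov

def remoteness_score(ellipsoid, neighbours):
    """Average Mahalanobis distance of this centre to each neighbour ellipsoid."""
    centre, _ = ellipsoid
    if not neighbours:
        return 0.0
    dists = []
    for n_mean, n_cov in neighbours:
        diff = centre - n_mean
        dists.append(float(np.sqrt(diff @ np.linalg.inv(n_cov) @ diff)))
    return sum(dists) / len(dists)

# A node whose remoteness score greatly exceeds those of its neighbours can be
# reported as holding globally anomalous data.
```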

Relevance: 60.00%

Publisher:

Abstract:

Recently, high-speed networks have been utilised by attackers as Distributed Denial of Service (DDoS) attack infrastructure, and services on high-speed networks have been attacked by successive waves of DDoS attacks. How to sensitively and accurately detect attack traffic and quickly filter out attack packets are still the major challenges in DDoS defence, and unfortunately most current defence approaches cannot efficiently fulfil these tasks. Our approach is to find network anomalies using a neural network and to classify DDoS packets with a Bloom filter-based classifier (BFC), a set of space-efficient data structures and algorithms for packet classification. The evaluation results show that the low complexity, high classification speed and accuracy, and low storage requirements of this classifier make it suitable not only for DDoS filtering in high-speed networks, but also for other applications such as string matching for intrusion detection systems and IP lookup for programmable routers.
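The BFC design is not detailed in the abstract; the following shows only a generic Bloom filter used to remember flagged flow identifiers so that later packets can be checked cheaply. Sizes, hashing and the flow-key format are illustrative choices.

```python
# Illustrative sketch only: a generic Bloom filter for remembering flagged flow
# identifiers. This shows the basic data structure, not the paper's BFC design.
import hashlib

class BloomFilter:
    def __init__(self, n_bits=1 << 20, n_hashes=4):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, key: bytes):
        for i in range(self.n_hashes):
            digest = hashlib.sha256(bytes([i]) + key).digest()
            yield int.from_bytes(digest[:8], "big") % self.n_bits

    def add(self, key: bytes):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def probably_contains(self, key: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

# Example: the anomaly detector flags a flow; later packets are checked cheaply.
# flagged = BloomFilter()
# flagged.add(b"198.51.100.7->203.0.113.5:80")
# drop = flagged.probably_contains(b"198.51.100.7->203.0.113.5:80")
```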

Relevance: 60.00%

Publisher:

Abstract:

Over the last couple of months, a large number of distributed denial of service (DDoS) attacks have occurred across the world, especially targeting those who provide Web services. IP traceback, a countermeasure against DDoS, is the ability to trace IP packets back to the true source(s) of an attack. In this paper, an IP traceback scheme using a machine learning technique called the intelligent decision prototype (IDP) is proposed. IDP can be used with both probabilistic packet marking (PPM) and deterministic packet marking (DPM) traceback schemes to identify DDoS attacks. This greatly reduces the number of packets that are marked and, in effect, makes the system more efficient and effective at tracing the source of an attack compared with other methods. IDP can be applied to many security systems such as data mining, forensic analysis, intrusion detection systems (IDS) and DDoS defence systems.
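The IDP itself is not described in the abstract; the sketch below shows only the classic probabilistic packet marking (PPM) step that such traceback schemes build on, where each router overwrites the mark field with some probability. The marking probability and packet representation are illustrative.

```python
# Illustrative sketch only: the classic PPM router-side marking step. Each
# router overwrites the mark with probability p; the victim later reconstructs
# the path from accumulated marks. The paper's IDP logic is not shown.
import random

MARKING_PROBABILITY = 0.04  # commonly cited value in the PPM literature

def mark_packet(packet: dict, router_ip: str, p: float = MARKING_PROBABILITY) -> dict:
    """Router-side marking: overwrite the mark with probability p, else age it."""
    if random.random() < p:
        packet["mark"] = router_ip
        packet["distance"] = 0
    elif packet.get("mark") is not None:
        packet["distance"] = packet.get("distance", 0) + 1
    return packet

# At the victim, packets sharing the same mark and distance are aggregated to
# rebuild the upstream router path that the attack traffic traversed.
```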

Relevance: 60.00%

Publisher:

Abstract:

This paper proposes to address the need for more innovation in organisational information security by adding a security requirements engineering focus. Based on the belief that any heavyweight security requirements process in organisational security is doomed to fail, we developed a security requirements approach with three dimensions. The use of a simple security requirements process in the first dimension is augmented by an agile security approach. However, introducing this second dimension of agile security provides support for, but does not necessarily stimulate, innovation. A third dimension is therefore needed to ensure there is a proper focus in the organisation's efforts to identify potential new innovations in its security. To create this focus, three common shortcomings in organisational information security have been identified. The resulting security approach that addresses these shortcomings is called Ubiquitous Information Security. This paper demonstrates the potential of this new approach by briefly discussing its possible application in two areas: Ubiquitous Identity Management and Ubiquitous Wireless Security.