875 results for Intrusion Detection Systems
Abstract:
A growing concern for organisations is how they should deal with increasing amounts of collected data. With fierce competition and smaller margins, organisations that are able to fully realise the potential in the data they collect can gain an advantage over their competitors. It is almost impossible to avoid imprecision when processing large amounts of data. Still, many of the available information systems are not capable of handling imprecise data, even though doing so can offer various advantages. Expert knowledge stored as linguistic expressions is a good example of imprecise but valuable data, i.e. data that is hard to pinpoint to a definitive value. There is an obvious concern among organisations about how this problem should be handled; finding new methods for processing and storing imprecise data is therefore a key issue. Additionally, it is equally important to show that tacit knowledge and imprecise data can be used with success, which encourages organisations to analyse their imprecise data. The objective of the research conducted was therefore to explore how fuzzy ontologies could facilitate the exploitation and mobilisation of tacit knowledge and imprecise data in organisational and operational decision making processes. The thesis introduces both practical and theoretical advances on how fuzzy logic, ontologies (fuzzy ontologies) and OWA operators can be utilised for different decision making problems. It is demonstrated how a fuzzy ontology can model tacit knowledge collected from wine connoisseurs. The approach can be generalised and applied to other practically important problems, such as intrusion detection. Additionally, a fuzzy ontology is applied in a novel consensus model for group decision making. By combining the fuzzy ontology with Semantic Web affiliated techniques, novel applications have been designed. These applications show how the mobilisation of knowledge can successfully utilise imprecise data as well.
An important part of decision making processes is undeniably aggregation, which in combination with a fuzzy ontology provides a promising basis for demonstrating the benefits that can be gained from handling imprecise data. The new aggregation operators defined in the thesis often provide new possibilities for handling imprecision and expert opinions. This is demonstrated through both theoretical examples and practical implementations. This thesis shows the benefits of utilising all the available data one possesses, including imprecise data. By combining the concept of fuzzy ontology with the Semantic Web movement, it aspires to show the corporate world and industry the benefits of embracing fuzzy ontologies and imprecision.
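The OWA operators mentioned in this abstract follow a standard definition: the inputs are sorted in descending order before the weight vector is applied, so weights attach to ranks rather than to particular inputs. As an illustrative sketch only (the thesis's specific operators are not reproduced here, and the function name is ours), a basic OWA aggregation looks like:

```python
def owa(values, weights):
    """Ordered Weighted Averaging: sort the inputs in descending order,
    then take the weighted sum with the given weight vector.
    The weights are expected to be non-negative and sum to 1."""
    assert len(values) == len(weights)
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# With weights (0.5, 0.3, 0.2) the largest input receives the most weight:
# owa([0.2, 0.9, 0.6], [0.5, 0.3, 0.2]) = 0.5*0.9 + 0.3*0.6 + 0.2*0.2 = 0.67
```

Because the weights apply to ranks, the same operator can express the maximum (weights (1, 0, ..., 0)), the minimum, or the plain average, which is what makes OWA attractive for aggregating expert opinions.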
Abstract:
Modern computer systems are plagued with stability and security problems: applications lose data, web servers are hacked, and systems crash under heavy load. Many of these problems or anomalies arise from rare program behavior caused by attacks or errors. A substantial percentage of web-based attacks are due to buffer overflows. Many methods have been devised to detect and prevent anomalous situations that arise from buffer overflows. The current state of the art in anomaly detection systems is relatively primitive and depends mainly on static code checking to take care of buffer overflow attacks. For protection, Stack Guards and Heap Guards are also in wide use. This dissertation proposes an anomaly detection system based on the frequencies of system calls in the system call trace. System call traces represented as frequency sequences are profiled using sequence sets. A sequence set is identified by the starting sequence and the frequencies of specific system calls. The deviation of the current input sequence from the corresponding normal profile in the frequency pattern of system calls is computed and expressed as an anomaly score. A simple Bayesian model is used for accurate detection. Experimental results are reported which show that the frequency of system calls, represented using sequence sets, captures the normal behavior of programs under normal conditions of usage. This captured behavior allows the system to detect anomalies with a low rate of false positives. Data are presented which show that a Bayesian network on frequency variations responds effectively to induced buffer overflows. It can also help administrators to detect deviations in program flow introduced by errors.
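The frequency-based profiling described in this abstract can be illustrated with a minimal sketch. The windowing scheme, the L1 deviation used as the anomaly score, and all names below are our illustrative assumptions, not the dissertation's actual sequence-set model:

```python
from collections import Counter

def frequency_profile(trace, window):
    """Cut a system-call trace into fixed-size windows and record the
    per-window call frequencies; the resulting list approximates a
    'normal' profile when built from traces of benign runs."""
    return [Counter(trace[i:i + window])
            for i in range(0, len(trace) - window + 1, window)]

def anomaly_score(window_counts, normal_profiles):
    """Score a window by its smallest total frequency deviation (L1
    distance) from any window in the normal profile; 0 means the window
    matches a known-normal frequency pattern exactly."""
    def deviation(a, b):
        return sum(abs(a[c] - b[c]) for c in set(a) | set(b))
    return min(deviation(window_counts, p) for p in normal_profiles)

# Profile a benign trace, then score a suspicious window:
normal = frequency_profile(["open", "read", "read", "write", "close"] * 4, 5)
print(anomaly_score(Counter(["open", "read", "read", "write", "close"]), normal))  # 0
print(anomaly_score(Counter(["execve"] * 5), normal))  # 10
```

A threshold on this score (or, as in the dissertation, a Bayesian model over the frequency variations) then separates normal patterns from those induced by, for example, a buffer overflow payload spawning a shell.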
Abstract:
In this paper we present a component-based person detection system that is capable of detecting frontal, rear and near-side views of people, as well as partially occluded persons, in cluttered scenes. The framework described here for people is easily applied to other objects as well. The motivation for developing a component-based approach is twofold: first, to enhance the performance of person detection systems on frontal and rear views of people, and second, to develop a framework that directly addresses the problem of detecting people who are partially occluded or whose body parts blend in with the background. The data classification is handled by several support vector machine classifiers arranged in two layers. This architecture is known as Adaptive Combination of Classifiers (ACC). The system performs very well and is capable of detecting people even when not all components of a person are found. The performance of the system is significantly better than that of a full-body person detector designed along similar lines. This suggests that the improved performance is due to the component-based approach and the ACC data classification structure.
Abstract:
An approach to the automatic generation of efficient Field Programmable Gate Array (FPGA) circuits for Regular Expression (RegEx) pattern matching problems is presented. Using the proposed novel design strategy, highly area- and time-efficient circuits can be automatically generated for arbitrary sets of regular expressions. This makes the technique suitable for applications that must handle very large sets of patterns at high speed, such as the network security and intrusion detection application domains. We have combined several existing techniques to optimise our solution for such domains and propose how the whole process of dynamically generating FPGA circuits for RegEx pattern matching can be automated efficiently.
Abstract:
The major technical objectives of the RC-NSPES are to provide a framework for the concurrent operation of reactive and proactive security functions, delivering efficient and optimised intrusion detection schemes as well as enhanced and highly correlated rule sets for more effective alert management and root-cause analysis. The design and implementation of the RC-NSPES solution includes a number of innovative features, both in the real-time programmable embedded hardware (FPGA) deployment and in the integrated management station. These have been devised to deliver enhanced detection of attacks and contextualised alerts against threats that can arise from both network layer and application layer protocols. The resulting architecture represents an efficient and effective framework for the future deployment of network security systems.
Abstract:
Malware has become a major threat in recent years due to the ease of spreading through the Internet. Malware detection has become difficult with the use of compression, polymorphic methods and techniques to detect and disable security software. These and other obfuscation techniques pose a problem for detection and classification schemes that analyze malware behavior. In this paper we propose a distributed architecture to improve malware collection using different honeypot technologies to increase the variety of malware collected. We also present a daemon tool developed to capture malware distributed through spam, and a pre-classification technique that uses antivirus technology to separate malware into generic classes. © 2009 SPIE.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
The experiment presented in this work for measuring the magnetic moment of the proton is based on measuring the ratio of the cyclotron frequency to the Larmor frequency of a single proton stored in a cryogenic double Penning trap. In this work, two of the three motional frequencies of the proton were detected simultaneously, non-destructively and in thermal equilibrium for the first time, using correspondingly highly sensitive detection systems, which halved the measurement time needed to determine the cyclotron frequency. Furthermore, individual spin transitions of a single proton were detected for the first time in this work, which enables the determination of the Larmor frequency. Using the continuous Stern-Gerlach effect, the magnetic moment is coupled to the axial motional mode of the proton by means of a so-called magnetic bottle. A change of the spin state therefore causes a jump in the axial motional frequency, which can be measured non-destructively. Detection of the spin state is complicated by the fact that the axial frequency depends not only on the spin moment but also on the orbital moment. The great experimental challenge therefore lies in preventing energy fluctuations in the radial motional modes in order to guarantee the detectability of spin transitions. Through systematic studies of the stability of the axial frequency and a complete overhaul of the experimental setup, this goal was achieved. For the first time, the spin state of a single proton can be determined with high reliability. This work thus represents a decisive step on the way to a high-precision measurement of the magnetic moment of the proton.
Abstract:
The presented work proposes a new approach to anomaly detection. This approach is based on changes in a population of evolving agents under stress. If conditions are appropriate, changes in the population (modeled by bioindicators) are representative of the alterations to the environment. This approach, based on an ecological view, functionally improves traditional approaches to anomaly detection. To verify this assertion, experiments based on Network Intrusion Detection Systems are presented. The results are compared with the behaviour of other bio-inspired approaches and machine learning techniques.
Abstract:
The employment of nonlinear analysis techniques in automatic voice pathology detection systems has gained popularity due to the ability of such techniques to deal with the underlying nonlinear phenomena. In this respect, characterization using nonlinear analysis typically employs the classical Correlation Dimension and the largest Lyapunov Exponent, as well as some regularity quantifiers computing the system predictability. Mostly, regularity features depend highly on the correct choice of some parameters. One of those, the delay time τ, is usually fixed to 1. Nonetheless, it has been stated that a unity τ cannot avoid linear correlation of the time series and hence may not correctly capture system nonlinearities. Therefore, the present work studies the influence of the τ parameter on the estimation of regularity features. Three τ estimates are considered: the baseline value 1; a τ based on the Average Automutual Information criterion; and a τ chosen from the embedding window. Testing results obtained for pathological voice suggest that improved accuracy might be obtained by using a τ value different from 1, as it accounts for the underlying nonlinearities of the voice signal.
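The Average Automutual Information criterion mentioned above is commonly operationalised as the first local minimum of the AMI curve over candidate delays. A rough sketch follows, assuming a histogram-based mutual information estimator; the function names and bin count are our choices, not the paper's:

```python
import numpy as np

def average_mutual_information(x, tau, bins=16):
    """Mutual information (in nats) between the series and its
    tau-delayed copy, estimated from a 2-D histogram."""
    joint, _, _ = np.histogram2d(x[:-tau], x[tau:], bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # skip zero cells, which contribute nothing to the sum
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def delay_by_ami(x, max_tau=50, bins=16):
    """Return the first local minimum of AMI(tau), falling back to the
    global minimum if the curve never turns upward."""
    ami = [average_mutual_information(x, t, bins) for t in range(1, max_tau + 1)]
    for t in range(1, len(ami) - 1):
        if ami[t] < ami[t - 1] and ami[t] <= ami[t + 1]:
            return t + 1  # candidate delays start at tau = 1
    return int(np.argmin(ami)) + 1
```

For a sampled sine wave this tends to pick a delay near a quarter of the period, where successive samples stop being linearly correlated, which is exactly the decorrelation a unity τ cannot provide.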
Abstract:
The aim of automatic pathological voice detection systems is to serve as tools for medical specialists, enabling a more objective, less invasive and improved diagnosis of diseases. In this respect, the gold standard for such systems includes the use of an optimized representation of the spectral envelope for characterization, based on cepstral coefficients either from the mel-scaled Fourier spectral envelope (Mel-Frequency Cepstral Coefficients) or from an all-pole estimation (Linear Prediction Coding Cepstral Coefficients), together with Gaussian Mixture Models for posterior classification. However, recently proposed GMM-based classifiers, as well as nuisance mitigation techniques such as those employed in speaker recognition, have not been widely considered in pathology detection tasks. The present work aims at testing whether or not the employment of such speaker recognition tools might help improve performance in pathology detection systems, specifically in the automatic detection of Obstructive Sleep Apnea. The testing procedure employs an Obstructive Sleep Apnea database in conjunction with GMM-based classifiers in search of better performance. The results show that improved performance might be obtained by using such an approach.
Abstract:
The growing need for fast sampling of explosives in high-throughput areas has increased the demand for improved technology for the trace detection of illicit compounds. Detection of the volatiles associated with the presence of illicit compounds offers a different approach for sensitive trace detection of these compounds without increasing the false positive alarm rate. This study evaluated the performance of non-contact sampling and detection systems using statistical analysis, through the construction of Receiver Operating Characteristic (ROC) curves, in real-world scenarios for the detection of volatiles in the headspace of smokeless powder, used as the model system for generalizing explosives detection. A novel sorbent-coated disk, coined planar solid phase microextraction (PSPME), was previously used for rapid, non-contact sampling of the headspace of containers. The limits of detection for PSPME coupled to IMS detection were determined to be 0.5-24 ng for vapor sampling of volatile chemical compounds associated with illicit compounds, and the substrate demonstrated an extraction efficiency three times greater than other commercially available substrates, retaining >50% of the analyte after 30 minutes of sampling an analyte spike, in comparison to a non-detect for the unmodified filters. Both static and dynamic PSPME sampling were used, coupled with two ion mobility spectrometer (IMS) detection systems, in which 10-500 mg quantities of smokeless powders were detected within 5-10 minutes of static sampling and 1 minute of dynamic sampling in 1-45 L closed systems, resulting in faster sampling and analysis times in comparison to conventional solid phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS) analysis. Similar real-world scenarios were sampled in low- and high-clutter environments with zero false positive rates.
Excellent PSPME-IMS detection of the volatile analytes was observed in the ROC curves, with areas under the curve (AUC) of 0.85-1.0 and 0.81-1.0 for portable and bench-top IMS systems, respectively. ROC curves were also constructed for SPME-GC-MS, resulting in AUCs of 0.95-1.0, comparable with PSPME-IMS detection. The PSPME-IMS technique yields fewer false positives for non-contact vapor sampling, cutting costs and providing the effective sampling and detection needed in high-throughput scenarios, with performance similar to well-established techniques and the added advantage of fast detection in the field.
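AUC values like those quoted above can be computed directly from raw detector scores, without plotting, via the rank-sum identity: AUC equals the probability that a randomly chosen positive (analyte-present) sample scores higher than a randomly chosen negative (blank) sample. A minimal sketch with illustrative scores:

```python
def roc_auc(positive_scores, negative_scores):
    """AUC via the Mann-Whitney rank-sum identity: count pairwise wins
    of positives over negatives, crediting ties with half a win."""
    wins = 0.0
    for p in positive_scores:
        for n in negative_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positive_scores) * len(negative_scores))

# Perfect separation gives 1.0; one negative outscoring a positive lowers it:
print(roc_auc([0.9, 0.8, 0.7], [0.1, 0.2, 0.3]))   # 1.0
print(roc_auc([0.9, 0.8, 0.7], [0.1, 0.2, 0.75]))  # 0.888...
```

An AUC of 0.5 corresponds to a detector no better than chance, which is why values of 0.85-1.0 indicate strong discrimination between analyte and blank samples.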
Abstract:
A sudden hydrocarbon influx from the formation into the wellbore poses a serious risk to the safety of the well. This sudden influx is termed a kick, which, if not controlled, may lead to a blowout. Therefore, early detection of a kick is crucial to minimize the possibility of a blowout. There is a high probability of delay in kick detection, apart from other issues, when using a kick detection system that is based exclusively on surface monitoring. Down-hole monitoring techniques have the potential to detect a kick at its early stage. Down-hole monitoring could be particularly beneficial when the influx occurs as a result of a lost circulation scenario. In a lost circulation scenario, when the down-hole pressure becomes lower than the formation pore pressure, the formation fluid may start to enter the wellbore. The lost volume of the drilling fluid is compensated by the formation fluid flowing into the wellbore, making it difficult to identify the kick from pit (mud tank) volume observations at the surface. This experimental study investigates the occurrence of a kick based on relative changes in the mass flow rate, pressure, density, and conductivity of the fluid down-hole. Moreover, the parameters that are most sensitive to formation fluid are identified, and a methodology to detect a kick without false alarms is reported. A pressure transmitter, a Coriolis flow and density meter, and a conductivity sensor are employed to observe deteriorating well conditions down-hole. These observations are used to assess the occurrence of a kick and the associated blowout risk. Monitoring multiple down-hole parameters has the potential to improve the accuracy of interpretation related to kick occurrence, reduce the number of false alarms, and provide a broad picture of down-hole conditions. Down-hole monitoring techniques also have the potential to reduce the kick detection period.
A down-hole assembly of the laboratory-scale drilling rig model and a kick injection setup were designed, measuring instruments were acquired, a frame was fabricated, and the experimental set-up was assembled and tested. This set-up has the necessary features to evaluate kick events while implementing down-hole monitoring techniques. Various kick events are simulated on the drilling rig model. During the first set of experiments, compressed air (which represents the formation fluid) is injected at a constant pressure margin. In the second set of experiments, the compressed air is injected at another pressure margin. The experiments are also repeated with another pump (flow) rate. This thesis consists of three main parts. The first part gives the general introduction, motivation and outline of the thesis, and a brief description of influx: its causes, various leading and lagging indicators, and a description of the several kick detection systems in practice in the industry. The second part describes the design and construction of the laboratory-scale down-hole assembly of the drilling rig and the kick injection setup, which is used to implement the proposed methodology for early kick detection. The third part discusses the experimental work, describes the methodology for early kick detection, and presents experimental results that show how different influx events affect the mass flow rate, pressure, conductivity, and density of the fluid down-hole, together with a discussion of the results. The last chapter contains a summary of the study and future research directions.
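The multi-parameter interpretation described in this abstract, where an alarm is raised only when several down-hole channels deviate together, can be sketched as a simple voting rule. The channel names, thresholds and voting logic below are our illustrative assumptions, not the thesis's actual methodology:

```python
def relative_change(current, baseline):
    """Fractional deviation of a reading from its baseline value."""
    return abs(current - baseline) / abs(baseline)

def kick_alarm(readings, baselines, thresholds, min_votes=2):
    """Flag a possible kick only when at least `min_votes` channels
    (e.g. mass flow rate, pressure, density, conductivity) deviate from
    baseline by more than their channel-specific relative threshold.
    Requiring agreement between channels suppresses single-sensor
    false alarms."""
    votes = [name for name in readings
             if relative_change(readings[name], baselines[name]) > thresholds[name]]
    return len(votes) >= min_votes, votes

baselines = {"flow": 10.0, "pressure": 200.0, "density": 1.2, "conductivity": 5.0}
thresholds = {name: 0.05 for name in baselines}  # 5% relative change per channel

# Gas influx: flow rises and fluid density drops together -> alarm.
print(kick_alarm({"flow": 11.0, "pressure": 201.0, "density": 1.1,
                  "conductivity": 5.0}, baselines, thresholds))
# A lone pressure excursion does not trip the alarm.
print(kick_alarm({"flow": 10.1, "pressure": 212.0, "density": 1.2,
                  "conductivity": 5.0}, baselines, thresholds))
```

In practice the thresholds would be tuned per sensor from the baseline noise observed during normal circulation, which is essentially what the experimental sensitivity study above sets out to determine.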