875 results for Intrusion Detection Systems


Relevance: 90.00%

Abstract:

To maintain the pace of development set by Moore's law, production processes in semiconductor manufacturing are becoming more and more complex. The development of efficient and interpretable anomaly detection systems is fundamental to keeping production costs low. As the dimension of process monitoring data can become extremely high, anomaly detection systems are affected by the curse of dimensionality, so dimensionality reduction plays an important role. Classical dimensionality reduction approaches, such as Principal Component Analysis, generally involve transformations that seek to maximize the explained variance. In datasets with several clusters of correlated variables, the contributions of isolated variables to the explained variance may be insignificant, with the result that they may not be included in the reduced data representation. It is then impossible to detect an anomaly that is reflected only in such isolated variables. In this paper we present a new dimensionality reduction technique that takes account of such isolated variables and demonstrate how it can be used to build an interpretable and robust anomaly detection system for Optical Emission Spectroscopy data.
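The isolated-variable problem this abstract describes can be illustrated with a small sketch (the dataset and thresholds below are invented for illustration; this is not the paper's method): classical PCA concentrates explained variance on a cluster of correlated variables, so an isolated variable gets almost no weight in the leading component.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
cluster_driver = rng.normal(size=n)
# Five strongly correlated variables sharing one driver...
cluster = np.column_stack(
    [cluster_driver + 0.05 * rng.normal(size=n) for _ in range(5)]
)
# ...plus one isolated variable, uncorrelated with the cluster.
isolated = rng.normal(size=n)
X = np.column_stack([cluster, isolated])
X -= X.mean(axis=0)

# PCA via SVD; explained-variance ratio per component.
_, s, Vt = np.linalg.svd(X, full_matrices=False)
ratio = s**2 / np.sum(s**2)

# The first component captures the cluster; the isolated variable's
# loading on it is near zero, so an anomaly expressed only in that
# variable is invisible in a one-component reduction.
print(ratio[0])        # dominant share of the variance
print(abs(Vt[0, -1]))  # near-zero loading for the isolated variable
```

Dropping all but the first component here would therefore discard exactly the direction in which such an anomaly appears.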

Relevance: 90.00%

Abstract:

Automatic detection systems do not perform as well as human observers, even on simple detection tasks. A potential solution to this problem is training vision systems on appropriate regions of interest (ROIs), in contrast to training on predefined and arbitrarily selected regions. Here we focus on detecting pedestrians in static scenes. Our aim is to answer the following question: can automatic vision systems for pedestrian detection be improved by training them on perceptually defined ROIs?

Relevance: 90.00%

Abstract:

We present ideas about creating a next-generation Intrusion Detection System (IDS) based on the latest immunological theories. The central challenge in computer security is determining the difference between normal and potentially harmful activity. For half a century, developers have protected their systems by coding rules that identify and block specific events. However, the nature of current and future threats, in conjunction with ever larger IT systems, urgently requires the development of automated and adaptive defensive tools. A promising solution is emerging in the form of Artificial Immune Systems (AIS): the Human Immune System (HIS) can detect and defend against harmful and previously unseen invaders, so can we not build a similar Intrusion Detection System for our computers? Presumably, such systems would then have the same beneficial properties as the HIS, such as error tolerance, adaptation and self-monitoring. Current AIS have been successful on test systems, but the algorithms rely on self-nonself discrimination, as stipulated in classical immunology. However, immunologists are increasingly finding fault with traditional self-nonself thinking, and a new 'Danger Theory' (DT) is emerging. This new theory suggests that the immune system reacts to threats based on the correlation of various (danger) signals, and it provides a method of 'grounding' the immune response, i.e. linking it directly to the attacker. Little is currently understood of the precise nature and correlation of these signals, and the theory is a topic of hot debate. The aim of this research is to investigate this correlation and to translate the DT into the realm of computer security, thereby creating AIS that are no longer limited by self-nonself discrimination. It should be noted that we do not intend to defend this controversial theory per se, although as a deliverable this project will add to the body of knowledge in this area. Rather, we are interested in its merits for scaling up AIS applications by overcoming self-nonself discrimination problems.

Relevance: 90.00%

Abstract:

Network intrusion detection sensors are usually built around low-level models of network traffic. This means that their output is similarly low level and, as a consequence, difficult to analyze. Intrusion alert correlation is the task of automating some of this analysis by grouping related alerts together. Attack graphs provide an intuitive model for such analysis. Unfortunately, alert flooding attacks can still cause a loss of service on sensors, and when performing attack-graph correlation, a large number of extraneous alerts can be included in the output graph. This obscures the fine structure of genuine attacks and makes them more difficult for human operators to discern. This paper explores modified correlation algorithms that attempt to minimize the impact of this attack.
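The kind of grouping that alert correlation performs can be sketched as follows (a minimal toy, assuming a simplified alert format with source and destination hosts; real sensors and the correlation algorithms discussed here are far richer):

```python
# Toy alert stream (format invented for illustration): a later alert
# extends an earlier chain when its source host equals the chain's
# current destination -- the intuition behind attack-graph correlation.
alerts = [
    {"id": 1, "src": "10.0.0.1", "dst": "10.0.0.2", "type": "scan"},
    {"id": 2, "src": "10.0.0.2", "dst": "10.0.0.3", "type": "exploit"},
    {"id": 3, "src": "10.0.0.9", "dst": "10.0.0.9", "type": "noise"},
]

def correlate(alerts):
    """Group alerts into chains: alert B joins chain C when C's
    frontier host equals B's source; otherwise B starts a new chain."""
    chains = []      # each chain is a list of alert ids
    frontier = {}    # chain index -> current frontier host
    for a in alerts:
        for i, host in frontier.items():
            if host == a["src"]:
                chains[i].append(a["id"])
                frontier[i] = a["dst"]
                break
        else:
            chains.append([a["id"]])
            frontier[len(chains) - 1] = a["dst"]
    return chains

print(correlate(alerts))  # [[1, 2], [3]]
```

An extraneous alert (id 3) lands in its own chain rather than polluting the genuine attack chain, which is the separation the modified algorithms aim to preserve under flooding.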

Relevance: 90.00%

Abstract:

The major function of this model is to access the UCI Wisconsin Breast Cancer dataset [1] and classify the data items into two categories: normal and anomalous. This kind of classification can be referred to as anomaly detection, which discriminates anomalous behaviour from normal behaviour in computer systems. One popular solution for anomaly detection is Artificial Immune Systems (AIS). AIS are adaptive systems inspired by theoretical immunology and observed immune functions, principles and models, which are applied to problem solving. The Dendritic Cell Algorithm (DCA) [2] is an AIS algorithm developed specifically for anomaly detection. It has been successfully applied to intrusion detection in computer security. It is believed that agent-based modelling is an ideal approach for implementing AIS, as intelligent agents can be natural representations of the immune entities in AIS. This model evaluates the feasibility of re-implementing the DCA in an agent-based simulation environment called AnyLogic, where the immune entities in the DCA are represented by intelligent agents. If this model can be implemented successfully, it will become possible to implement more complicated and adaptive AIS models in the agent-based simulation environment.
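The signal-weighting idea at the heart of the DCA can be caricatured in a few lines (the weights, threshold and data below are invented for illustration and do not reproduce the published algorithm): each item is sampled together with "danger" and "safe" signals, and it is labelled anomalous when accumulated danger evidence outweighs safe evidence.

```python
# Toy sketch of the DCA's core idea (simplified; parameters are
# illustrative assumptions, not the published algorithm's values).

def classify(items, danger_weight=1.0, safe_weight=2.0, threshold=0.5):
    """Label each (name, danger, safe) triple by comparing a weighted
    danger-minus-safe score against a fixed threshold."""
    labels = {}
    for name, danger, safe in items:
        score = danger * danger_weight - safe * safe_weight
        labels[name] = "anomalous" if score > threshold else "normal"
    return labels

items = [("a", 3.0, 0.2), ("b", 0.5, 2.0)]
print(classify(items))  # {'a': 'anomalous', 'b': 'normal'}
```

In the full DCA, a population of dendritic-cell agents accumulates these signals over time before committing to a label, which is what makes an agent-based environment such as AnyLogic a natural fit.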

Relevance: 90.00%

Abstract:

libtissue is a software system for implementing and testing AIS algorithms on real-world computer security problems. AIS algorithms are implemented as a collection of cells, antigen and signals interacting within a tissue compartment. Input data to the tissue come in the form of real-time events generated by sensors monitoring a system under surveillance, and cells can actively affect the monitored system through response mechanisms. libtissue is being used by researchers on a project at the University of Nottingham to explore the application of a range of immune-inspired algorithms to problems in intrusion detection. This talk describes the architecture and design of libtissue, along with the implementation of a simple algorithm and its application to a computer security problem.



Relevance: 90.00%

Abstract:

A new emerging paradigm of Uncertain Risk of Suspicion, Threat and Danger, observed across the field of information security, is described. Based on this paradigm, a novel approach to anomaly detection is presented. Our approach is based on a simple yet powerful analogy with the innate part of the human immune system, the Toll-Like Receptors. We argue that such receptors, incorporated as part of an anomaly detector, enhance the detector's ability to distinguish normal from anomalous behaviour. In addition, we propose that Toll-Like Receptors enable the classification of detected anomalies based on the types of attacks that cause the anomalous behaviour. Such a classification is either missing from the existing literature or not fit for the purpose of reducing the burden on the administrator of an intrusion detection system. For our model to work, we propose the creation of a taxonomy of the digital Acytota, based on which our receptors are created.
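The receptor-style classification proposed here can be sketched as a set of pattern matchers (the receptor names and matching rules below are hypothetical, invented purely for illustration): each "receptor" recognizes a broad class of anomalous behaviour, so a detected anomaly is both flagged and labelled with a suspected attack class.

```python
# Hypothetical receptors: each maps a name to a predicate over an
# event record (fields and thresholds invented for illustration).
RECEPTORS = {
    "flooding": lambda ev: ev["packets_per_s"] > 1000,
    "port_scan": lambda ev: ev["distinct_ports"] > 50,
}

def classify_anomaly(event):
    """Return the attack classes whose receptors fire on this event,
    or ['unclassified'] when no receptor matches."""
    matches = [name for name, fires in RECEPTORS.items() if fires(event)]
    return matches or ["unclassified"]

print(classify_anomaly({"packets_per_s": 5000, "distinct_ports": 3}))
# ['flooding']
```

Labelling anomalies this way, rather than only flagging them, is what reduces the triage burden on the IDS administrator.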

Relevance: 90.00%

Abstract:

Biologically-inspired methods such as evolutionary algorithms and neural networks are proving useful in the field of information fusion. Artificial immune systems (AISs) are a biologically-inspired approach that takes inspiration from the biological immune system. Interestingly, recent research has shown how AISs that use multi-level information sources as input data can be used to build effective algorithms for real-time computer intrusion detection. This research is based on biological information fusion mechanisms used by the human immune system and as such may be of interest to the information fusion community. The aim of this paper is to present a summary of some of the biological information fusion mechanisms seen in the human immune system, and of how these mechanisms have been implemented as AISs.

Relevance: 90.00%

Abstract:

The premise of automated alert correlation is to accept that false alerts from a low-level intrusion detection system are inevitable, and to use attack models to explain the output in an understandable way. Several algorithms exist for this purpose, using attack graphs to model the ways in which attacks can be combined. These algorithms can be classified into two broad categories: scenario-graph approaches, which create an attack model starting from a vulnerability assessment, and type-graph approaches, which rely on an abstract model of the relations between attack types. Some research into improving the efficiency of type-graph correlation has been carried out, but it has ignored the hypothesizing of missing alerts. We present a novel type-graph algorithm which unifies correlation and hypothesizing into a single operation. Our experimental results indicate that the approach is extremely efficient under intensive alert loads and produces compact output graphs comparable to other techniques.
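The unification of correlation and hypothesizing can be illustrated with a toy type chain (the chain and algorithm below are assumptions for illustration, not the paper's type-graph algorithm): when an observed alert's type cannot directly follow the previous one, hypothesized alerts are inserted for the skipped types.

```python
# Toy attack-type ordering standing in for a type graph
# (invented for illustration).
TYPE_CHAIN = ["scan", "exploit", "escalate", "exfiltrate"]

def correlate_with_hypotheses(observed):
    """Walk observed alert types along TYPE_CHAIN, inserting a
    hypothesized alert for every skipped intermediate type."""
    out = []
    prev_idx = -1
    for t in observed:
        idx = TYPE_CHAIN.index(t)
        for missing in TYPE_CHAIN[prev_idx + 1:idx]:
            out.append(("hypothesized", missing))
        out.append(("observed", t))
        prev_idx = idx
    return out

print(correlate_with_hypotheses(["scan", "exfiltrate"]))
# [('observed', 'scan'), ('hypothesized', 'exploit'),
#  ('hypothesized', 'escalate'), ('observed', 'exfiltrate')]
```

Because the gap-filling happens during the same pass as the correlation itself, no second reconciliation step over the output graph is needed, which is the efficiency argument behind a unified operation.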

Relevance: 90.00%

Abstract:

Organizations and their environments are complex systems. Such systems are difficult to understand and to predict. Even so, forecasting is a fundamental task for business management and for decision making, which always involves risk. Classical forecasting methods (among them linear regression, the Autoregressive Moving Average and exponential smoothing) impose assumptions such as linearity and stability in order to remain mathematically and computationally tractable. By various means, however, the limitations of such methods have been demonstrated. In recent decades, new forecasting methods have emerged that aim to embrace the complexity of organizational systems and their environments rather than avoid it. Among them, the most promising are bio-inspired forecasting methods (e.g. neural networks, genetic/evolutionary algorithms and artificial immune systems). This article aims to establish the state of the art of current and potential applications of bio-inspired forecasting methods in management.

Relevance: 80.00%

Abstract:

Chromatography combined with several different detection systems is one of the most widely used and best-performing analytical tools. Chromatography with tandem mass spectrometric detection gives highly selective and sensitive analyses and permits obtaining structural information about the analytes and their molar masses. Owing to these characteristics, this analytical technique is very efficient when used to detect substances at trace levels in complex matrices. In this paper we review instrumental and technical aspects of chromatography-tandem mass spectrometry and the state of the art of the technique as applied to the analysis of toxic substances in food.

Relevance: 80.00%

Abstract:

Master's degree in Electrical and Computer Engineering

Relevance: 80.00%

Abstract:

Norfloxacin and trimethoprim are two antibacterial antibiotics used to treat urinary, intestinal and respiratory infections. Most drugs require a dosage that guarantees safe and effective levels of action, so monitoring drugs and their metabolites is an essential and, in many cases, routine control in the treatment of a patient. In this work, two electrochemical sensors were developed for the detection of norfloxacin (NFX) and trimethoprim (TMP), using glassy carbon as the supporting surface. The search for new materials that give detection systems greater selectivity and sensitivity, while posing lower risks to the patient when used in devices that allow point-of-care analysis, is especially important and can be a crucial part of the clinical decision process. Molecularly imprinted polymers fit this profile, and their use has been increasingly evaluated. Molecular imprinting is a technology capable of producing polymers that incorporate the analyte molecules and which, after removal of the analyte with specific solvents, are endowed with specific sites for stereochemical recognition. The selection of pyrrole as the molecularly imprinted polymer (MIP) allowed the sensors for quantifying the antibiotics to be built successfully. To increase the sensitivity of the method, graphene was incorporated onto the electrode surface. This material has been widely used owing to its properties: its molecular structure, electrical conductivity and increased surface area are some of the characteristics that make it attractive for this project. The sensors developed were incorporated into electrochemical systems. The voltammetric methods applied were cyclic voltammetry, square-wave voltammetry and impedance measurements.
The analysis conditions were optimized with respect to the polymerization of pyrrole (polymer concentration, number of electropolymerization cycles and the potentials applied, incubation time, solvent for analyte removal), the pH of the drug solution, the antibiotic concentration range and the voltammetric parameters of the analysis methods. For each antibiotic, a non-imprinted electrode was also prepared, using the same polymerization procedure but without the analyte molecule, and was used as a control. The sensor developed for trimethoprim was used to quantify the drug in urine samples. The samples were analysed without any dilution, being only centrifuged to remove proteins and some interferents. The sensors built showed linear behaviour in the concentration range between 10⁻² and 10⁻⁷ mol/L. The results show good precision (standard deviations below 11%), and the detection limits were 8.317 and 1.307 mol/L for norfloxacin and trimethoprim, respectively. To validate the method, recovery assays were also carried out, yielding values above 94%.