867 results for Intrusion Detection System (IDS)
Abstract:
Security Onion is a Network Security Monitoring (NSM) platform that provides multiple Intrusion Detection Systems (IDS), including Host IDS (HIDS) and Network IDS (NIDS). Many types of data can be acquired using Security Onion for analysis, including data related to hosts, the network, sessions, assets, alerts and protocols. Security Onion can be implemented as a standalone deployment, with server and sensor on a single machine, or with a master server and multiple sensors, allowing the system to be scaled as required. Many interfaces and tools are available for managing the system and analysing data, such as Sguil, Snorby, Squert and Enterprise Log Search and Archive (ELSA). These interfaces can be used to analyse alerts and captured events, which can then be exported for further analysis in Network Forensic Analysis Tools (NFAT) such as NetworkMiner, CapME or Xplico. The Security Onion platform also provides various methods of management, such as Secure Shell (SSH) access to the server and sensors and remote access via a web client. Together with the ability to replay and analyse sample malicious traffic, this makes Security Onion a suitable low-cost alternative for network security management. In this paper, we present a feature and functionality review of Security Onion in terms of types of data, configuration, interfaces, tools and system management.
Abstract:
Artificial immune systems have previously been applied to the problem of intrusion detection. The aim of this research is to develop an intrusion detection system based on the function of Dendritic Cells (DCs). DCs are antigen presenting cells and key to the activation of the human immune system, behaviour which has been abstracted to form the Dendritic Cell Algorithm (DCA). In algorithmic terms, individual DCs perform multi-sensor data fusion, asynchronously correlating the fused data signals with a secondary data stream. The aggregate output of a population of cells is analysed and forms the basis of an anomaly detection system. In this paper the DCA is applied to the detection of outgoing port scans using TCP SYN packets. Results show that detection can be achieved with the DCA, yet some false positives can be encountered when scanning and using other network services simultaneously. Suggestions are made for using adaptive signals to alleviate this newly uncovered problem.
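As a rough illustration of the per-cell processing described above, the following Python sketch shows how a single artificial DC might fuse PAMP, danger and safe signals into a costimulation value and a context while sampling antigens such as the destination ports of outgoing TCP SYN packets. This is not the authors' implementation; the weight values, migration threshold and signal magnitudes are illustrative assumptions.

```python
# Minimal sketch of one dendritic cell's signal fusion (illustrative weights and threshold).
import random

# Assumed weights loosely following the spirit of the DCA: safe signals suppress maturation,
# PAMP and danger signals promote it. Not the published weight matrix.
WEIGHTS = {
    "csm":    {"pamp": 2.0, "danger": 1.0, "safe": 2.0},
    "mature": {"pamp": 2.0, "danger": 1.0, "safe": -2.0},
}
MIGRATION_THRESHOLD = 10.0  # hypothetical value

class DendriticCell:
    def __init__(self):
        self.csm = 0.0          # cumulative costimulation
        self.context = 0.0      # cumulative mature-context value
        self.antigens = []      # sampled antigens (e.g. scanned port numbers)

    def expose(self, antigen, pamp, danger, safe):
        """Fuse one time step of signals and sample one antigen."""
        self.antigens.append(antigen)
        self.csm += (WEIGHTS["csm"]["pamp"] * pamp
                     + WEIGHTS["csm"]["danger"] * danger
                     + WEIGHTS["csm"]["safe"] * safe)
        self.context += (WEIGHTS["mature"]["pamp"] * pamp
                         + WEIGHTS["mature"]["danger"] * danger
                         + WEIGHTS["mature"]["safe"] * safe)

    def migrated(self):
        """The cell migrates once enough costimulation has accumulated."""
        return self.csm >= MIGRATION_THRESHOLD

    def presentation(self):
        """Return (antigens, context label) once the cell has migrated."""
        label = "mature" if self.context > 0 else "semi-mature"
        return self.antigens, label

if __name__ == "__main__":
    cell = DendriticCell()
    while not cell.migrated():
        port = random.randint(1, 1024)
        # Made-up signal values: a burst of SYNs raises the danger signal,
        # ordinary traffic raises the safe signal.
        cell.expose(antigen=port, pamp=0.0,
                    danger=random.uniform(0, 2), safe=random.uniform(0, 1))
    print(cell.presentation())
```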
Abstract:
Artificial immune systems, more specifically the negative selection algorithm, have previously been applied to intrusion detection. The aim of this research is to develop an intrusion detection system based on a novel concept in immunology, the Danger Theory. Dendritic Cells (DCs) are antigen presenting cells and key to the activation of the human immune system. DCs perform the vital role of combining signals from the host tissue and correlating these signals with proteins known as antigens. In algorithmic terms, individual DCs perform multi-sensor data fusion based on time windows. The whole population of DCs asynchronously correlates the fused signals with a secondary data stream. The behaviour of human DCs is abstracted to form the DC Algorithm (DCA), which is implemented using an immune-inspired framework, libtissue. This system is used to detect context switching in a basic machine learning dataset and to detect outgoing port scans in real time. Experimental results show a significant difference between an outgoing port scan and normal traffic.
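To complement the per-cell fusion sketched earlier, the sketch below illustrates the population-level step: presentations from many migrated cells are aggregated per antigen into a mature context antigen value (MCAV), and antigens whose MCAV exceeds a threshold are flagged as anomalous. This is an illustrative reconstruction, not the libtissue implementation, and the threshold is an assumption.

```python
# Sketch of aggregate analysis over a population of migrated cells (illustrative threshold).
from collections import defaultdict

def mcav(presentations, threshold=0.5):
    """presentations: list of (antigens, context_label) tuples from migrated cells.
    Returns the MCAV per antigen and the set of antigens flagged as anomalous."""
    mature = defaultdict(int)
    total = defaultdict(int)
    for antigens, label in presentations:
        for antigen in antigens:
            total[antigen] += 1
            if label == "mature":
                mature[antigen] += 1
    scores = {a: mature[a] / total[a] for a in total}
    anomalous = {a for a, s in scores.items() if s > threshold}
    return scores, anomalous

if __name__ == "__main__":
    demo = [([22, 80, 443], "semi-mature"),
            ([6000, 6001, 6002], "mature"),
            ([80, 6000], "mature")]
    print(mcav(demo))
```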
Abstract:
The work described in this Master's Degree thesis was born from a collaboration with Maserati S.p.A., an Italian luxury car maker headquartered in Modena, in the heart of the Italian Motor Valley, where I worked as an intern in the Virtual Engineering team between September 2021 and February 2022. This work proposes the validation, using real-world ECUs, of a Driver Drowsiness Detection (DDD) system prototype based on different detection methods, with the goal of overcoming input signal losses and system failures. Detection methods from different categories were chosen from the literature and merged in order to exploit the benefits of each, overcome their limitations and keep their degree of intrusiveness as low as possible so as to prevent any kind of driving distraction: an image-processing technique for detecting the driver's physical signals is used together with methods based on driver-vehicle interaction. A Driver-In-the-Loop simulator is used to gather real data on which a Machine Learning-based algorithm is trained and validated. These data come from tests that the company conducts in its daily activities, so confidential information about the simulator and the drivers is omitted. Although the impact of the proposed system is not yet remarkable and work remains to be done on all of its elements, the results indicate the main advantages of the system in terms of robustness against subsystem failures and signal losses.
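As an illustration of how such a fusion could degrade gracefully when one input is lost, the hedged sketch below combines a hypothetical camera-based drowsiness score with a hypothetical steering-based score, falling back to whichever signal is still available. Function names, weights and thresholds are assumptions for the sketch, not the thesis' actual model.

```python
# Illustrative fusion of two drowsiness estimators with fallback on signal loss.
from typing import Optional

def fuse_drowsiness(camera_score: Optional[float],
                    steering_score: Optional[float],
                    w_camera: float = 0.6,
                    w_steering: float = 0.4) -> Optional[float]:
    """Each score is in [0, 1], or None if that subsystem has failed or lost its signal.
    Returns a fused score, or None if no estimator is available."""
    if camera_score is not None and steering_score is not None:
        return w_camera * camera_score + w_steering * steering_score
    if camera_score is not None:          # steering data lost: rely on the camera
        return camera_score
    if steering_score is not None:        # camera occluded or failed: rely on steering
        return steering_score
    return None                           # both subsystems down

def drowsiness_alarm(camera_score, steering_score, threshold=0.7):
    score = fuse_drowsiness(camera_score, steering_score)
    return score is not None and score >= threshold

if __name__ == "__main__":
    print(drowsiness_alarm(0.8, 0.5))   # both signals present
    print(drowsiness_alarm(None, 0.9))  # camera signal lost, steering alone still triggers
```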
Abstract:
Since 2010, the proton radius has become one of the most interesting values to determine. The first evidence of an incomplete understanding of its internal structure was the measurement of the Lamb shift in muonic hydrogen, which led to a value 7σ lower than expected. A new avenue was thus opened and the Proton Radius Puzzle era began. The FAMU experiment is a project that tries to answer this puzzle by implementing a high-precision experimental apparatus. The work of this thesis is based on the study, construction and first characterization of a new detection system. Building on previous experiments and simulations, this apparatus is composed of 17 detectors positioned on a semicircular crown, together with the related electronic circuit. The detectors' characterization relies on a LabVIEW program controlling a digital potentiometer and on two further analog potentiometers, all three used to set the amplitude of each detector to a predefined value of around 1.2 V, as observed on the oscilloscope used to monitor the signal. This is required so that, in the final measurement, the sum of all the detector signals produces a single high peak. Each signal was acquired for almost half an hour, while the entire circuit was kept active for longer to verify its capacity to work over extended periods. The principal results of this thesis are the spectra of 12 detectors and the corresponding values of voltage, FWHM and resolution. The acquisitions also show another expected behaviour: the strong dependence of the detectors on temperature, demonstrating that a change in temperature causes fluctuations in the signal. In turn, these fluctuations affect the spectrum, shifting the curve and lowering the resolution. On the other hand, a measurement performed under stable conditions leads to agreement between the nominal and experimental values, as for detectors 10, 11 and 12 of our system.
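The centroid, FWHM and resolution quoted above can be extracted from each spectrum by fitting the main peak. The sketch below shows a minimal version of such a fit with NumPy/SciPy on synthetic data; the synthetic spectrum and the choice of a Gaussian peak shape are assumptions for illustration, not the thesis' acquisition format.

```python
# Minimal peak fit to extract centroid, FWHM and resolution from a spectrum (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, mean, sigma):
    return amplitude * np.exp(-(x - mean) ** 2 / (2 * sigma ** 2))

def characterize_peak(x, counts):
    """Fit a Gaussian and return (centroid, FWHM, resolution = FWHM / centroid)."""
    p0 = [counts.max(), x[np.argmax(counts)], (x[-1] - x[0]) / 10]
    (amplitude, mean, sigma), _ = curve_fit(gaussian, x, counts, p0=p0)
    fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(sigma)
    return mean, fwhm, fwhm / mean

if __name__ == "__main__":
    # Synthetic spectrum with a peak around 1.2 (e.g. volts), plus Poisson noise.
    x = np.linspace(0.8, 1.6, 400)
    counts = gaussian(x, 1000, 1.2, 0.03) + np.random.poisson(20, x.size)
    centroid, fwhm, resolution = characterize_peak(x, counts)
    print(f"centroid={centroid:.3f}, FWHM={fwhm:.3f}, resolution={resolution:.2%}")
```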
Abstract:
This paper proposes a novel method for authenticating users in secure buildings. The main objective is to investigate whether user actions in the built environment can produce consistent behavioural signatures upon which a building intrusion detection system could be based. In the process, three behavioural expressions were discovered: time-invariant, co-dependent and idiosyncratic.
Abstract:
The field of intrusion detection, although widely researched, does not answer some real problems such as attack levels, network size and complexity, fault tolerance, authentication and privacy, interoperability and standardization. A research effort at the Informatics Institute of UFRGS, more specifically in the Security Group (GSEG), aims to develop a distributed intrusion detection system with fault-tolerance characteristics. This project, named Asgaard, is the conception of a system whose goal is not restricted to being just another intrusion detection tool, but rather a platform that makes it possible to aggregate new modules and techniques, representing an advance over other detection systems currently under development. A topic not yet addressed in this project is the detection of sniffers on the network, which is a way of preventing an attack from proceeding to other hosts or interconnected networks, since an intruder normally installs a sniffer after a successful attack. This work discusses sniffer detection techniques and their scenarios, and evaluates the use of these techniques on a local network. The known techniques are tested in an environment with different operating systems, such as Linux and Windows, mapping the results regarding their effectiveness under diverse conditions.
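One classical technique in this literature is the bogus-MAC (or "ping") test: an ICMP echo request is sent in an Ethernet frame whose destination MAC does not belong to the target, so only a host whose interface is in promiscuous mode will process and answer it. The Scapy sketch below is a hedged illustration of that idea; the interface name and addresses are placeholders, and it is not code from the Asgaard project.

```python
# Illustrative promiscuous-mode (sniffer) check using the bogus-MAC ping technique.
# Requires root privileges and Scapy; addresses and interface are placeholders.
from scapy.all import Ether, IP, ICMP, srp1

def looks_promiscuous(target_ip: str, iface: str = "eth0", timeout: float = 2.0) -> bool:
    """Send an ICMP echo request inside a frame with a bogus destination MAC.
    A normal NIC drops the frame; a host sniffing in promiscuous mode may still reply."""
    probe = Ether(dst="ff:ff:ff:ff:ff:fe") / IP(dst=target_ip) / ICMP()
    reply = srp1(probe, iface=iface, timeout=timeout, verbose=False)
    return reply is not None

if __name__ == "__main__":
    suspect = "192.168.0.42"  # placeholder address
    print(f"{suspect} possibly sniffing: {looks_promiscuous(suspect)}")
```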
Abstract:
Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) are well-known and well-established tools in the information security world. However, the lack of integration with network equipment such as switches and routers ends up limiting the reach of these tools and demands careful sizing of the hardware resources, such as processing power, memory and high-speed network interfaces, used to implement them. Faced with the many limitations encountered by researchers and network administrators, the concept of Software Defined Networking (SDN) emerged, which, by separating the control and data planes, allows the operation of the network to be adapted to specific needs. Thus, given the standardization and flexibility proposed by SDN and the limitations presented by IPSs, this master's dissertation proposes IPSFlow, a framework that uses a network based on the SDN architecture and the OpenFlow protocol to create an IPS with broad coverage that can block traffic characterized by the IDS(s) as malicious at the switch closest to the origin. To validate the framework, experiments were carried out in the Mininet virtual environment using Snort as the IDS to analyse scan traffic generated by Nmap from one host to another. The collected results show that IPSFlow worked as planned, blocking 85% of the scan traffic.
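A rough sketch of the blocking step described above is given below: it parses a simplified, hypothetical Snort alert line for the source IP and installs a drop rule on an Open vSwitch bridge via `ovs-ofctl`. The bridge name, alert format and flow priority are assumptions for illustration; this is not the IPSFlow code itself.

```python
# Illustrative "block at the nearest switch" step: Snort alert -> OpenFlow drop rule.
# Assumes an Open vSwitch bridge reachable via ovs-ofctl; names and formats are placeholders.
import re
import subprocess

ALERT_RE = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3}):\d+ -> ")  # source "ip:port ->" in a fast-style alert

def block_source(alert_line: str, bridge: str = "s1") -> bool:
    """Extract the source IP from an alert line and install a drop flow on `bridge`."""
    match = ALERT_RE.search(alert_line)
    if not match:
        return False
    src_ip = match.group(1)
    flow = f"priority=100,ip,nw_src={src_ip},actions=drop"
    subprocess.run(["ovs-ofctl", "add-flow", bridge, flow], check=True)
    return True

if __name__ == "__main__":
    # Hypothetical alert produced while Nmap scans from 10.0.0.1.
    example = "09/27-10:00:00.000000  [**] [1:1000001:0] SCAN detected [**] 10.0.0.1:40000 -> 10.0.0.2:22"
    print(block_source(example))
```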
Abstract:
The term cloud originates from the telecommunications world, when providers began to use services based on virtual private networks (VPNs) for data communication. Cloud computing deals with computation, software, data access and storage services in such a way that the end user has no idea of the physical location of the data or of the configuration of the system on which they reside. Cloud computing is a recent trend in the IT world that moves computation and data away from desktops and laptops into large data centres. The NIST definition states that cloud computing is a model enabling on-demand network access to a shared pool of computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. With the large-scale spread of the Internet around the world, applications can now be delivered as services over the Internet; as a result, the overall cost of these services is reduced. The main goal of cloud computing is to make better use of distributed resources, combining them to achieve higher throughput and to solve large-scale computational problems. Companies that rely on cloud services save on infrastructure costs and on the maintenance of computing resources, since they transfer these aspects to the provider; in this way companies can concentrate exclusively on the business they care about.

As cloud computing becomes more popular, concerns are raised about the security issues introduced by this new model. The characteristics of this new deployment model differ widely from those of traditional architectures, and traditional security mechanisms turn out to be inefficient or useless. Cloud computing offers many benefits but is also more vulnerable to threats. There are many challenges and risks in cloud computing that increase the threat of data compromise. These concerns make companies reluctant to adopt cloud computing solutions, slowing their diffusion. In recent years much effort has gone into research on the security of cloud environments, on the classification of threats and on risk analysis; unfortunately, cloud problems occur at various levels and no single solution exists.

After a brief general introduction to cloud computing, the goal of this work is to provide an overview of the main vulnerabilities of the cloud model based on its characteristics, and then to carry out a risk analysis, from the customer's point of view, of using the cloud. In this way, weighing risks and opportunities, a customer can decide whether to adopt a cloud solution. Finally, a framework is presented that aims to solve a specific problem, that of malicious traffic on the cloud network.

The work is structured as follows: the first chapter gives an overview of cloud computing, highlighting its characteristics, architecture, service models, deployment models and possible problems concerning the cloud. The second chapter gives an introduction to security in the IT field and then moves specifically to security in the cloud computing model. The vulnerabilities deriving from the technologies and characteristics that define the cloud are considered, followed by a risk analysis. The risks are of different kinds, from purely technological ones to those deriving from legal or administrative issues, up to those not specific to the cloud but which nevertheless concern it. For each risk, the assets affected in case of attack are listed and a risk level is expressed, ranging from low to very high. Each risk must be weighed against the opportunities offered by the aspect from which that risk arises. The last chapter illustrates a framework for protecting the cloud's internal network by installing an Intrusion Detection System with pattern recognition and anomaly detection.
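To make the last point concrete, the small sketch below illustrates the combination of pattern recognition (signature matching) and anomaly detection (a simple statistical threshold on request rate) that such an IDS for the cloud's internal network could apply to each flow. The toy signatures, rate metric and threshold are invented purely for illustration and are not part of the framework described above.

```python
# Illustrative hybrid check: signature matching plus a simple rate-based anomaly test.
import statistics

SIGNATURES = [b"' OR 1=1", b"/etc/passwd"]  # toy payload signatures

def matches_signature(payload: bytes) -> bool:
    return any(sig in payload for sig in SIGNATURES)

def is_rate_anomalous(requests_per_minute: float, history: list, z_threshold: float = 3.0) -> bool:
    """Flag a flow whose request rate is far above its historical mean."""
    if len(history) < 2:
        return False
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0
    return (requests_per_minute - mean) / stdev > z_threshold

def classify_flow(payload: bytes, rate: float, history: list) -> str:
    if matches_signature(payload):
        return "alert: known attack pattern"
    if is_rate_anomalous(rate, history):
        return "alert: anomalous traffic volume"
    return "ok"

if __name__ == "__main__":
    print(classify_flow(b"GET /index.html", 600.0, [40.0, 55.0, 48.0, 52.0]))
```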
Abstract:
Society, as we know it today, is completely dependent on computer networks, the Internet and distributed systems, which place at our disposal the services necessary to perform our daily tasks. Moreover, often without our noticing, all of these services and distributed systems require network management systems. These systems allow us, in general, to maintain, manage, configure, scale, adapt, modify, edit, protect or improve the main distributed systems. Their role is secondary and remains unknown and transparent to users: they provide the support necessary to maintain the distributed systems whose services we use every day. If network management systems are not considered during the development stage of the main distributed systems, there can be serious consequences, or even total failures, in the development of those systems. It is therefore necessary to consider systems management within the design of distributed systems and to systematize its conception in order to minimize the impact of network management on distributed systems projects. In this paper, we present a method for formalizing the conceptual modelling of a network management system design through the use of formal modelling tools, making it possible, starting from the definition of processes, to identify those responsible for them. Finally, we propose a use case in which a conceptual model of a network intrusion detection system is designed.
Abstract:
Different types of ontologies, and the knowledge or metaknowledge connected to them, are considered and analysed with a view to their realization in contemporary information security systems (ISS), especially in the case of intrusion detection systems (IDS) and intrusion prevention systems (IPS). The human-centered methods INCONSISTENCY, FUNNEL, CALEIDOSCOPE and CROSSWORD are algorithmic or data-driven methods based on ontologies. All of them interact according to the competitive principle of 'survival of the fittest' and are controlled by a Synthetic MetaMethod (SMM). It is shown that data analysis frequently requires an act of creation, especially when applied to knowledge-poor environments, and that human-centered methods, which are often based on the use of dynamic ontologies, are well suited to such cases.
Abstract:
A new emerging paradigm of Uncertain Risk of Suspicion, Threat and Danger, observed across the field of information security, is described. Based on this paradigm, a novel approach to anomaly detection is presented. Our approach is based on a simple yet powerful analogy from the innate part of the human immune system, the Toll-Like Receptors. We argue that such receptors, incorporated as part of an anomaly detector, enhance the detector's ability to distinguish normal and anomalous behaviour. In addition, we propose that Toll-Like Receptors enable the classification of detected anomalies based on the types of attacks that perpetrate the anomalous behaviour. Classification of this kind is either missing from the existing literature or is not fit for the purpose of reducing the burden on the administrator of an intrusion detection system. For our model to work, we propose the creation of a taxonomy of the digital Acytota, on the basis of which our receptors are created.
Abstract:
The premise of automated alert correlation is to accept that false alerts from a low-level intrusion detection system are inevitable and to use attack models to explain the output in an understandable way. Several algorithms exist for this purpose which use attack graphs to model the ways in which attacks can be combined. These algorithms can be classified into two broad categories: scenario-graph approaches, which create an attack model starting from a vulnerability assessment, and type-graph approaches, which rely on an abstract model of the relations between attack types. Some research into improving the efficiency of type-graph correlation has been carried out, but it has ignored the hypothesizing of missing alerts. Our work presents a novel type-graph algorithm which unifies correlation and hypothesizing into a single operation. Our experimental results indicate that the approach is extremely efficient in the face of intensive alerts and produces compact output graphs comparable to other techniques.
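As a hedged illustration of unifying correlation with hypothesizing (not the paper's algorithm), the sketch below correlates a stream of typed alerts against a small type graph and, when an alert's type has no observed predecessor, inserts a hypothesized alert of a missing type. The type graph and alert stream are invented for demonstration.

```python
# Illustrative type-graph correlation with hypothesized missing alerts.
TYPE_GRAPH = {            # attack type -> types that may directly precede it
    "scan": [],
    "exploit": ["scan"],
    "privilege_escalation": ["exploit"],
    "exfiltration": ["privilege_escalation"],
}

def correlate(alerts):
    """alerts: list of (attack_type, host). Returns correlation edges plus hypothesized alerts."""
    seen = []             # tuples of (attack_type, host, hypothesized?)
    edges = []
    for attack_type, host in alerts:
        predecessors = TYPE_GRAPH.get(attack_type, [])
        matched = [s for s in seen if s[0] in predecessors and s[1] == host]
        if predecessors and not matched:
            # No predecessor observed: hypothesize a missing alert of the first valid type.
            hypothesis = (predecessors[0], host, True)
            seen.append(hypothesis)
            matched = [hypothesis]
        node = (attack_type, host, False)
        edges.extend((m, node) for m in matched)
        seen.append(node)
    hypothesized = [s for s in seen if s[2]]
    return edges, hypothesized

if __name__ == "__main__":
    stream = [("scan", "10.0.0.5"), ("privilege_escalation", "10.0.0.5")]
    edges, hypothesized = correlate(stream)
    print("hypothesized:", hypothesized)   # the missing "exploit" step on 10.0.0.5
    print("edges:", edges)
```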
Abstract:
Elders lose independence and wellbeing as functions such as hearing, vision, strength and coordination decline. These factors contribute to balance difficulties that eventually lead to falls. Injuries due to falls at this age are dangerous, since most of the time they cause a significant, and permanent, decrease in quality of life or, in extreme cases, death. In this context, a fall detection system can bring added value in assisting elderly people. This paper describes a system consisting of a wearable sensor unit, a smartphone and a website. When the sensor detects a fall, it sends an alert through the smartphone via Bluetooth 4.0 to notify family members or other stakeholders. The sensor device includes an inertial unit, a barometer, and a temperature and humidity sensor. The website displays the log of previous falls and enables the configuration of emergency contact numbers. The proposed fall detection system is one of multiple components within a larger project under development that offers a holistic perspective on falls; the complete wearable solution will also feature, among other things, physical protection (minimizing the impact of falls that do occur).
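A very simplified version of the detection logic such a wearable unit might run is sketched below: the accelerometer magnitude is checked for a free-fall phase followed shortly by an impact spike. The thresholds, window length and sample rate are illustrative assumptions, not the values used in the described prototype.

```python
# Illustrative threshold-based fall detector on accelerometer magnitude (in g).
# Thresholds, window length and sample rate are assumptions for the sketch.
FREE_FALL_G = 0.4     # magnitude well below 1 g suggests free fall
IMPACT_G = 2.5        # sharp spike suggests impact
SAMPLE_RATE_HZ = 50
MAX_GAP_S = 1.0       # impact must follow free fall within this window

def detect_fall(magnitudes):
    """magnitudes: iterable of acceleration magnitudes in g, sampled at SAMPLE_RATE_HZ.
    Returns True if a free-fall phase is followed by an impact within MAX_GAP_S."""
    max_gap_samples = int(MAX_GAP_S * SAMPLE_RATE_HZ)
    free_fall_index = None
    for i, g in enumerate(magnitudes):
        if g < FREE_FALL_G:
            free_fall_index = i
        elif g > IMPACT_G and free_fall_index is not None:
            if i - free_fall_index <= max_gap_samples:
                return True          # here the real device would raise the Bluetooth alert
    return False

if __name__ == "__main__":
    quiet = [1.0] * 100
    fall = [1.0] * 20 + [0.2] * 10 + [3.1] + [1.0] * 20
    print(detect_fall(quiet), detect_fall(fall))   # False True
```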