875 results for Intrusion Detection Systems
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have been accessible via the Internet and Intranet. Many employees are working from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model essentially captures sequences and sub-sequences of user actions, their orderings, and temporal relationships that make them unique by providing a model of how each user typically behaves. Users are then continuously monitored during software operations. Large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences of user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. This tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
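The n-gram profiling described in this abstract can be illustrated with a short sketch. The Python fragment below is a hypothetical illustration, not the authors' Intruder Detector code: it builds a profile of action n-grams from a user's past web-log sessions and scores a new session by the fraction of its n-grams that the profile has never seen. The action names, the threshold and the scoring function are assumptions made only for illustration.

```python
from collections import Counter

def ngrams(actions, n=3):
    """Return the sliding n-grams of a sequence of user actions."""
    return [tuple(actions[i:i + n]) for i in range(len(actions) - n + 1)]

def build_profile(sessions, n=3):
    """Count how often each action n-gram occurs in a user's historical sessions."""
    profile = Counter()
    for session in sessions:
        profile.update(ngrams(session, n))
    return profile

def anomaly_score(session, profile, n=3):
    """Fraction of n-grams in the new session that were never seen in the profile."""
    grams = ngrams(session, n)
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if g not in profile)
    return unseen / len(grams)

# Hypothetical web-log action sequences for one user.
history = [
    ["login", "view_inbox", "open_report", "export_pdf", "logout"],
    ["login", "view_inbox", "search", "open_report", "logout"],
]
profile = build_profile(history)

current = ["login", "admin_panel", "add_user", "change_role", "logout"]
score = anomaly_score(current, profile)
if score > 0.5:  # illustrative threshold, not taken from the paper
    print(f"possible intruder: {score:.2f} of action n-grams are unseen")
```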
Abstract:
The analysis of system calls is one method employed by anomaly detection systems to recognise malicious code execution. Similarities can be drawn between this process and the behaviour of certain cells belonging to the human immune system, and these similarities can be applied to construct an artificial immune system. A recently developed hypothesis in immunology, the Danger Theory, states that our immune system responds to the presence of intruders through sensing molecules belonging to those invaders, plus signals generated by the host indicating danger and damage. We propose the incorporation of this concept into a responsive intrusion detection system, where behavioural information of the system and running processes is combined with information regarding individual system calls.
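The proposal sketched in this abstract, combining host-level "danger" signals with per-process system-call information, can be caricatured in a few lines: a process is reported only when its system-call behaviour looks unusual and the host is simultaneously signalling damage. The Python fragment below is a speculative sketch of that gating idea, not the authors' system; the signals, thresholds and rarity measure are invented for illustration.

```python
def syscall_rarity(trace, normal_pairs):
    """Fraction of adjacent system-call pairs never observed during normal runs."""
    pairs = list(zip(trace, trace[1:]))
    if not pairs:
        return 0.0
    return sum(1 for p in pairs if p not in normal_pairs) / len(pairs)

def danger_signal(cpu_load, error_rate):
    """Crude host-level damage indicator (illustrative thresholds)."""
    return cpu_load > 0.9 or error_rate > 0.05

def classify(trace, normal_pairs, cpu_load, error_rate):
    """Raise an alert only when unusual behaviour coincides with host danger."""
    rare = syscall_rarity(trace, normal_pairs)
    if rare > 0.3 and danger_signal(cpu_load, error_rate):
        return "alert"
    return "normal"

# Hypothetical data: call pairs seen during normal execution of a process.
normal_pairs = {("open", "read"), ("read", "close"), ("stat", "open")}
trace = ["open", "mmap", "mprotect", "execve"]
print(classify(trace, normal_pairs, cpu_load=0.95, error_rate=0.12))  # -> "alert"
```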
Abstract:
Artificial immune systems have previously been applied to the problem of intrusion detection. The aim of this research is to develop an intrusion detection system based on the function of Dendritic Cells (DCs). DCs are antigen presenting cells and key to the activation of the human immune system, behaviour which has been abstracted to form the Dendritic Cell Algorithm (DCA). In algorithmic terms, individual DCs perform multi-sensor data fusion, asynchronously correlating the fused data signals with a secondary data stream. Aggregate output of a population of cells is analysed and forms the basis of an anomaly detection system. In this paper the DCA is applied to the detection of outgoing port scans using TCP SYN packets. Results show that detection can be achieved with the DCA, yet some false positives can be encountered when simultaneously scanning and using other network services. Suggestions are made for using adaptive signals to alleviate this uncovered problem.
Abstract:
Artificial immune systems, more specifically the negative selection algorithm, have previously been applied to intrusion detection. The aim of this research is to develop an intrusion detection system based on a novel concept in immunology, the Danger Theory. Dendritic Cells (DCs) are antigen presenting cells and key to the activation of the human immune system. DCs perform the vital role of combining signals from the host tissue and correlating these signals with proteins known as antigens. In algorithmic terms, individual DCs perform multi-sensor data fusion based on time-windows. The whole population of DCs asynchronously correlates the fused signals with a secondary data stream. The behaviour of human DCs is abstracted to form the DC Algorithm (DCA), which is implemented using an immune inspired framework, libtissue. This system is used to detect context switching for a basic machine learning dataset and to detect outgoing portscans in real time. Experimental results show a significant difference between an outgoing portscan and normal traffic.
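In rough algorithmic terms, each artificial DC samples antigens (for example process or packet identifiers) together with input signals, fuses those signals with weighted sums over its time window and, once a migration threshold is reached, presents every antigen it has collected in either a "mature" (anomalous) or "semi-mature" (normal) context; the fraction of mature presentations per antigen then acts as the anomaly score. The sketch below is a heavily simplified, hypothetical Python rendering of that scheme, not the libtissue implementation; the weights, thresholds and signal values are placeholders.

```python
import random
from collections import defaultdict

# Placeholder weights for fusing (PAMP, danger, safe) signals into the
# costimulation (csm), semi-mature and mature output signals.
WEIGHTS = {"csm": (2, 1, 2), "semi": (0, 0, 3), "mature": (2, 1, -3)}
MIGRATION_THRESHOLD = 10

def fuse(pamp, danger, safe, kind):
    wp, wd, ws = WEIGHTS[kind]
    return wp * pamp + wd * danger + ws * safe

def dca(stream, n_cells=10):
    """stream: iterable of (antigen, pamp, danger, safe) tuples."""
    cells = [{"csm": 0.0, "semi": 0.0, "mature": 0.0, "antigens": []}
             for _ in range(n_cells)]
    presented = defaultdict(lambda: [0, 0])   # antigen -> [mature, total]

    for antigen, pamp, danger, safe in stream:
        cell = random.choice(cells)           # each item is sampled by one cell
        cell["antigens"].append(antigen)
        for kind in WEIGHTS:
            cell[kind] += fuse(pamp, danger, safe, kind)
        if cell["csm"] >= MIGRATION_THRESHOLD:
            anomalous = cell["mature"] > cell["semi"]
            for a in cell["antigens"]:
                presented[a][0] += int(anomalous)
                presented[a][1] += 1
            cell.update(csm=0.0, semi=0.0, mature=0.0, antigens=[])

    # MCAV-style score: fraction of presentations made in the mature context.
    return {a: m / t for a, (m, t) in presented.items() if t}
```

Feeding such a function a stream in which one antigen repeatedly co-occurs with high PAMP and danger values and low safe values would push that antigen's score towards 1, which is the kind of separation the portscan experiments rely on.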
Abstract:
A rapid and reliable polymerase chain reaction (PCR)-based protocol was developed for detecting zygosity of the 1BL/1RS translocation in hexaploid wheat. The protocol involved a multiplex PCR with 2 pairs of oligonucleotide primers, rye-specific Ris-1 primers and consensus 5S intergenic spacer (IGS) primers, and digestion of the PCR products with the restriction enzyme MseI. A small piece of alkali-treated intact leaf tissue is used as a template for the PCR, thereby eliminating the necessity for DNA extraction. The test is simple, highly sensitive, and rapid compared with the other detection systems for 1BL/1RS heterozygotes in hexaploid wheat. PCR results were confirmed with AFLP analyses. Diagnostic tests for the 1BL/1RS translocation based on Sec-1-specific ELISA, screening for the chromosome arm 1RS-controlled rust resistance locus Yr9, and the PCR test differed in their ability to detect heterozygotes. The PCR test and rust test detected more heterozygotes than the ELISA test. The PCR test is being used to facilitate S1 family recurrent selection in the Germplasm Enhancement Program of the Australian Northern Wheat Improvement Program. A combination of the PCR zygosity test with other markers currently being implemented in the breeding program makes this test economical for 1BL/1RS characterisation of S1 families.
Abstract:
The management of computer networks has become a vital factor for a network to operate in an efficient, productive and profitable way. Management involves monitoring and controlling systems so that they work as intended; configuration, monitoring and reconfiguration actions on the components are essential to improving performance, reducing downtime, improving security and performing accounting. In parallel, traffic classification is a highly relevant topic in several network-related activities, such as QoS prediction, security, monitoring, accounting, backbone capacity planning and intrusion detection. The variation of certain types of traffic can influence technical decisions in the area of network management, as well as political and social decisions. This work develops a study of the various protocols and of the management and traffic classification tools available to support the management activity. The study concluded with the proposal and implementation of a management solution suited to a real scenario that is very rich in its diversity of technologies and systems.
Abstract:
Network management and monitoring is a fundamental need in any organisation, whether large or small. Its importance must be reflected in efficiency and in the increase of useful information available, contributing to greater effectiveness in carrying out tasks in technologically advanced environments with high demands on the performance and availability of those technology resources. To achieve these goals it is essential to have appropriate network management tools, namely monitoring tools. Traffic classification also proves essential for guaranteeing the quality of communications and preventing unwanted attacks, thereby increasing communication security. In parallel, and especially in large organisations, keeping an inventory of the equipment used in a network is also relevant. This work aims to implement and put into operation an autonomous system for monitoring, protocol classification and inventorying. All of these tools are intended to support system administrators and IT technicians. The study of the applications best suited to the organisation's reality resulted in a gain of knowledge and learning that will contribute to better network performance, the main beneficiary of which will be the citizen.
Abstract:
Due to advances in information technology (e.g., digital video cameras, ubiquitous sensors), the automatic detection of human behaviors from video is a very recent research topic. In this paper, we perform a systematic and recent literature review on this topic, from 2000 to 2014, covering a selection of 193 papers that were searched from six major scientific publishers. The selected papers were classified into three main subjects: detection techniques, datasets and applications. The detection techniques were divided into four categories (initialization, tracking, pose estimation and recognition). The list of datasets includes eight examples (e.g., Hollywood action). Finally, several application areas were identified, including human detection, abnormal activity detection, action recognition, player modeling and pedestrian detection. Our analysis provides a road map to guide future research for designing automatic visual human behavior detection systems.
Abstract:
The development of a repetitive DNA probe for Babesia bigemina was reviewed. The original plasmid (p(Bbi)16) contained an insert of B. bigemina DNA of approximately 6.3 kb. This probe has been evaluated for specificity and analytical sensitivity by dot hybridization with isolates from Mexico, the Caribbean region and Kenya. A partial restriction map has been constructed and insert fragments have been subcloned and utilized as specific DNA probes. A comparison of 32P-labelled and non-radioactive DNA probes was presented. Non-radioactive detection systems that have been used include digoxigenin-dUTP incorporation and detection by colorimetric substrate methods. Derivatives of the original DNA probe have been utilized to detect B. bigemina infection in a) experimentally inoculated cattle, b) field-exposed cattle, and c) infected Boophilus microplus ticks, as well as d) in the development of a PCR amplification system.
Abstract:
The use of in situ techniques to detect DNA and RNA sequences has proven invaluable with paraffin-embedded tissue. Advances in non-radioactive detection systems have further made these procedures shorter and safer. We report the detection of Trypanosoma cruzi, the causative agent of Chagas disease, via indirect and direct in situ polymerase chain reaction within paraffin-embedded murine cardiac tissue sections. The presence of three T. cruzi-specific DNA sequences was evaluated: a 122 base pair (bp) sequence localized within the minicircle network, a 188 bp satellite nuclear repetitive sequence and a 177 bp sequence that codes for a flagellar protein. In situ hybridization alone was sensitive enough to detect all three T. cruzi-specific DNA sequences.
Abstract:
Although paraphrasing is the linguistic mechanism underlying many plagiarism cases, little attention has been paid to its analysis in the framework of automatic plagiarism detection. Therefore, state-of-the-art plagiarism detectors find it difficult to detect cases of paraphrase plagiarism. In this article, we analyse the relationship between paraphrasing and plagiarism, paying special attention to which paraphrase phenomena underlie acts of plagiarism and which of them are detected by plagiarism detection systems. With this aim in mind, we created the P4P corpus, a new resource which uses a paraphrase typology to annotate a subset of the PAN-PC-10 corpus for automatic plagiarism detection. The results of the Second International Competition on Plagiarism Detection were analysed in the light of this annotation. The presented experiments show that (i) more complex paraphrase phenomena and a high density of paraphrase mechanisms make plagiarism detection more difficult, (ii) lexical substitutions are the paraphrase mechanisms used the most when plagiarising, and (iii) paraphrase mechanisms tend to shorten the plagiarized text. For the first time, the paraphrase mechanisms behind plagiarism have been analysed, providing critical insights for the improvement of automatic plagiarism detection systems.
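The finding that lexical substitution dominates paraphrase plagiarism also explains why overlap-based detectors struggle: replacing a few content words destroys most shared word n-grams. The Python sketch below is a generic illustration of that effect using word 3-gram containment; it is not one of the PAN-PC-10 competition systems, and the example sentences and the way the measure is applied are invented for illustration.

```python
def word_ngrams(text, n=3):
    """Return the set of word n-grams of a passage."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def containment(suspicious, source, n=3):
    """Share of the suspicious passage's n-grams that also occur in the source."""
    susp, src = word_ngrams(suspicious, n), word_ngrams(source, n)
    return len(susp & src) / len(susp) if susp else 0.0

source = "the immune system responds to signals generated by the host"
copied = "the immune system responds to signals generated by the host"
paraphrased = "the immune system reacts to cues produced by the host"

print(containment(copied, source))       # 1.0 -> easily flagged
print(containment(paraphrased, source))  # low -> missed by a naive detector
```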
Abstract:
The Finnish Communications Regulatory Authority (Ficora) has issued regulation 13/2005M, according to which an Internet service provider must have predefined processes and operating models for detecting and filtering malicious traffic leaving its own customer connections towards the Internet. The regulation itself does not dictate how each Internet service provider is to meet these requirements. This master's thesis defines malicious traffic and studies methods by which it can be detected and filtered in the carrier networks of a local Internet service provider. Weighing the number of a local Internet service provider's customer connections, the severity of the threats and the cost of such a system, the thesis proposes an open-source intrusion detection tool for security incidents that require rapid reaction, together with automated rerouting for filtering. In addition, for traffic monitoring during normal working hours, an extended monitoring console is recommended, from which closer investigations can be initiated through visualised real-time views of the network's data flows.
Abstract:
The main objective of this work is to study how SIEM (Security Information and Event Management) systems can be used for the PCI DSS (Payment Card Industry Data Security Standard), primarily from a solution provider's point of view. The work was carried out at Cygate Oy. SIEM is a new information security solution area whose adoption is being accelerated by various official regulations, such as the PCI DSS standard imposed by the credit card companies. With SIEM systems, organisations can collect event data from network system components in a vendor-independent way, giving a centralised view of what has happened in the network. A SIEM handles both historical and real-time events and acts as a centralised management tool supporting the organisation's information security process. The PCI DSS standard is very detailed and meeting its requirements is not simple. Compliance is not achieved overnight; a compliance project can take from weeks to months. One of the most challenging parts of the standard is centralised log management. Organisations that process and transmit payment card data must collect all audit logs from their different systems so that the use of payment card data can be reliably tracked. The standard also requires organisations to use intrusion detection and vulnerability detection systems to detect and prevent possible data breaches. A SIEM system can fulfil the most demanding log management requirements of the PCI DSS standard and at the same time brings many detailed improvements that support other requirements of the standard; it can be useful, for example, in intrusion and vulnerability detection. Making use of a SIEM system in support of the standard is nevertheless very challenging: deployment requires careful advance planning and an understanding of the whole, on the part of both the solution provider and the organisation deploying the solution.