18 results for Intrusion Detection, Computer Security, Misuse

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Abstract:

Mobile malware is increasing with the growing number of mobile users. Mobile malware can perform several operations that lead to cybersecurity threats, such as stealing financial or personal information, installing malicious applications, sending premium SMS messages, creating backdoors, keylogging and crypto-ransomware attacks. Although many illegitimate applications are available on app stores, most mobile users remain careless about the security of their devices and become potential victims of these threats. Previous studies have shown that not every antivirus product is capable of detecting all threats, because mobile malware uses advanced techniques to avoid detection. A network-based IDS on the operator side adds an extra layer of security for subscribers and can detect many advanced threats by analyzing their traffic patterns. Machine learning (ML) gives these systems the ability to detect unknown threats for which signatures are not yet known. This research focuses on the evaluation of machine learning classifiers in network-based intrusion detection systems for mobile networks. Different techniques of network-based intrusion detection are discussed, with their advantages, disadvantages and the state of the art in hybrid solutions. Finally, an ML-based NIDS is proposed that works as a subsystem of the network-based IDS deployed by mobile operators and can help in detecting unknown threats and reducing false positives. Several ML classifiers were implemented and evaluated. The study focuses on Android-based malware, as Android is the most popular OS among users and hence the most targeted by cybercriminals. Classifiers based on supervised ML algorithms were built using a dataset containing labeled instances of relevant features, extracted from the traffic generated by samples of several malware families and benign applications. These classifiers detected malicious traffic patterns with a true positive rate of up to 99.6% in cross-validation tests. Several experiments were also conducted to detect unknown malware traffic and to measure false positives; the classifiers detected unknown threats with an accuracy of 97.5%. The classifiers could be integrated with current NIDSs, which use signature-based, statistical or knowledge-based techniques to detect malicious traffic. A technique to integrate the output of the ML classifier with a traditional NIDS is discussed and proposed as future work.
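
The abstract does not spell out the classifier or the exact feature set, but the core step is a supervised model trained on labeled traffic features. Below is a minimal sketch of that step, assuming hypothetical feature files and an arbitrary random-forest classifier, neither taken from the thesis:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical flow features extracted from malware and benign traffic
# captures, e.g. bytes sent/received, packet counts, flow duration.
X = np.loadtxt("flow_features.csv", delimiter=",")  # (n_flows, n_features)
y = np.loadtxt("labels.csv", dtype=int)             # 1 = malicious, 0 = benign

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Cross-validation as in the study; recall equals the true positive rate.
tpr = cross_val_score(clf, X, y, cv=10, scoring="recall")
print(f"Mean TPR over 10 folds: {tpr.mean():.3f}")

# Train on all labeled data, then score unseen traffic for unknown threats.
clf.fit(X, y)
alerts = clf.predict(np.loadtxt("unseen_flows.csv", delimiter=","))
```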

Relevance:

100.00%

Abstract:

The Finnish Communications Regulatory Authority (FICORA) has issued regulation 13/2005M, according to which an internet service provider must have predefined processes and operating models for detecting and filtering malicious traffic that originates from its own subscriber connections and is headed to the internet. The regulation itself does not specify how each internet service provider is to meet these requirements. This master's thesis defines malicious traffic and studies methods by which it can be detected and filtered in the operator networks of a local internet service provider. In proportion to the number of a local internet service provider's subscriber connections, the severity of the threats and the cost of such a system, this work recommends an open-source intrusion detection tool for security incidents that require rapid response, together with automated rerouting for filtering. In addition, for traffic monitoring during normal working hours, an extended monitoring console is recommended, from which more detailed investigations can be initiated through visualized real-time network traffic flows.
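
As a rough illustration of the recommended pairing of an intrusion detection tool with automated rerouting for filtering, the sketch below null-routes addresses flagged by an IDS. The alert file format and the use of Linux iproute2 blackhole routes (which require root privileges) are assumptions for illustration, not details from the thesis:

```python
import subprocess

def blackhole(address: str) -> None:
    """Install a blackhole route so traffic routed to the address is dropped
    (destination-based remote-triggered black hole filtering)."""
    subprocess.run(["ip", "route", "add", "blackhole", address], check=True)

# Hypothetical alert log: one offending IP address per line.
with open("ids_alerts.txt") as alerts:
    for line in alerts:
        ip = line.strip()
        if ip:
            blackhole(ip)
```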

Relevance:

100.00%

Abstract:

The main objective of this thesis is to study how SIEM (Security Information and Event Management) systems can be used with the PCI DSS (Payment Card Industry Data Security Standard), primarily from a solution provider's point of view. The work was carried out at Cygate Oy. SIEM is a new security solution area whose adoption is being accelerated by various official regulations, such as the PCI DSS standard set by the credit card companies. With SIEM systems, organizations can collect event data from network system components in a vendor-independent way and thereby see, from one central point, what has happened in the network. A SIEM handles both historical and real-time events and serves as a centralized management tool supporting the organization's security process. The PCI DSS standard is very detailed, and meeting its requirements is not simple: compliance is not achieved overnight, and a compliance project can take from weeks to months. One of the most challenging parts of the standard is centralized log management. Organizations that process and transmit payment card data must collect all audit logs from their various systems so that the use of payment card data can be monitored reliably. According to the standard, organizations must also use intrusion and vulnerability detection systems to detect and prevent possible data breaches. A SIEM system can satisfy the most demanding log management requirements of the PCI DSS standard, and at the same time it brings many detailed improvements that support other requirements of the standard; it can be useful, for example, in intrusion and vulnerability detection. Using a SIEM system to support the standard is nevertheless very challenging: deployment requires careful advance planning and an understanding of the whole, on the part of both the solution provider and the adopting organization.

Relevance:

100.00%

Abstract:

This thesis is about the detection of local image features. The research topic belongs to the wider area of object detection, a machine vision and pattern recognition problem in which an object must be detected (located) in an image. State-of-the-art object detection methods often divide the problem into separate interest point detection and local image description steps, but this thesis uses a different technique, leading to higher-quality image features which enable more precise localization. Instead of using interest point detection, the landmark positions are marked manually. The quality of the image features is therefore not limited by the interest point detection phase, and the learning of image features is simplified. The approach combines interest point detection and local description into a single detection phase. Computational efficiency of the descriptor is therefore important, which rules out many commonly used descriptors as too heavy. Multiresolution Gabor features are the main descriptor in this thesis, and improving their efficiency forms a significant part of the work. Actual image features are formed from descriptors by using a classifier which can then recognize similar-looking patches in new images. The main classifier is based on Gaussian mixture models. The classifiers are used in a one-class configuration, where there are only positive training samples and no explicit background class. The local image feature detection method has been tested with two freely available face detection databases and a proprietary license plate database, and the localization performance in these experiments was very good. Other applications of the same underlying techniques are also presented, including object categorization and fault detection.
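
A minimal sketch of the one-class classification step described above, assuming descriptors have already been computed at the manually marked landmarks; the component count and threshold rule are illustrative choices, not the thesis's:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical descriptors (e.g. multiresolution Gabor responses) computed
# at manually marked landmark positions in the training images.
train_desc = np.load("landmark_descriptors.npy")  # (n_patches, n_dims)

# Fit a GMM to positive samples only; no background class is modeled.
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(train_desc)

# Use the lowest-scoring 5% of training patches as the decision boundary.
threshold = np.percentile(gmm.score_samples(train_desc), 5)

def is_feature(descriptor: np.ndarray) -> bool:
    """True if a new patch looks like the learned landmark class."""
    return gmm.score_samples(descriptor.reshape(1, -1))[0] >= threshold
```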

Relevance:

100.00%

Abstract:

User authentication in information systems has been one of the cornerstones of information security for decades. The idea of a username and a password is the most cost-effective and most widely used way to maintain trust between an information system and its users. In the early days of information systems, when companies had only a few systems used by a small group of users, this model proved workable. Over the years the number of systems grew, and with it the number and variety of passwords. No one could predict how many password-related problems users would encounter, how much these problems would congest corporate help desks, and what kinds of security risks passwords would create in large enterprises. This master's thesis examines the problems caused by passwords in a large, global company. The problems are examined from four perspectives: people, technology, security and business. They are demonstrated by presenting the results of a survey of the company's employees, carried out as part of this thesis. A solution is presented in the form of a centralized password management system. Different features of the system are evaluated, and a proof-of-concept implementation is built to demonstrate the functionality of such a system.

Relevance:

100.00%

Abstract:

The internet has spread rapidly in recent years, and its security has become an extremely important factor. Alongside my security-related work duties I have written up the collected theory and the measures taken in the form of this master's thesis. The aim of the thesis is to give an overview of an internet service provider's information security and its management. The work covers the concept of information security, administrative and technical security, and the economic effects of information security. Information security is the cooperation of a company's personnel and technical solutions to protect the company's information against both internal and external threats. Security is only as strong as its weakest link, so maintaining it requires the contribution of the company's entire personnel. The goal of the work is to improve the target company's information security and to highlight the weak points whose improvement requires additional resources from company management. Airtight security is not a worthwhile goal; instead, the company should aim to minimize the total cost of information security.

Relevance:

100.00%

Abstract:

The large and growing number of digital images makes manual image search laborious. Only a fraction of images contain metadata that can be used to search for a particular type of image. The main research question of this thesis is therefore whether it is possible to learn visual object categories directly from images. Computers process images as long lists of pixels that have no clear connection to the high-level semantics which could be used in image search. The literature introduces various methods for extracting low-level image features, as well as approaches for connecting these low-level features with high-level semantics. One of these approaches, studied in this thesis, is called Bag-of-Features. In the Bag-of-Features approach, images are described using a visual codebook. The codebook is built by clustering descriptions of image patches. An image is then described by matching the descriptions of its patches against the visual codebook and counting the number of matches for each code. This thesis studies unsupervised visual object categorisation using the Bag-of-Features approach. The goal is to find groups of similar images, e.g., images that contain an object from the same category. The standard Bag-of-Features approach is improved by using spatial information and visual saliency. It was found that categorisation performance can be improved by using the spatial information of local features to verify matches. This process is computationally heavy, however, so the number of images considered in spatial matching must be limited, for example by pre-filtering with the Bag-of-Features method as in this study. Different approaches to saliency detection are studied, and a new method based on the Hessian-Affine local feature detector is proposed; it achieves results comparable with the current state of the art. Categorisation performance was further improved by using foreground segmentation based on saliency information, especially when the background could be considered clutter.
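
A minimal sketch of the Bag-of-Features description step, assuming patch descriptors are computed elsewhere; the codebook size is an arbitrary illustrative choice:

```python
import numpy as np
from sklearn.cluster import KMeans

# Descriptors pooled from patches of all training images, e.g. SIFT vectors.
all_desc = np.load("patch_descriptors.npy")  # (n_patches, 128)

# Build the visual codebook by clustering the patch descriptors.
codebook = KMeans(n_clusters=1000, random_state=0).fit(all_desc)

def bag_of_features(image_desc: np.ndarray) -> np.ndarray:
    """Histogram of visual-word occurrences for one image's descriptors."""
    words = codebook.predict(image_desc)
    hist = np.bincount(words, minlength=codebook.n_clusters)
    return hist / max(hist.sum(), 1)  # normalize so image size doesn't dominate

# Images with similar histograms can then be grouped, e.g. by clustering
# the histograms themselves for unsupervised categorisation.
```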

Relevance:

100.00%

Abstract:

A growing concern for organisations is how to deal with increasing amounts of collected data. With fierce competition and smaller margins, organisations that can fully realize the potential of the data they collect can gain an advantage over their competitors. It is almost impossible to avoid imprecision when processing large amounts of data, yet many available information systems are not capable of handling imprecise data, even though doing so offers various advantages. Expert knowledge stored as linguistic expressions is a good example of imprecise but valuable data, i.e. data that is hard to pin down to a definitive value. There is an obvious concern among organisations about how this problem should be handled; finding new methods for processing and storing imprecise data is therefore a key issue. It is equally important to show that tacit knowledge and imprecise data can be used with success, which encourages organisations to analyse their imprecise data. The objective of the research was therefore to explore how fuzzy ontologies could facilitate the exploitation and mobilisation of tacit knowledge and imprecise data in organisational and operational decision-making processes. The thesis introduces both practical and theoretical advances in how fuzzy logic, fuzzy ontologies and OWA operators can be utilized for different decision-making problems. It demonstrates how a fuzzy ontology can model tacit knowledge collected from wine connoisseurs; the approach generalizes to other practically important problems, such as intrusion detection. Additionally, a fuzzy ontology is applied in a novel consensus model for group decision making. By combining the fuzzy ontology with Semantic Web techniques, novel applications have been designed that show how the mobilisation of knowledge can successfully utilize imprecise data. Aggregation is undeniably an important part of decision-making processes, and in combination with a fuzzy ontology it provides a promising basis for demonstrating the benefits of handling imprecise data. The new aggregation operators defined in the thesis provide new possibilities to handle imprecision and expert opinions, demonstrated through both theoretical examples and practical implementations. The thesis shows the benefits of utilizing all the data one possesses, including imprecise data. By combining the concept of fuzzy ontology with the Semantic Web movement, it aspires to show the corporate world and industry the benefits of embracing fuzzy ontologies and imprecision.
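
Of the techniques named above, the OWA operator is compact enough to sketch. The weight vector below is an illustrative choice; the thesis defines its own operators, which are not reproduced here:

```python
import numpy as np

def owa(values: np.ndarray, weights: np.ndarray) -> float:
    """Weighted sum of the values sorted in descending order.

    Unlike a plain weighted mean, weights attach to ranks, not to sources:
    w = [1,0,...] gives max, w = [0,...,0,1] gives min, equal weights the mean.
    """
    assert np.isclose(weights.sum(), 1.0), "OWA weights must sum to one"
    return float(np.sort(values)[::-1] @ weights)

# Example: aggregate expert scores while emphasising the higher opinions.
scores = np.array([0.9, 0.4, 0.7, 0.6])
print(owa(scores, np.array([0.4, 0.3, 0.2, 0.1])))  # 0.73
```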

Relevance:

30.00%

Abstract:

This thesis studies techniques for the detection of distributed denial of service attacks, which during the last decade have become one of the most serious network security threats. To evaluate different detection algorithms and further improve them, their performance must be tested under conditions as close to real-life situations as possible. Currently the only feasible solution for large-scale tests is a simulated environment. The thesis describes an implementation of the recursive non-parametric CUSUM algorithm for the detection of distributed denial of service attacks in the ns-2 network simulator, the de facto standard for network simulation.
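
A minimal sketch of the recursive non-parametric CUSUM test, shown outside any simulator; the monitored statistic, drift and threshold are illustrative assumptions, while the thesis implements the test inside ns-2:

```python
def cusum_detect(samples, drift=0.5, threshold=5.0):
    """Return the index at which the cumulative positive drift of the
    monitored statistic (e.g. per-interval SYN counts, normalized so the
    legitimate mean sits below the drift) first exceeds the threshold."""
    s = 0.0
    for n, x in enumerate(samples):
        s = max(0.0, s + x - drift)  # recursive non-parametric CUSUM update
        if s > threshold:
            return n                 # alarm: change point detected
    return None

# Example: the statistic hovers near zero, then shifts upward under attack.
normal = [0.1, -0.2, 0.3, 0.0, -0.1]
attack = [1.5, 1.8, 1.6, 1.7, 1.9]
print(cusum_detect(normal + attack))  # alarms a few intervals into the attack
```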

Relevance:

30.00%

Abstract:

The increase in computational power and the emergence of new computer technologies have made local communication between personal trusted devices popular. In turn, this has given rise to security problems concerning the user data involved in such communication. One of the main aspects of data security assurance is the security of the software running on mobile devices. The aim of this work was to analyze security threats to PeerHood, software intended for personal communication between mobile devices regardless of the underlying network technologies. To reach this goal, risk-based software security testing was performed. The testing showed that the project has several security vulnerabilities, so PeerHood cannot be considered secure software. The analysis made in this work is the first step towards implementing PeerHood security mechanisms, as well as towards taking security into account in the project's development process.

Relevance:

30.00%

Abstract:

The use of digital content, such as video clips and images, has increased dramatically during the last decade, and local image features have been applied in a growing range of image and video retrieval applications. This thesis evaluates local features and applies them to image and video processing tasks. The results of the study show that 1) the performance of different local feature detector and descriptor methods varies significantly in object class matching, 2) local features can be applied to image alignment with results superior to the state of the art, 3) the local feature based shot boundary detection method produces promising results, and 4) the local feature based hierarchical video summarization method points to a promising new research direction. In conclusion, the thesis presents local features as a powerful tool in many applications; future work should concentrate on improving the quality of the local features.
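
As a sketch of result area 2, local feature based image alignment, the following uses ORB features with RANSAC homography estimation; this particular detector/descriptor pairing is an illustrative choice, not necessarily the one the thesis found superior:

```python
import cv2
import numpy as np

img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# Detect local features and compute their descriptors in both images.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match binary descriptors; cross-checking keeps only mutual best matches.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects mismatched features while estimating the homography.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
cv2.imwrite("aligned.png", aligned)
```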

Relevance:

30.00%

Abstract:

By leveraging cloud services, companies and organizations can significantly improve their efficiency and build novel business opportunities. Cloud computing offers various advantages to companies, while also posing some risks. The advantages offered by service providers mostly concern efficiency and reliability, while the risks mostly concern security. Security problems in the cloud still demand significant attention. As in any area of computing, they cannot be fully eliminated, but novel solutions can help service providers mitigate the potential threats to a large extent. Viewed from a high level, security problems fall into two groups: those that threaten the service user's security and privacy, and those that threaten the service provider's. Both kinds of threats should mostly be detected and mitigated by service providers. Looking a bit closer, mitigating security problems that target providers can protect both the provider and the user; nevertheless, the research community has mostly focused on solutions that protect cloud users. Significant research effort has been put into protecting cloud tenants against external attacks, but attacks originating from elastic, on-demand and legitimate cloud resources should also be taken seriously. The cloud-based botnet, or botcloud, is one of the prevalent cases of cloud resource misuse; unfortunately, some of the cloud's essential characteristics enable criminals to form reliable and low-cost botclouds in a short time. In this paper, we present a system that helps to detect distributed infected virtual machines (VMs) acting as elements of botclouds. Based on a set of botnet-related system-level symptoms, our system groups VMs; grouping helps to separate infected VMs from the others and narrows down the target group under inspection. Our system takes advantage of Virtual Machine Introspection (VMI) and data mining techniques.
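
A minimal sketch of the grouping step, assuming a hypothetical per-VM symptom vector gathered via introspection; the feature list and the clustering algorithm are illustrative stand-ins for the paper's data mining techniques:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical per-VM symptom vector gathered via VM introspection, e.g.
# [connection attempts/min, distinct destinations, failed DNS lookups,
#  outbound SMTP flows, CPU spikes].
symptoms = np.load("vm_symptoms.npy")  # (n_vms, n_symptoms)

# Cluster VMs with similar symptom profiles; -1 marks ungrouped outliers.
labels = DBSCAN(eps=0.8, min_samples=3).fit_predict(
    StandardScaler().fit_transform(symptoms)
)

# VMs sharing a dense cluster of botnet-like symptoms can be inspected
# together, narrowing the target group as described above.
for cluster in set(labels):
    members = np.where(labels == cluster)[0]
    print(f"group {cluster}: VMs {members.tolist()}")
```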

Relevance:

30.00%

Abstract:

The number of security violations is increasing, and a security breach can have irreversible impacts on a business. There are several ways to improve organizational security, but some of them may be difficult to comprehend. This thesis demystifies threat modeling as part of secure system development. Threat modeling enables developers to reveal previously undetected security issues in computer systems; it offers organizations a structured approach to finding and addressing threats against vulnerabilities. When implemented correctly, threat modeling reduces the number of defects and malicious attempts against the target environment. In this thesis the Microsoft Security Development Lifecycle (SDL) is introduced as an effective methodology for reducing defects in the target system. SDL is traditionally meant to be used in software development, but its principles can be partially adapted to IT infrastructure development. The Microsoft threat modeling methodology is an important part of SDL, and it is used in this thesis to find threats in the factory environment of Acme Corporation, a pseudonym for a company providing high-technology consumer electronics. The target of the threat modeling is the IT infrastructure of the factory's manufacturing execution system. The Microsoft methodology uses the STRIDE mnemonic and data flow diagrams to find threats. Threat modeling returned results that were important for the organization: Acme Corporation now has a more comprehensive understanding of the IT infrastructure of its manufacturing execution system. On top of the vulnerability-related results, threat modeling provided coherent views of the target system, and subject matter experts from different areas can now agree on its functions and dependencies. Threat modeling was recognized as a useful activity for improving security.
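
For orientation, the sketch below pairs the STRIDE categories with data flow diagram element types, following the commonly published Microsoft guidance; the element names are hypothetical, not taken from Acme's environment:

```python
STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information disclosure", "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Threat categories typically considered for each DFD element type.
APPLICABLE = {
    "process": "STRIDE",       # processes are exposed to all six
    "data_flow": "TID",
    "data_store": "TRID",      # R applies where the store holds audit logs
    "external_entity": "SR",
}

def threats_for(element: str, element_type: str) -> list[str]:
    """Enumerate candidate threats for one diagram element."""
    return [f"{element}: {STRIDE[c]}" for c in APPLICABLE[element_type]]

# Hypothetical element from a manufacturing-execution-system DFD.
for threat in threats_for("MES database", "data_store"):
    print(threat)
```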

Relevance:

30.00%

Abstract:

The vast majority of our contemporary society owns a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build software with security in mind. The problem is how to define secure software, or how to measure security. This thesis divides the problem into three research questions: first, how can we measure the security of software; second, what types of tools are available for measuring security; and finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. The thesis takes the perspective of software engineers in the software design phase, which means that code-level semantics and programming language specifics are not discussed; organizational policy, management issues and the software development process are also out of scope. The first two research problems were studied using a literature review, the third using a case study. The target of the case study was a Java-based email server, Apache James, whose changelog and security issue history were available and whose source code was accessible. The research revealed a consensus in the terminology of software security. Security verification activities are commonly divided into evaluation and assurance; the focus of this work was assurance, i.e. verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good: only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of software, but in practice they were limited to comparing different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design, and interpreting their results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues and are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out the areas where security metrics must improve if verification of security from the design phase is to be possible.