885 results for Information Security Manager
Abstract:
Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection; however, it cannot keep up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present our integrity-based detection for kernel-level malware, which does not rely on malware-specific features. We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. In our system, we focus on two classes of integrity properties: data invariants and the integrity of Kernel Queue (KQ) requests. We adopt static analysis for data invariant detection and overcome several technical challenges: field sensitivity, array sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity from the Linux 2.4.32 kernel and the Windows Research Kernel (WRK), with very low false positive and false negative rates. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiments, Invariant Monitor detects ten real-world Linux rootkits, nine real-world Windows malware samples, and one synthetic Windows malware sample. We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests. Based on the learned requests, we build KQguard to protect KQs: at runtime, KQguard rejects all unknown KQ requests that cannot be validated. We apply KQguard to WRK and the Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (achieving zero false positives against representative benign workloads after appropriate training, and very low false negatives against 125 real-world malware samples and nine synthetic attacks). 
In our system, Invariant Monitor and KQguard cooperate to protect data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware through the violations it causes during execution.
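The learn-then-validate step of the KQguard approach can be sketched in a few lines: record the legitimate kernel-queue requests observed during training, then reject any runtime request that was not seen. This is a minimal illustrative sketch; the tuple fields and all names below are invented, not the thesis's actual data structures.

```python
# Sketch of KQguard-style whitelist validation (hypothetical field names):
# training records the legitimate (queue, callback, module) tuples seen
# under benign workloads; at runtime, any request whose tuple was not
# recorded is rejected as unknown.
from typing import NamedTuple, Set

class KQRequest(NamedTuple):
    queue: str     # e.g. "timer", "dpc", "workqueue"
    callback: str  # symbol name of the callback function
    module: str    # kernel module that owns the callback

class KQGuard:
    def __init__(self) -> None:
        self.legitimate: Set[KQRequest] = set()

    def learn(self, req: KQRequest) -> None:
        """Training phase: record a request observed under a benign workload."""
        self.legitimate.add(req)

    def validate(self, req: KQRequest) -> bool:
        """Runtime phase: accept only requests recorded during training."""
        return req in self.legitimate

guard = KQGuard()
guard.learn(KQRequest("timer", "flush_to_ldisc", "tty"))

benign = guard.validate(KQRequest("timer", "flush_to_ldisc", "tty"))    # True
rootkit = guard.validate(KQRequest("timer", "hide_process_cb", "evil"))  # False
```

Exact-match validation like this is what makes training coverage matter: the abstract's "zero false positives after appropriate training" depends on the benign workload exercising every legitimate request.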
Abstract:
After years of deliberation, the EU Commission considerably sped up the reform process of a common EU digital policy in 2015 by launching the EU digital single market strategy. In particular, two core initiatives of the strategy were agreed upon: the General Data Protection Regulation (GDPR) and the Network and Information Security (NIS) Directive. A further initiative was launched addressing the role of online platforms. This paper focuses on the platform privacy rationale behind the data protection legislation, primarily based on the proposal for a new EU-wide General Data Protection Regulation. We analyse the legislative rationale from an Information Systems perspective to understand the role user data plays in creating platforms that we identify as "processing silos". Generative digital infrastructure theories are used to explain the innovative mechanisms thought to govern digitalization and the successful business models affected by it. We foresee continued judicial data protection challenges with the proposed Regulation as adoption of the "Internet of Things" continues. The findings of this paper illustrate that many of the existing issues can be addressed through legislation from a platform perspective. We conclude by proposing three modifications to the governing rationale, which would improve not only platform privacy for the data subject but also entrepreneurial efforts in developing intelligent service platforms. The first modification aims to improve service differentiation on platforms by lessening the ability of incumbent global actors to lock in the user base to their service or platform. The second modification posits limiting the current unwanted tracking ability of syndicates by separating authentication and data store services from any processing entity. 
The third modification proposes a change in how security and data protection policies are reviewed, suggesting a third-party auditing procedure.
Abstract:
The benefits of information technology (IT) have become more widely perceived over the last decades. Both IT and business managers rank subjects such as governance, IT-business alignment, and information security among their top priorities. Governance, in particular, is often handled with a technical approach that emphasizes protection against intrusions, antivirus systems, access controls, and other technical issues. IT risk management is commonly treated under this same approach; that is, its importance is diminished and it is delegated to IT departments. Over the last two decades, a new IT risk management perspective has arisen, bringing a holistic view of IT risk to the organization. According to this new perspective, the strategy formulation process should take IT risks into account. With most organizations' growing dependence on IT, the need for a better understanding of the subject becomes clearer. This work presents a study of three public organizations of the State of Pernambuco that investigates how those organizations manage their IT risks. Structured interviews were conducted with IT managers and later analyzed and compared against conceptual categories found in the literature. The results show that IT risk culture and IT governance are weakly understood and implemented in those organizations, where no IT risk methodology is formally defined or executed. Even so, most of the practices suggested in the literature were found, although without alignment to an IT risk management process.
Abstract:
This study examines the factors that influence public managers in the adoption of advanced practices related to Information Security Management. The research based its assertions on the security standard ISO 27001:2005 and on a theoretical model derived from the TAM (Technology Acceptance Model) of Venkatesh and Davis (2000). The method adopted was a field survey of national scope with the participation of eighty public administrators from Brazilian states, all of them managers and planners of state governments. The approach was quantitative, and the analysis methods were descriptive statistics, factor analysis, and multiple linear regression. The survey results showed a correlation between the constructs of the TAM model (ease of use, perceived usefulness, attitude, and intention to use) and agreement with the assertions made in accordance with ISO 27001, showing that these factors influence managers in the adoption of such practices. For the other independent variables of the model (organizational profile, demographic profile, and managers' behavior), no significant correlation with the assertions of the standard was identified, which indicates the need for further research using such constructs. It is hoped that this study contributes positively to the discussion of Information Security Management, the adoption of security standards, and the Technology Acceptance Model.
Abstract:
Software protection is an essential aspect of information security, needed to withstand malicious activities against software and to preserve software assets. However, software developers still lack a methodology for assessing deployed protections. To address this, we present a novel attack-simulation-based software protection assessment method for assessing and comparing protection solutions. Our solution relies on Petri nets to specify and visualize attack models, and we developed a Monte Carlo based approach to simulate attack processes and deal with uncertainty. Based on this simulation and estimation, a novel protection comparison model is proposed to compare different protection solutions. Finally, our attack-simulation-based software protection assessment method is presented. We illustrate the method by means of a software protection assessment process, demonstrating that our approach can provide a suitable software protection assessment for developers and software companies.
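The Monte Carlo idea behind such an assessment can be sketched simply: repeatedly simulate an attacker working through the steps of an attack model and estimate the success rate within a time budget. This is only an illustrative sketch, not the paper's Petri-net model; every step name, probability, and time below is invented.

```python
# Hedged sketch of Monte Carlo attack simulation: each attack step has a
# per-attempt success probability and a time cost (all values invented);
# a protection is "stronger" if the estimated success rate within a fixed
# attacker budget is lower.
import random

# Each step: (name, success probability per attempt, hours per attempt).
ATTACK_STEPS = [
    ("locate protected code", 0.9, 2.0),
    ("defeat obfuscation",    0.4, 8.0),
    ("patch integrity check", 0.6, 4.0),
]

def simulate_attack(budget_hours: float, rng: random.Random) -> bool:
    """One simulated run; True if every step succeeds within the budget."""
    spent = 0.0
    for _name, p, hours in ATTACK_STEPS:
        while True:
            spent += hours
            if spent > budget_hours:
                return False          # attacker ran out of time
            if rng.random() < p:
                break                 # step succeeded, move on
    return True

def estimate_success_rate(budget_hours: float, trials: int = 10_000,
                          seed: int = 0) -> float:
    """Monte Carlo estimate of the attacker's success probability."""
    rng = random.Random(seed)
    wins = sum(simulate_attack(budget_hours, rng) for _ in range(trials))
    return wins / trials
```

Comparing two candidate protections then reduces to comparing their estimated success rates (or expected attack times) under the same attacker budget, which is the role the paper assigns to its protection comparison model.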
Abstract:
Backup and information security in a Finnish micro-enterprise are matters that often do not receive sufficient attention due to a lack of expertise, time pressure, or scarce resources. Information security was chosen as one research topic because it is a current and widely discussed subject. Backup was chosen as the second topic because it is closely tied to information security and is a mandatory measure for guaranteeing the continuity of a company's business. This thesis examines the information security of a micro-enterprise and considers how it can be improved with simple methods. In addition, it examines the micro-enterprise's backup practices and the issues and problems related to them. The goal of the thesis is to study information security and backup at a general level and to create several backup solution alternatives based on the literature and theory. The thesis examines the company's information security and backup using a fictitious model company as the research environment, because this allows the research environment to be precisely defined and delimited. Because these subject areas are quite broad, the topic has been limited mainly to backup, possible information security threats, and the study of information security at a general level. Based on the research, two possible local backup solution alternatives and one remote backup solution alternative have been developed. The local backup alternatives are backup to an external hard drive and backup to a NAS (Network Attached Storage) server. The remote backup alternative is backup to a remote server, such as a cloud service. Although a NAS server is a local backup solution, it can also be used for remote backup depending on the device's location. The thesis briefly compares and evaluates the solution alternatives using evaluation criteria created on the basis of the research. 
A scoring model is also presented to facilitate the evaluation of the solutions and the selection of a suitable alternative. Each solution alternative has its own strengths and weaknesses, so choosing the right one is not always easy. The suitability of a solution for a particular company always depends on the company's own needs and requirements. Because different companies often have different backup requirements and needs, finding the best-suited backup solution can be difficult and time-consuming. The solution alternatives presented in this thesis serve as a guide and foundation for a micro-enterprise's backup planning, selection, decision-making, and system construction.
Abstract:
Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders—attention to detail and systemizing—may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings.
Abstract:
The main objective of this study was to create a calculation model for the cost and profit effects of identity and access management systems. The purpose of the model was to serve as a tool for system vendors, with which potential customers can be better convinced of the system's cost benefits in a sales situation. Very few comparable models measuring cost effects have been built, and the model built in this study differs from them by taking into account both the vendor's labor costs and information security risks. To verify the validity of the calculation model, it was tested in two companies that use a centralized identity management system. The testing was carried out by entering the companies' data into the model and comparing the model's results with the cost effects the companies had observed. Based on both the literature review and the testing of the model, the most significant cost factors of the identity management process are the working time spent creating and modifying identities and the decrease in employee productivity these operations cause during the process. The study indicates that centralized identity management systems make it possible to achieve significant cost savings in identity management operations, license costs, and IT service costs. Not all cost savings are tangible, however; some relate, for example, to the increased work efficiency the system enables. In addition to cost effects, identity management systems offer other benefits whose monetary value is very difficult to calculate. The challenges of using the calculation model are therefore identifying and valuing tangible and indirect cost savings, as well as the difficulty of assessing the total benefits of the investment.
Abstract:
Network intrusion detection systems are themselves becoming targets of attackers. Alert flood attacks may be used to conceal malicious activity by hiding it among a deluge of false alerts sent by the attacker. Although these attacks are very hard to stop completely, our aim is to present techniques that improve alert throughput and capacity to such an extent that the resources required to successfully mount the attack become prohibitive. The key idea is to combine a token bucket filter with a real-time correlation algorithm. The proposed algorithm throttles alert output from the IDS when an attack is detected. The attack graph used in the correlation algorithm ensures that alerts crucial to forming strategies are not discarded by throttling.
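The combination described above can be sketched in a few lines: a token bucket rate-limits the alert stream, while alerts the correlation step marks as strategy-crucial bypass the throttle. This is a minimal sketch under invented parameter values, not the paper's implementation.

```python
# Sketch of the key idea: a token bucket throttles the IDS alert flood,
# but alerts flagged as crucial by the attack-graph correlation step are
# never dropped. Rate and capacity values below are illustrative only.

class TokenBucket:
    def __init__(self, rate: float, capacity: float) -> None:
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

def emit(alert: dict, bucket: TokenBucket, now: float) -> bool:
    """Forward an alert if it is strategy-crucial or the bucket has tokens."""
    if alert.get("crucial"):       # flagged by the correlation attack graph
        return True
    return bucket.allow(now)

bucket = TokenBucket(rate=10.0, capacity=5.0)
# A flood of 100 noise alerts at t=0: only the burst capacity passes.
passed = sum(emit({"crucial": False}, bucket, 0.0) for _ in range(100))
# A crucial alert still gets through, even mid-flood.
crucial_ok = emit({"crucial": True}, bucket, 0.0)
```

The bypass for crucial alerts is what preserves the correlation algorithm's ability to reconstruct attack strategies while the flood itself is suppressed.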
Abstract:
The immune system provides an ideal metaphor for anomaly detection in general and computer security in particular. Based on this idea, artificial immune systems have been used for a number of years for intrusion detection, so far unfortunately with little success. However, these previous systems were largely based on immunological theory from the 1970s and 1980s, and over the last decade our understanding of immunological processes has vastly improved. In this paper we present two new immune-inspired algorithms based on the latest immunological discoveries, such as the behaviour of Dendritic Cells. The resultant algorithms are applied to real-world intrusion problems and show encouraging results. Overall, we believe there is a bright future for these next-generation artificial immune algorithms.
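The core of a Dendritic Cell inspired algorithm is signal fusion: each simulated cell accumulates PAMP, danger, and safe signals over a window of events, and its final context, mature (anomalous) or semi-mature (normal), depends on which weighted sum dominates. The sketch below is deliberately simplified and its weights are invented for illustration; it is not the paper's algorithm.

```python
# Greatly simplified sketch of DCA-style signal fusion (weights invented):
# safe signal suppresses the mature (anomalous) context, while PAMP and
# danger signals promote it.

WEIGHTS = {
    "mature":      {"pamp": 2.0, "danger": 1.0, "safe": -2.0},
    "semi_mature": {"safe": 1.0},
}

def cell_context(events):
    """Classify a window of (pamp, danger, safe) signal triples."""
    mature = semi = 0.0
    for pamp, danger, safe in events:
        mature += (WEIGHTS["mature"]["pamp"] * pamp
                   + WEIGHTS["mature"]["danger"] * danger
                   + WEIGHTS["mature"]["safe"] * safe)
        semi += WEIGHTS["semi_mature"]["safe"] * safe
    return "mature" if mature > semi else "semi_mature"

# High PAMP/danger with little safe signal -> anomalous context.
anomalous = cell_context([(0.9, 0.7, 0.1), (0.8, 0.6, 0.0)])
# Mostly safe signal -> normal context.
normal = cell_context([(0.0, 0.1, 0.9), (0.0, 0.0, 0.8)])
```

Accumulating signals over a window, rather than judging single events, is what gives these algorithms their tolerance to isolated noisy observations.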
Abstract:
Dissertation presented to the Escola Superior de Tecnologia of the Instituto Politécnico de Castelo Branco in fulfilment of the requirements for the degree of Master in Software Development and Interactive Systems, carried out under the scientific supervision of Professor Doutor Osvaldo Arede dos Santos of the Instituto Politécnico de Castelo Branco.
Abstract:
The search for patterns or motifs in data represents an area of key interest to many researchers. In this paper we present the Motif Tracking Algorithm (MTA), a novel immune-inspired pattern identification tool that is able to identify unknown motifs which repeat within time series data. The power of the algorithm derives from its use of a small number of parameters with minimal assumptions. The algorithm searches from a completely neutral perspective that is independent of the data being analysed and the underlying motifs. In this paper the Motif Tracking Algorithm is applied to the search for patterns within sequences of low-level system calls between the Linux kernel and the operating system's user space. The MTA is able to compress large system call data sets to a limited number of motifs which summarise that data. The motifs provide a resource from which a profile of executed processes can be built. The potential for these profiles and new implications for security research are highlighted. A higher-level system call language for measuring similarity between patterns of such calls is also suggested.
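The underlying idea of compressing a system-call trace into recurring motifs can be illustrated with a simple sliding-window count. This is not the MTA itself, whose immune-inspired matching is considerably more involved; it is a minimal sketch of what "repeating motifs in a trace" means, with an invented example trace.

```python
# Minimal sketch of motif discovery in a system-call trace: slide a
# fixed-width window over the trace and keep the subsequences that repeat,
# compressing the trace into a small set of recurring motifs.
from collections import Counter

def find_motifs(trace, width=3, min_count=2):
    """Return motifs of `width` calls occurring at least `min_count` times."""
    windows = Counter(
        tuple(trace[i:i + width]) for i in range(len(trace) - width + 1)
    )
    return {motif: n for motif, n in windows.items() if n >= min_count}

# Invented example trace of low-level calls from a file-reading process.
trace = ["open", "read", "close", "open", "read", "close",
         "mmap", "open", "read", "close"]
motifs = find_motifs(trace)  # {("open", "read", "close"): 3}
```

A profile built from such motifs describes what a process habitually does, which is what makes deviations from the profile useful as an anomaly signal.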