39 results for obfuscation


Relevance:

10.00%

Publisher:

Abstract:

N-gram analysis is an approach that investigates the structure of a program using bytes, characters, or text strings. A key issue with N-gram analysis is feature selection amidst the explosion of features that occurs as N is increased. The experiments in this paper represent programs as operational code (opcode) density histograms obtained through dynamic analysis. A support vector machine is used to create a reference model, which is used to evaluate two methods of feature reduction: 'area of intersect' and 'subspace analysis using eigenvectors'. The findings show that the relationships between features are complex and that simple statistical filtering does not provide a viable approach; however, eigenvector subspace analysis produces a suitable filter.
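
As a rough illustration of the pipeline described above, the following Python sketch builds opcode density histograms, applies an eigenvector (PCA) subspace filter, and fits an SVM reference model. The opcode list, library choices, and function names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): opcode density histograms from dynamic
# traces, eigenvector (PCA) subspace reduction, and an SVM reference model.
from collections import Counter
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

OPCODES = ["mov", "push", "pop", "call", "jmp", "add", "xor", "cmp"]  # illustrative subset

def opcode_density(trace):
    """Normalised opcode frequency histogram for one dynamic trace (list of mnemonics)."""
    counts = Counter(trace)
    total = max(len(trace), 1)
    return np.array([counts[op] / total for op in OPCODES])

def train_reference_model(traces, labels, n_components=4):
    X = np.vstack([opcode_density(t) for t in traces])
    pca = PCA(n_components=n_components)       # eigenvector subspace filter
    X_reduced = pca.fit_transform(X)
    model = SVC(kernel="linear").fit(X_reduced, labels)
    return pca, model
```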

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a new encryption scheme implemented at the physical layer of wireless networks employing orthogonal frequency-division multiplexing (OFDM). The scheme obfuscates the subcarriers by randomly reserving several of them for dummy data and resequences the training symbol according to a new secure sequence. Subcarrier obfuscation makes the OFDM transmission more secure and random, while training symbol resequencing protects the entire physical layer packet; it does not affect the normal synchronization and channel estimation functions of legitimate users, yet prevents eavesdroppers from performing these functions. The security analysis, based on the search space of an exhaustive key search, shows that the system is robust to various attacks. Our scheme is shown to outperform other OFDM physical layer encryption schemes in terms of search space, key rate, and complexity, and it offers options for users to customize the security level and key rate according to the available hardware resources. Its low complexity also makes the scheme suitable for resource-limited devices. Details of practical design considerations are highlighted by applying the approach to an IEEE 802.11 OFDM system case study.
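
A minimal sketch of the two ideas named above, assuming a 64-subcarrier frame and a key-seeded pseudorandom generator shared by transmitter and receiver; the constants and function names are illustrative, not the paper's design.

```python
# Illustrative sketch only (not the paper's implementation): key-seeded selection of
# dummy subcarriers and resequencing of the training symbol for one OFDM frame.
import numpy as np

N_SUBCARRIERS = 64   # IEEE 802.11-style frame size; value is an assumption
N_DUMMY = 8          # subcarriers reserved for dummy data; value is an assumption

def obfuscate_frame(data_symbols, training_symbol, key):
    """data_symbols: >=56 complex symbols; training_symbol: length-64 array."""
    rng = np.random.default_rng(key)                      # shared secret seeds the PRNG
    dummy_idx = rng.choice(N_SUBCARRIERS, N_DUMMY, replace=False)
    data_idx = np.setdiff1d(np.arange(N_SUBCARRIERS), dummy_idx)
    frame = np.empty(N_SUBCARRIERS, dtype=complex)
    frame[data_idx] = data_symbols[:len(data_idx)]
    frame[dummy_idx] = rng.standard_normal(N_DUMMY) + 1j * rng.standard_normal(N_DUMMY)
    permutation = rng.permutation(N_SUBCARRIERS)          # secure resequencing
    return frame, training_symbol[permutation]
```

A receiver holding the same key can regenerate the dummy indices and the permutation, so legitimate synchronization and channel estimation are unaffected.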

Relevance:

10.00%

Publisher:

Abstract:

This thesis analyses three major social policy reforms in Turkey in two areas: employment and social security. Using the "usages of Europe" approach, it develops an empirical analysis and offers a theoretical explanation of these changes, which were introduced during Turkey's accession process to the European Union. "Usages of Europe" is a Europeanization approach that focuses on the role of domestic actors within member and candidate states and on their use of European Union resources. The case studies in this thesis demonstrate the introduction of changes at the level of the welfare state; accordingly, the original approach is supplemented with concepts drawn from the literature on partisan politics, formal institutions, and policy legacies. The research uses process tracing to follow the reform of labour regulations, through the reconstitution of individual workers' rights and of Turkey's employment agency up to 2003, as well as the transformation of the social security system in 2008. These three reforms represent the major institutional and political changes in Turkey since 2001. To understand the "usages of Europe" in these policy reforms, the empirical analysis asks whether, when, and how Turkish actors used European Union resources, references, and policy developments during this dynamic reform process. Reform of the social security system and of labour regulations, together with the reconstitution of the employment agency, had been on Turkey's agenda since the 1990s. The labour regulation reform introduced flexible working arrangements and a revision of the Labour Law allowing job security legislation to be established. The reconstitution of the employment agency aimed to replace the defunct old institution with a modern one in order to introduce activation policies. The social security reform covers retirement pensions, the health system, and the administration of the social security institutions. The main findings show that the provision of European Union resources to Turkey increased from the recognition of its candidacy in 1999 until the launch of accession negotiations in 2005, which created a window of opportunity for the domestic actors involved in the reform processes. However, contrary to some of the original expectations of the "usages of Europe" approach, the findings show that the timing and fate of the "usages of Europe" depend on domestic actors' interests and strategies throughout the reform process, rather than on the phases of the process or the quantity of resources supplied by the European Union.

Relevance:

10.00%

Publisher:

Abstract:

SQL injection vulnerabilities pose a severe threat to web applications, as an SQL Injection Attack (SQLIA) can adopt new obfuscation techniques to evade and thwart countermeasures such as Intrusion Detection Systems (IDS). SQLIA gains access to the back-end database of vulnerable websites, allowing hackers to execute SQL commands in a web application, resulting in financial fraud and website defacement. The lack of existing models providing protection against SQL injection has motivated this paper to present a new and enhanced model against web database intrusions that use SQLIA techniques. We propose a novel concept of negative tainting along with SQL keyword analysis for preventing SQLIA and describe the model that we implemented. We have tested the proposed model on all types of SQLIA techniques by generating SQL queries containing both legitimate SQL commands and SQL injection attacks. Evaluations have been performed using three different applications. The results show that our model protects against 100% of the tested attacks before they even reach the database layer.
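
The abstract only names negative tainting and SQL keyword analysis without giving details, so the sketch below illustrates the keyword-analysis half in Python; the keyword set and function names are hypothetical, not the paper's model.

```python
# Hedged sketch of SQL keyword analysis over user-supplied input; an illustration
# of the general idea, not the authors' negative-tainting algorithm.
import re

SQL_KEYWORDS = {"select", "union", "insert", "update", "delete", "drop", "or", "and", "--", ";"}

def looks_like_sqlia(user_input: str) -> bool:
    """Flag input whose tokens contain SQL keywords or metacharacters."""
    tokens = re.split(r"[\s'\"()=]+", user_input.lower())
    return any(tok in SQL_KEYWORDS for tok in tokens if tok)

# Example: a classic tautology-based injection is flagged before reaching the database.
assert looks_like_sqlia("' OR '1'='1")
assert not looks_like_sqlia("alice")
```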

Relevance:

10.00%

Publisher:

Abstract:

Zero-day or unknown malware are created using code obfuscation techniques that can modify the parent code to produce offspring copies which have the same functionality but different signatures. Current techniques reported in the literature lack the capability of detecting zero-day malware with the required accuracy and efficiency. In this paper, we propose and evaluate a novel method of employing several data mining techniques to detect and classify zero-day malware with high accuracy and efficiency based on the frequency of Windows API calls. The paper describes the methodology employed for the collection of large data sets to train the classifiers and analyses the performance of the various data mining algorithms adopted for the study, using a fully automated tool developed in this research to conduct the experimental investigations and evaluation. Through these performance results we evaluate and discuss the advantages of one data mining algorithm over another for accurately detecting zero-day malware. The data mining framework employed in this research learns by analysing the behavior of existing malicious and benign code in large datasets. We have employed robust classifiers, namely the Naïve Bayes (NB) algorithm, the k-Nearest Neighbor (kNN) algorithm, the Sequential Minimal Optimization (SMO) algorithm with four different kernels (Normalized PolyKernel, PolyKernel, Puk, and Radial Basis Function (RBF)), the Backpropagation Neural Network algorithm, and the J48 decision tree, and have evaluated their performance. Overall, the automated data mining system implemented for this study achieved a high true positive (TP) rate of more than 98.5% and a low false positive (FP) rate of less than 0.025, which has not been achieved in the literature so far. This is much higher than the required commercial acceptance level, indicating that our novel technique is a major leap forward in detecting zero-day malware. The paper also offers future directions for researchers in exploring different aspects of obfuscation that are affecting the IT world today.
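
A hedged sketch of the kind of pipeline the abstract describes: API-call frequency vectors fed to several off-the-shelf classifiers. The API vocabulary is illustrative, and scikit-learn models stand in for the Weka SMO and J48 implementations named above; this is not the authors' tool.

```python
# Sketch of an assumed pipeline: frequency-of-Windows-API-call features fed to
# several classifiers, mirroring the comparison described in the abstract.
from collections import Counter
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

API_VOCAB = ["CreateFileW", "WriteProcessMemory", "RegSetValueExW",
             "VirtualAlloc", "LoadLibraryA"]   # illustrative subset

def api_frequency_vector(api_trace):
    counts = Counter(api_trace)
    return np.array([counts[a] for a in API_VOCAB], dtype=float)

def evaluate(traces, labels):
    X = np.vstack([api_frequency_vector(t) for t in traces])
    classifiers = {
        "NB": GaussianNB(),
        "kNN": KNeighborsClassifier(),
        "SVM (RBF)": SVC(kernel="rbf"),          # stand-in for Weka's SMO variants
        "decision tree": DecisionTreeClassifier()  # stand-in for Weka's J48
    }
    return {name: clf.fit(X, labels).score(X, labels) for name, clf in classifiers.items()}
```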

Relevance:

10.00%

Publisher:

Abstract:

Cybercrime has developed rapidly in recent years, and malware, one of the major security threats to computers, has existed since the very early days of computing. There is a lack of understanding of such malware threats and of the mechanisms that can be used to prevent and to detect them. The main contribution of this paper is a step towards addressing this gap by investigating the different techniques adopted by obfuscated malware, which is increasingly widespread and sophisticated, with zero-day exploits. In particular, by adopting certain effective detection methods, our investigations show how cybercriminals make use of file system vulnerabilities to inject hidden malware into the system. The paper also describes recent trends in Zeus botnets and the importance of anomaly detection in addressing the new Zeus generation of malware.

Relevance:

10.00%

Publisher:

Relevance:

10.00%

Publisher:

Abstract:

Detecting malicious software, or malware, is one of the major concerns in information security governance, as malware authors pose a major challenge to digital forensics by using a variety of highly sophisticated stealth techniques to hide malicious code in computing systems, including smartphones. Current detection techniques are futile, as forensic analysis of infected devices is unable to identify all the hidden malware, thereby resulting in zero-day attacks. This chapter takes a key step forward in addressing this issue and lays the foundation for deeper investigations in digital forensics. The goal of this chapter is, firstly, to unearth the recent obfuscation strategies employed to hide malware. Secondly, the chapter proposes innovative techniques, implemented as a fully automated tool and experimentally tested, to exhaustively detect hidden malware that leverages system vulnerabilities. Based on these research investigations, the chapter also arrives at an information security governance plan that would aid in addressing current and future cybercrime.

Relevance:

10.00%

Publisher:

Abstract:

Web applications have steadily grown in number and importance in areas such as finance, e-commerce, e-government, social media, medical data, e-business, academic activities, e-banking, e-shopping, and e-mail. Web application pages let users interact with the data stored on a website, inserting, deleting, and modifying content so that a web site becomes their own space. Unfortunately, these capabilities have attracted the writers of malicious software, who exploit such activities for financial gain and other malicious objectives. This chapter focuses on severe threats to web applications, specifically the Structured Query Language Injection Attack (SQLIA) and Zeus threats. These threats can adopt new obfuscation techniques to evade and thwart countermeasures such as Intrusion Detection Systems (IDS). Furthermore, this work explores and discusses techniques to detect and prevent web application malware.

Relevance:

10.00%

Publisher:

Abstract:

Malware has become a major threat in recent years due to the ease with which it spreads through the Internet. Malware detection has become difficult with the use of compression, polymorphic methods, and techniques that detect and disable security software. These and other obfuscation techniques pose a problem for detection and classification schemes that analyze malware behavior. In this paper we propose a distributed architecture to improve malware collection using different honeypot technologies to increase the variety of malware collected. We also present a daemon tool developed to grab malware distributed through spam and a pre-classification technique that uses antivirus technology to separate malware into generic classes. © 2009 SPIE.
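
The pre-classification step is only named in the abstract, so the following sketch shows one plausible reading: mapping raw antivirus detection labels to coarse generic classes before deeper analysis. The label patterns and class names are assumptions, not the paper's scheme.

```python
# Hedged sketch of antivirus-based pre-classification into generic malware classes.
import re

GENERIC_CLASSES = {
    r"trojan": "trojan",
    r"worm": "worm",
    r"backdoor": "backdoor",
    r"(virus|infector)": "virus",
}

def pre_classify(av_label: str) -> str:
    """Return a coarse class for a raw AV detection label, or 'unclassified'."""
    for pattern, family in GENERIC_CLASSES.items():
        if re.search(pattern, av_label.lower()):
            return family
    return "unclassified"

# e.g. pre_classify("Win32/Trojan.Generic.KD") -> "trojan"
```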

Relevance:

10.00%

Publisher:

Abstract:

A method for context-sensitive analysis of binaries that may have obfuscated procedure call and return operations is presented. Such binaries may use operators that directly manipulate the stack instead of native call and ret instructions to achieve equivalent behavior. Since the definition of context-sensitivity and the algorithms for context-sensitive analysis have thus far been based on the specific semantics associated with procedure call and return operations, classic interprocedural analyses cannot be used reliably for analyzing programs in which these operations cannot be discerned. A new notion of context-sensitivity is introduced that is based on the state of the stack at any instruction. While changes in 'calling' context are associated with transfers of control, and hence can be reasoned about in terms of paths in an interprocedural control flow graph (ICFG), the same is not true of changes in 'stack' context. An abstract interpretation based framework is developed to reason about stack-contexts and to derive analogues of call-strings based methods for context-sensitive analysis using stack-context. The method presented is used to create a context-sensitive version of Venable et al.'s algorithm for detecting obfuscated calls. Experimental results show that the context-sensitive version of the algorithm generates more precise results and is also computationally more efficient than its context-insensitive counterpart. Copyright © 2010 ACM.
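
To make the obfuscation concrete: replacing a call with an explicit push of the return address followed by a jmp produces the same stack effect without any call instruction, which is why a context keyed on the stack state remains usable. The toy Python tracker below illustrates this idea only; it is not the paper's abstract-interpretation framework.

```python
# Toy illustration: a stack-state "context" recognises an obfuscated call
# (push return address + jmp) even though no call instruction appears.
def stack_contexts(instructions):
    stack, contexts = [], []
    for op, arg in instructions:
        if op == "push":
            stack.append(arg)
        elif op in ("pop", "ret"):        # ret pops the (possibly pushed) return address
            stack.pop()
        elif op == "call":                # native call: arg is its return address
            stack.append(arg)
        # "jmp" leaves the stack untouched, so a pushed return address remains
        contexts.append(tuple(stack))     # the stack snapshot serves as the context
    return contexts

# Obfuscated call: push the return address, then jmp to the callee.
obfuscated = [("push", "ret_addr"), ("jmp", "f"), ("ret", None)]
print(stack_contexts(obfuscated))  # contexts mirror those of a genuine call/ret pair
```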

Relevance:

10.00%

Publisher:

Abstract:

Since Sharir and Pnueli, algorithms for context-sensitivity have been defined in terms of 'valid' paths in an interprocedural flow graph. The definition of valid paths requires atomic call and ret statements and encapsulated procedures. Thus, the resulting algorithms are not directly applicable when behavior similar to call and ret instructions may be realized using non-atomic statements, or when procedures do not have rigid boundaries, as in programs written in low-level languages such as assembly or RTL. We present a framework for context-sensitive analysis that requires neither atomic call and ret instructions nor encapsulated procedures. The framework decouples the transfer-of-control semantics and the context-manipulation semantics of statements. A new definition of context-sensitivity, called stack contexts, is developed. A stack context, which is defined using trace semantics, is more general than Sharir and Pnueli's interprocedural-path-based calling context. An abstract interpretation based framework is developed to reason about stack-contexts and to derive analogues of calling-context based algorithms using stack-context. The framework is suitable for deriving algorithms for analyzing binary programs, such as malware, that employ obfuscations with the deliberate intent of defeating automated analysis. It is used to create a context-sensitive version of Venable et al.'s algorithm for analyzing x86 binaries without requiring that a binary conform to a standard compilation model for maintaining procedures, calls, and returns. Experimental results show that a context-sensitive analysis using stack-context performs just as well for programs where the use of Sharir and Pnueli's calling-context produces correct approximations. However, if those programs are transformed to use call obfuscations, a context-sensitive analysis using stack-context still provides the same, correct results without any additional overhead. © Springer Science+Business Media, LLC 2011.

Relevance:

10.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

10.00%

Publisher:

Abstract:

It is very often the case that programs require passing, maintaining, and updating some notion of state. Prolog programs often implement such stateful computations by carrying this state in predicate arguments (or, alternatively, in the internal database). This often causes code obfuscation, complicates code reuse, introduces dependencies on the data model, and is prone to incorrect propagation of the state information among predicate calls. To partly solve these problems, we introduce contexts as a consistent mechanism for specifying implicit arguments and their threading through clause goals. We propose a notation and an interpretation for contexts, ranging from single goals to complete programs, give an intuitive semantics, and describe a translation into standard Prolog. We also discuss a particular lightweight implementation in Ciao Prolog, and we show the usefulness of our proposals on a series of examples and applications, including code directly using contexts, DCGs, extended DCGs, logical loops, and other custom control structures.
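
The paper's mechanism is Prolog-specific, so the sketch below is only a loose Python analogy of the underlying idea: state threaded explicitly through every call versus state carried in an implicit context that callers no longer pass around. All names here are hypothetical.

```python
# Python analogy only (not the paper's Prolog contexts): explicit state threading
# versus an implicit context that hides the threading from callers.
import contextvars

# Explicit threading: every function must accept and return the state.
def step_explicit(state, item):
    return state + [item]

# Implicit context: the state is threaded behind the scenes.
_state = contextvars.ContextVar("state")

def step_implicit(item):
    _state.set(_state.get() + [item])

def run_implicit(items):
    _state.set([])
    for it in items:
        step_implicit(it)        # no state argument in the call
    return _state.get()

# run_implicit(["a", "b"]) == ["a", "b"]
```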

Relevance:

10.00%

Publisher:

Abstract:

The introduction of agent technology raises several security issues that are beyond the capability and scope of conventional security mechanisms, but research in protecting the agent from malicious host attacks is evolving. This research proposes two approaches to protecting an agent from being attacked by a malicious host. The first approach consists of an obfuscation algorithm that protects the confidentiality of an agent and makes it more difficult for a malicious host to spy on the agent. The algorithm uses multiple polynomial functions with multiple random inputs to convert an agent's critical data to a value that is meaningless to the malicious host. The effectiveness of the obfuscation algorithm is enhanced by the addition of noise code. The second approach consists of a mechanism that protects the integrity of the agent, using state information recorded during the agent's execution in a remote host environment to detect a manipulation attack by a malicious host. Both approaches are implemented using a master-slave agent architecture that operates on a distributed migration pattern. Two sets of experimental tests were conducted. The first set measures the migration and migration-plus-computation overheads of the itinerary and distributed migration patterns. The second set measures the security overhead of the proposed approaches. The protection of the agent is assessed by analysing its effectiveness under known attacks. Finally, an agent-based application, known as the Secure Flight Finder Agent-based System (SecureFAS), is developed in order to prove the function of the proposed approaches.
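
As a rough illustration of the first approach, the sketch below hides a critical value behind polynomials evaluated at random inputs; the polynomials, the combination rule, and the omission of the noise-code step are assumptions for illustration, not the thesis' actual algorithm.

```python
# Hedged sketch: hide a critical value using multiple polynomial functions with
# random inputs, so the stored number is meaningless without the polynomials and inputs.
import random

def obfuscate(critical_value, polynomials):
    random_inputs = [random.randint(1, 1000) for _ in polynomials]
    mask = sum(poly(x) for poly, x in zip(polynomials, random_inputs))
    return critical_value + mask, random_inputs   # the agent owner retains the inputs

def deobfuscate(obfuscated_value, random_inputs, polynomials):
    mask = sum(poly(x) for poly, x in zip(polynomials, random_inputs))
    return obfuscated_value - mask

polys = [lambda x: 3 * x**2 + 7, lambda x: 5 * x + 11]   # illustrative polynomials
hidden, inputs = obfuscate(42, polys)
assert deobfuscate(hidden, inputs, polys) == 42
```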