897 results for Malicious software
Abstract:
This study examines information security as a process (information securing) in terms of what it does, especially beyond its obvious role as protector. It investigates concepts related to an ‘ontology of becoming’, and examines what it is that information securing produces. The research is theory driven and draws upon three fields: sociology (especially actor-network theory), philosophy (especially Gilles Deleuze and Félix Guattari’s concepts of ‘machine’, ‘territory’ and ‘becoming’, and Michel Serres’s concept of the ‘parasite’), and information systems science (the subject of information security). Social engineering (used here in the sense of breaking into systems through non-technical means) and software cracker groups (groups which remove copy protection systems from software) are analysed as examples of breaches of information security. Firstly, the study finds that information securing is always interruptive: every entity (regardless of whether or not it is malicious) that becomes connected to information security is interrupted. Furthermore, every entity changes, becomes different, as it makes a connection with information security (ontology of becoming). Moreover, information security organizes entities into different territories. However, the territories – the insides and outsides of information systems – are ontologically similar; the only difference is in the order of the territories, not in the ontological status of the entities that inhabit them. In other words, malicious software is ontologically similar to benign software; both are users in terms of a system. The difference lies in the order of the system and its users: who uses the system and what the system is used for. Secondly, the research shows that information security is always external (in the terms of this study, a ‘parasite’) to the information system that it protects. Information securing creates and maintains order while simultaneously disrupting the existing order of the system that it protects. For example, in terms of the software itself, the implementation of a copy protection system is an entirely external addition. In fact, this parasitic addition makes the software different. Thus, information security disrupts that which it is supposed to defend from disruption. Finally, it is asserted that, in its interruption, information security is a connector that creates passages; it connects users to systems while also creating its own threats. For example, copy protection systems invite crackers, and information security policies entice social engineers to use and exploit information security techniques in novel ways.
Abstract:
Malware has become a major threat in recent years because of how easily it spreads through the Internet. Malware detection has become difficult with the use of compression, polymorphic methods, and techniques for detecting and disabling security software. These and other obfuscation techniques pose a problem for detection and classification schemes that analyze malware behavior. In this paper we propose a distributed architecture to improve malware collection, using different honeypot technologies to increase the variety of malware collected. We also present a daemon tool developed to capture malware distributed through spam, and a pre-classification technique that uses antivirus technology to separate malware into generic classes. © 2009 SPIE.
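As an illustration of the pre-classification idea described above, the sketch below groups collected samples into coarse classes from their antivirus labels. It is not the authors' implementation: the class names and the label-parsing rule are assumptions.

```python
import hashlib
from collections import defaultdict

# Illustrative generic classes; the label-to-class mapping is an assumption,
# not the scheme used in the paper.
GENERIC_CLASSES = ("worm", "trojan", "backdoor", "virus", "downloader")

def pre_classify(samples):
    """Group collected samples into coarse classes using their AV labels.

    `samples` is an iterable of (raw_bytes, av_label) pairs, e.g. the label
    string returned by an antivirus scan of a honeypot-collected binary.
    """
    classes = defaultdict(list)
    for raw, label in samples:
        sha256 = hashlib.sha256(raw).hexdigest()
        label_lc = label.lower()
        for cls in GENERIC_CLASSES:
            if cls in label_lc:
                classes[cls].append(sha256)
                break
        else:
            classes["unknown"].append(sha256)
    return classes

if __name__ == "__main__":
    demo = [(b"\x4d\x5a...", "Win32.Worm.Allaple.Gen"),
            (b"\x4d\x5a...", "Trojan.Downloader.Agent")]
    print(pre_classify(demo))
```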
Abstract:
Given the exponential growth in the spread of viruses across the World Wide Web (Internet) and their increasing complexity, more sophisticated systems are needed for extracting malware fingerprints (for malicious software, a fingerprint is the unique information that identifies a sample, in the same way a fingerprint identifies a human). The architecture and protocol proposed here aim to produce more effective fingerprints, using techniques that make a single fingerprint sufficient to cover an entire group of viruses. This efficiency comes from a hybrid fingerprint-extraction approach that takes into account both the code and the behavior of the sample under analysis. The main targets of the proposed system are polymorphic and metamorphic malware, given the difficulty of creating fingerprints that identify an entire family of these viruses; this difficulty stems from techniques whose main objective is to defeat analysis by experts. The parameters chosen for the behavioral analysis are: the file system; the Windows Registry; RAM dumps; and API calls. For the code analysis, the objective is to divide the virus binary into blocks from which hashes can be extracted. This technique considers each instruction and its neighborhood, and is characterized as precise. In short, this information is intended to predict and profile the behavior of the virus and then create a fingerprint based on the degree of kinship between samples (a threshold), with the goal of increasing the ability to detect viruses that are not part of the same family.
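The block-hashing side of such a hybrid fingerprint can be illustrated with a minimal sketch: split the binary into fixed-size blocks, hash each block, and measure the "kinship" between two samples as the overlap of their block-hash sets. The block size and threshold below are illustrative assumptions, not the values used in the proposed system.

```python
import hashlib

BLOCK_SIZE = 256          # bytes per block; illustrative choice
KINSHIP_THRESHOLD = 0.4   # minimum shared-block ratio to call two samples related

def block_hashes(binary: bytes) -> set[str]:
    """Hash fixed-size blocks of a binary; the set of hashes is its fingerprint."""
    return {
        hashlib.md5(binary[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(binary), BLOCK_SIZE)
    }

def kinship(fp_a: set[str], fp_b: set[str]) -> float:
    """Jaccard similarity between two block-hash fingerprints."""
    if not fp_a or not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def same_family(sample: bytes, family_fingerprint: set[str]) -> bool:
    """Declare kinship when the shared-block ratio exceeds the threshold."""
    return kinship(block_hashes(sample), family_fingerprint) >= KINSHIP_THRESHOLD
```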
Abstract:
Our goal within this project is to develop a powerful electronic system capable of claiming, with high certainty, whether malicious software is running along with a workstation's normal activity. The new product will be based on measuring the supply current drawn by a workstation from the grid. A unique technique is proposed in these proceedings that analyses the supply current to produce information about the state of the workstation and to indicate the presence of malicious software running alongside the legitimate applications. The testing is based on comparing the behavior of a fault-free workstation (established in advance) with the behavior of the potentially faulty device.
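A minimal sketch of the comparison step might look as follows, assuming the fault-free profile and the test trace are sampled supply-current waveforms and that per-window means are an adequate summary; the window length and threshold are illustrative assumptions.

```python
import numpy as np

WINDOW = 1000        # samples per analysis window; illustrative
THRESHOLD = 4.0      # allowed deviation, in baseline standard deviations

def window_features(trace: np.ndarray) -> np.ndarray:
    """Mean supply current per window of the sampled trace."""
    n = len(trace) // WINDOW
    return trace[: n * WINDOW].reshape(n, WINDOW).mean(axis=1)

def suspicious(baseline_trace: np.ndarray, test_trace: np.ndarray) -> bool:
    """Flag the workstation if its current profile departs from the fault-free one."""
    base = window_features(baseline_trace)
    test = window_features(test_trace)
    m = min(len(base), len(test))
    deviation = np.abs(test[:m] - base[:m]) / (base[:m].std() + 1e-9)
    return bool((deviation > THRESHOLD).any())
```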
Abstract:
Recent advances in the massively parallel computational abilities of graphical processing units (GPUs) have increased their use for general purpose computation, as companies look to take advantage of big data processing techniques. This has given rise to the potential for malicious software targeting GPUs, which is of interest to forensic investigators examining the operation of software. The ability to carry out reverse-engineering of software is of great importance within the security and forensics fields, particularly when investigating malicious software or carrying out forensic analysis following a successful security breach. Due to the complexity of the Nvidia CUDA (Compute Unified Device Architecture) framework, it is not clear how best to approach the reverse engineering of a piece of CUDA software. We carry out a review of the different binary output formats which may be encountered from the CUDA compiler, and their implications on reverse engineering. We then demonstrate the process of carrying out disassembly of an example CUDA application, to establish the various techniques available to forensic investigators carrying out black-box disassembly and reverse engineering of CUDA binaries. We show that the Nvidia compiler, using default settings, leaks useful information. Finally, we demonstrate techniques to better protect intellectual property in CUDA algorithm implementations from reverse engineering.
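As a sketch of the black-box workflow, the script below drives Nvidia's cuobjdump to recover the PTX and SASS listings embedded in a CUDA fat binary. The flags shown are the commonly documented ones; verify them against the toolkit version actually installed.

```python
import subprocess

def dump_cuda_views(binary_path: str) -> dict[str, str]:
    """Run cuobjdump on a CUDA fat binary and collect the PTX and SASS listings.

    The flags are the commonly documented ones (--dump-ptx, --dump-sass);
    check them against the installed CUDA toolkit version.
    """
    views = {}
    for name, flag in (("ptx", "--dump-ptx"), ("sass", "--dump-sass")):
        result = subprocess.run(
            ["cuobjdump", flag, binary_path],
            capture_output=True, text=True, check=False,
        )
        views[name] = result.stdout
    return views

if __name__ == "__main__":
    listings = dump_cuda_views("./example_cuda_app")  # hypothetical sample binary
    print(listings["ptx"][:500])   # inspect the leaked PTX, if any is embedded
```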
Abstract:
Ensuring the security of computers is a non-trivial task, with many techniques used by malicious users to compromise these systems. In recent years a new threat has emerged in the form of networks of hijacked zombie machines used to perform complex distributed attacks such as denial of service and to obtain sensitive data such as password information. These zombie machines are said to be infected with a ‘bot’ - a malicious piece of software which is installed on a host machine and is controlled by a remote attacker, termed the ‘botmaster’ of a botnet. In this work, we use the biologically inspired dendritic cell algorithm (DCA) to detect the existence of a single bot on a compromised host machine. The DCA is an immune-inspired algorithm based on an abstract model of the behaviour of the dendritic cells of the human body. The basis of anomaly detection performed by the DCA is facilitated using the correlation of behavioural attributes such as keylogging and packet flooding behaviour. The results of the application of the DCA to the detection of a single bot show that the algorithm is a successful technique for the detection of such malicious software without responding to normally running programs.
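A highly simplified illustration of fusing behavioural signals is sketched below. The real DCA maintains a population of artificial dendritic cells and derives a mature context antigen value (MCAV) per process; here only two signals are combined, with weights and thresholds that are purely illustrative.

```python
# Highly simplified, illustrative take on DCA-style signal fusion. It is not
# the published algorithm: it only normalises and combines two behavioural
# signals for one monitored process.
def anomaly_score(keylog_events_per_s: float, packets_per_s: float,
                  w_keylog: float = 0.6, w_flood: float = 0.4) -> float:
    """Weighted combination of behavioural signals for one monitored process."""
    keylog_signal = min(keylog_events_per_s / 50.0, 1.0)   # normalise, cap at 1
    flood_signal = min(packets_per_s / 500.0, 1.0)
    return w_keylog * keylog_signal + w_flood * flood_signal

def classify(score: float, threshold: float = 0.5) -> str:
    """Label a process as bot-like once its fused score crosses the threshold."""
    return "bot-like" if score >= threshold else "normal"

if __name__ == "__main__":
    print(classify(anomaly_score(keylog_events_per_s=40, packets_per_s=450)))
```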
Abstract:
Malicious users try to compromise systems using new techniques. One recent technique used by attackers is to perform complex distributed attacks such as denial of service and to obtain sensitive data such as password information. These compromised machines are said to be infected with malicious software termed a “bot”. In this paper, we investigate the correlation of behavioural attributes such as keylogging and packet flooding behaviour to detect the existence of a single bot on a compromised machine by applying (1) the Spearman’s rank correlation (SRC) algorithm and (2) the Dendritic Cell Algorithm (DCA). We also compare the outputs of the two methods for detecting a single bot. The results show that the DCA has better performance in detecting malicious activities.
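A minimal sketch of the SRC side of the comparison, assuming per-second counts of keystroke-capture events and outbound packets have already been collected for a monitored process (the data below are illustrative):

```python
from scipy.stats import spearmanr

# Per-second counts for one monitored process (illustrative data, not the paper's).
keylog_counts = [0, 2, 5, 9, 14, 20, 27, 33, 41, 52]
packet_counts = [1, 3, 4, 11, 13, 22, 25, 36, 40, 55]

rho, p_value = spearmanr(keylog_counts, packet_counts)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A strong positive rho suggests the two behaviours rise and fall together,
# which is the kind of correlated bot activity both methods look for.
```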
Abstract:
Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) are well-known, well-established tools in the information security world. However, their lack of integration with network equipment such as switches and routers limits what these tools can do and demands careful sizing of the hardware resources used to implement them, such as processing power, memory, and high-speed network interfaces. Faced with the many limitations encountered by researchers and network administrators, the concept of Software Defined Networking (SDN) emerged: by separating the control and data planes, it allows the network's operation to be adapted to each user's needs. Given the standardization and flexibility offered by SDNs, and the limitations of IPSs described above, this master's dissertation proposes IPSFlow, a framework that uses an SDN-based network and the OpenFlow protocol to build an IPS with broad coverage, capable of blocking traffic identified as malicious by the IDS(s) at the equipment closest to the source. To validate the framework, experiments were carried out in the Mininet virtual environment using Snort as the IDS to analyze scan traffic generated by Nmap from one host to another. The collected results show that IPSFlow worked as planned, blocking 85% of the scan traffic.
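The blocking step can be sketched conceptually as follows: an IDS alert is turned into a drop rule installed at the switch closest to the traffic source. The helper functions are hypothetical stand-ins for a real SDN controller API, not part of the IPSFlow implementation.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    """Fields assumed to be extractable from an IDS (e.g. Snort) alert."""
    src_ip: str
    dst_ip: str
    protocol: int   # IP protocol number, e.g. 6 for TCP

def handle_ids_alert(alert: Alert, topology_lookup, install_drop_flow):
    """Block traffic flagged by the IDS at the switch nearest its source.

    `topology_lookup` and `install_drop_flow` are hypothetical helpers standing
    in for a real OpenFlow controller API.
    """
    switch = topology_lookup(alert.src_ip)   # edge switch closest to the attacker
    match = {
        "eth_type": 0x0800,                  # IPv4
        "ipv4_src": alert.src_ip,
        "ipv4_dst": alert.dst_ip,
        "ip_proto": alert.protocol,
    }
    # An empty action list means "drop" in OpenFlow; the timeout keeps the rule temporary.
    install_drop_flow(switch, match, actions=[], priority=100, idle_timeout=300)
```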
Abstract:
One of the most undervalued problems among smartphone users is the security of the data on their mobile devices. Today smartphones and tablets are used to send messages and photos and, above all, to stay connected with social networks, forums and other platforms. These devices contain a lot of private information such as passwords, phone numbers, private photos and emails, and an attacker may choose to steal or destroy this information. The main topic of this thesis is the security of the applications present on the most popular stores (App Store for iOS and Play Store for Android) and of their mechanisms for managing security. The analysis focuses on how the architecture of the two systems protects users from threats and highlights the actual presence of malware and spyware in the respective application stores. The work described in the following chapters presents a study of the behavior of 50 Android applications and 50 iOS applications, carried out using network analysis software. Furthermore, this thesis presents some statistics about the malware and spyware present on the respective stores and the permissions they require. In the end the reader will be able to understand how to recognize malicious applications and which of the two systems is more suitable for them. The thesis is structured as follows. The first chapter introduces the security mechanisms of the Android and iOS platform architectures and the security mechanisms of their respective application stores. The second chapter explains the work done: what tools were needed to complete the analysis, why, and how they were chosen. The third chapter discusses the execution of the tests, the protocol followed and the approach used to assess the "level of danger" of each application that was checked. The fourth chapter explains the results of the tests and presents some statistics on the presence of malicious applications on the Play Store and the App Store. The fifth chapter is devoted to the users: what they think and how they might avoid malicious applications. The sixth chapter seeks to establish, following our methodology, which application store is safer. Finally, the seventh chapter concludes the thesis.
Abstract:
This Final Degree Project (TFG) arises from the need for security in software construction, which is one of the biggest problems facing the industry today due to the low quality of software, whether operating system, embedded, or application software. The increasing reliance on software for critical jobs means that the value of software no longer resides solely in its capacity to improve or maintain productivity and efficiency. Instead, its value also stems from its ability to continue operating reliably even in the face of events that threaten it. The ability to trust that the software will remain reliable in all circumstances, with a justified level of confidence, is the goal of software security. Software security is important because many critical functions are completely dependent on software. This makes software a very high value target for attackers, whose motives may be malicious, criminal, litigious, competitive, or terrorist in nature. There are very important sources of best practices, methods and tools for improvement, ranging from non-functional requirements and the secure software life cycle through project management to development, testing and deployment, which developers should take into account. This work focuses primarily on developing a best practice guide from existing information from CERT, CMMI, MITRE, Cigital, HP, and other sources. It also aims to develop a case study on a dynamic or static application in order to exploit its vulnerabilities.
Abstract:
This thesis focuses on the analysis of two complementary aspects of cybercrime (that is, crime perpetrated over the network for financial gain). These two aspects are the infected machines used to obtain economic benefit from crime through different actions (for example, click fraud, DDoS, spam) and the server infrastructure used to manage these machines (for example, C&C servers, exploit servers, monetization servers, redirectors). The first part investigates the exposure of victim computers to threats. For this analysis we used the metadata contained in WINE-BR, a Symantec dataset. This dataset contains installation metadata for executable files (for example, the file hash, its publisher, installation date, file name, and file version) from 8.4 million Windows users. We associated this metadata with the vulnerabilities in the National Vulnerability Database (NVD) and the Open Sourced Vulnerability Database (OSVDB) in order to track vulnerability decay over time and observe how quickly users patch their systems and, therefore, their exposure to possible attacks. We identified three factors that can influence the patching activity of victim computers: shared code, user type, and exploits. We present two new attacks against shared code and an analysis of how user knowledge and exploit availability influence patching activity. For the 80 vulnerabilities in our database that affect code shared between two applications, the time between patch releases for the different applications is up to 118 days (with a median of 11 days). The second part proposes new active probing techniques to detect and analyze malicious server infrastructures. We leverage active probing techniques to detect malicious servers on the Internet. We start with the analysis and detection of exploit server operations. We identify as an operation the servers that are controlled by the same people and possibly take part in the same infection campaign. We analyzed a total of 500 exploit servers over a period of one year, where 2/3 of the operations had a single server and 1/3 had multiple servers. We extended the technique for detecting exploit servers to other server types (for example, C&C servers, monetization servers, redirectors) and achieved Internet-scale probing for the different categories of malicious servers. These new techniques have been incorporated into a new tool called CyberProbe. To detect these servers we developed a novel technique called Adversarial Fingerprint Generation, a methodology for generating a unique request-response model to identify a server family (that is, the type and the operation the server belongs to). Starting from a malware file and a live server of a given family, CyberProbe can generate a valid fingerprint to detect all live servers of that family. We performed 11 Internet-wide scans, detecting 151 malicious servers; of these 151 servers, 75% were unknown to public databases of malicious servers.
Another issue that arises while detecting malicious servers is that some of these servers may be hidden behind a silent reverse proxy. To identify the prevalence of this network configuration and improve the capabilities of CyberProbe, we developed RevProbe, a new tool that detects reverse proxies by leveraging leakages in the configuration of reverse web proxies. RevProbe finds that 16% of the active malicious IP addresses analyzed correspond to reverse proxies, that 92% of them are silent compared with 55% of benign reverse proxies, and that they are mainly used for load balancing across multiple servers. ABSTRACT In this dissertation we investigate two fundamental aspects of cybercrime: the infection of machines used to monetize the crime and the malicious server infrastructures that are used to manage the infected machines. In the first part of this dissertation, we analyze how fast software vendors apply patches to secure client applications, identifying shared code as an important factor in patch deployment. Shared code is code present in multiple programs. When a vulnerability affects shared code, the usual linear vulnerability life cycle is no longer adequate to describe how patch deployment takes place. In this work we show the consequences of shared-code vulnerabilities and demonstrate two novel attacks that can be used to exploit this condition. In the second part of this dissertation we analyze malicious server infrastructures. Our contributions are: a technique to cluster exploit server operations, a tool named CyberProbe that performs large-scale detection of different categories of malicious servers, and RevProbe, a tool that detects silent reverse proxies. We start by identifying exploit server operations, that is, exploit servers managed by the same people. We investigate a total of 500 exploit servers over a period of more than 13 months. We collected malware from these servers together with all the metadata related to the communication with the servers. Thanks to this metadata we extracted different features to group together servers managed by the same entity (i.e., an exploit server operation); we discovered that 2/3 of the operations have a single server while 1/3 have multiple servers. Next, we present CyberProbe, a tool that detects different malicious server types through a novel technique called adversarial fingerprint generation (AFG). The idea behind CyberProbe's AFG is to run a piece of malware and observe its network communication towards malicious servers. It then replays this communication to the malicious server and outputs a fingerprint (i.e., a port selection function, a probe generation function, and a signature generation function). Once the fingerprint is generated, CyberProbe scans the Internet with it and finds all the servers of a given family. We performed a total of 11 Internet-wide scans, finding 151 new servers starting from 15 seed servers. This gives CyberProbe a 10 times amplification factor. Moreover, we compared CyberProbe with existing blacklists on the Internet, finding that only 40% of the servers detected by CyberProbe were listed. To enhance the capabilities of CyberProbe we developed RevProbe, a reverse proxy detection tool that can be integrated with CyberProbe to allow precise detection of silent reverse proxies used to hide malicious servers.
RevProbe leverages leakage-based detection techniques to detect whether a malicious server is hidden behind a silent reverse proxy, as well as the infrastructure of servers behind it. At the core of RevProbe is the analysis of differences in the traffic observed when interacting with a remote server.
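The fingerprint abstraction described above (a port selection function, a probe generation function and a signature function, replayed against candidate hosts) can be sketched as follows; the example fingerprint at the end is entirely made up for illustration.

```python
from dataclasses import dataclass
from typing import Callable
import socket

@dataclass
class Fingerprint:
    """Port selection, probe generation and signature functions for one family."""
    select_port: Callable[[], int]
    build_probe: Callable[[], bytes]
    matches: Callable[[bytes], bool]

def probe_host(ip: str, fp: Fingerprint, timeout: float = 3.0) -> bool:
    """Send the family's probe to one host and test the response against the signature."""
    try:
        with socket.create_connection((ip, fp.select_port()), timeout=timeout) as s:
            s.sendall(fp.build_probe())
            s.settimeout(timeout)
            return fp.matches(s.recv(4096))
    except OSError:
        return False

# Illustrative fingerprint: an HTTP-speaking C&C that answers a specific path
# with a recognisable marker string (both path and marker are invented here).
demo_fp = Fingerprint(
    select_port=lambda: 8080,
    build_probe=lambda: b"GET /gate.php HTTP/1.1\r\nHost: x\r\n\r\n",
    matches=lambda resp: b"cmd=" in resp,
)
```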
Abstract:
Software protection is an essential aspect of information security, needed to withstand malicious activities against software and to preserve software assets. However, software developers still lack a methodology for assessing the protections they deploy. To address this, we present a novel attack-simulation-based software protection assessment method to assess and compare various protection solutions. Our solution relies on Petri nets to specify and visualize attack models, and we developed a Monte Carlo based approach to simulate attack processes and to deal with uncertainty. Based on this simulation and estimation, a novel protection comparison model is proposed to compare different protection solutions. Lastly, our attack-simulation-based software protection assessment method is presented. We illustrate the method by means of a software protection assessment process, demonstrating that our approach can provide a suitable software protection assessment for developers and software companies.
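A minimal Monte Carlo sketch of the estimation step is shown below, assuming the attack model has been reduced to a sequence of steps with a success probability and a mean time per attempt; the Petri-net structure and the actual parameters of the method are not reproduced here.

```python
import random

# Each attack step: (name, success probability per attempt, mean hours per attempt).
# The steps and numbers are illustrative stand-ins for a Petri-net attack model.
ATTACK_STEPS = [
    ("locate check", 0.8, 4.0),
    ("disable integrity check", 0.5, 10.0),
    ("patch licence routine", 0.6, 6.0),
]

def simulate_attack(max_hours: float = 200.0):
    """Return total attack time in hours, or None if the time budget runs out."""
    total = 0.0
    for _name, p_success, mean_hours in ATTACK_STEPS:
        while True:
            total += random.expovariate(1.0 / mean_hours)  # time spent on one attempt
            if total > max_hours:
                return None
            if random.random() < p_success:
                break
    return total

def estimate(runs: int = 10_000):
    """Estimate probability of defeating the protection and mean time when successful."""
    times = [t for _ in range(runs) if (t := simulate_attack()) is not None]
    mean_time = sum(times) / len(times) if times else float("nan")
    return len(times) / runs, mean_time

if __name__ == "__main__":
    p, mean_t = estimate()
    print(f"P(defeat within budget) ~ {p:.2f}, mean time ~ {mean_t:.1f} h")
```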
Abstract:
This article aimed to compare the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study. Incisor, canine, premolar, first molar and second molar regions were selected. Cone beam computed tomography (CBCT) images were obtained with i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages: XoranCat®, OnDemand3D® and KDIS3D®, all able to read DICOM images. In addition, 25% of the sample was re-evaluated to assess reproducibility. The mandibles were sectioned to obtain the gold standard for each region. Intraclass correlation coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; one-way analysis of variance with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The smallest difference between the software-derived measurements and the gold standard was obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the greatest with XoranCAT (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.
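A small sketch of the statistical comparison, assuming one measurement per site from each software package plus the physical gold standard (the numbers are illustrative, not the study's data):

```python
from scipy.stats import f_oneway

# Illustrative linear measurements (mm) at the same sites; not the study's data.
gold_standard = [11.2, 9.8, 13.4, 12.1, 10.5]
xorancat      = [11.5, 10.0, 13.6, 12.3, 10.8]
ondemand3d    = [11.1, 9.7, 13.3, 12.0, 10.4]
kdis3d        = [11.0, 9.7, 13.2, 12.0, 10.3]

# One-way ANOVA across the gold standard and the three software packages.
stat, p = f_oneway(gold_standard, xorancat, ondemand3d, kdis3d)
print(f"F = {stat:.2f}, p = {p:.3f}")
# The post-hoc comparison of each package against the gold standard (Dunnett's
# test) is available in recent SciPy releases as scipy.stats.dunnett.
```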
Abstract:
This paper presents SMarty, a variability management approach for UML-based software product lines (PL). SMarty is supported by a UML profile, the SMartyProfile, and a process for managing variabilities, the SMartyProcess. SMartyProfile aims at representing variabilities, variation points, and variants in UML models by applying a set of stereotypes. SMartyProcess consists of a set of activities that are systematically executed to trace, identify, and control variabilities in a PL based on SMarty. It also identifies variability implementation mechanisms and analyzes specific product configurations. In addition, a more comprehensive application of SMarty is presented using SEI's Arcade Game Maker PL. An evaluation of SMarty and a discussion of related work are also presented.
Abstract:
Thousands of Free and Open Source Software Projects (FSP) were, and continue to be, created on the Internet. This scenario increases the number of opportunities to collaborate to the same extent that it promotes competition for users and contributors, who can take projects to levels unachievable by the founders alone. Thus, given that the main goal of FSP founders is to improve their projects by means of collaboration, it becomes important to understand and manage the project's capacity to attract users and contributors. To support researchers and founders in this challenge, this paper introduces the concept of attractiveness and develops a theoretical-managerial toolkit covering the causes, indicators and consequences of attractiveness, enabling its strategic management.