963 results for Malicious Node Detection
Abstract:
Malicious software (malware) has increased significantly in both number and effectiveness over the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show off coders' skills. Nowadays, malware constitutes a major source of economic profit and is very difficult to detect. Thousands of novel variants are released every day, and modern obfuscation techniques ensure that signature-based anti-malware systems cannot detect such threats. This tendency has also appeared on mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning and have also become very popular in commercial applications. However, attackers are now knowledgeable about these systems and have started preparing countermeasures, leading to an arms race between attackers and developers in which novel systems are progressively built to tackle increasingly sophisticated attacks. For this reason, developers increasingly need to anticipate the attackers' moves. This means that defense systems should be built proactively, i.e., by introducing security design principles into their development. The main goal of this work is to show that such a proactive approach can be employed in a number of case studies. To do so, I adopted a global methodology that can be divided into two steps: first, understanding the vulnerabilities of current state-of-the-art systems (thus anticipating the attacker's moves); then, developing novel systems that are robust to these attacks, or suggesting research guidelines with which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware; the idea is to show that a proactive approach can be applied in both the x86 and mobile worlds. The contributions to these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors; I then propose possible solutions for increasing the robustness of such detectors against known and novel attacks. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can be easily employed without particular knowledge of the targeted systems; I then examine a possible strategy for building a machine learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems. In particular, I propose a methodology for building a powerful mobile fingerprinting system and examine possible attacks with which users might be able to evade it, thus preserving their privacy.
To provide the aforementioned contributions, I co-developed (in cooperation with researchers at PRALab and Ruhr-Universität Bochum) various systems: a library to perform optimal attacks against machine learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine learning system for the detection of Android malware, and a system to fingerprint mobile devices. I also contributed to the development of Android PRAGuard, a dataset containing a large number of empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware. The results attained with the aforementioned tools show that it is possible to proactively build systems that predict possible evasion attacks, suggesting that a proactive approach is crucial to building systems that provide concrete security against general and evasion attacks.
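The optimal evasion attacks mentioned above are typically formulated as gradient-based minimisation of a classifier's decision function. As a rough illustration of the idea only, and not the AdversariaLib implementation, the following sketch evades a plain linear classifier; all names and parameter values are illustrative assumptions.

```python
# Hypothetical sketch of gradient-based evasion against a linear classifier
# f(x) = w.x + b (NOT the AdversariaLib implementation).
import numpy as np

def evade_linear(x, w, b, step=0.01, max_iter=1000):
    """Perturb a malicious sample x until the classifier scores it as benign.

    For a linear model the gradient of the decision function is simply
    the weight vector w, so we walk against it in small steps.
    """
    x_adv = x.astype(float)
    for _ in range(max_iter):
        if np.dot(w, x_adv) + b < 0:           # scored as benign: done
            break
        x_adv -= step * w / np.linalg.norm(w)  # unit-gradient descent step
    return x_adv
```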
Abstract:
The authors propose a three-node full-diversity cooperative protocol that allows the retransmission of all symbols. Because multiple nodes are allowed to transmit simultaneously, the relaying transmission consumes only limited bandwidth resources. To facilitate the performance analysis of the proposed cooperative protocol, lower and upper bounds on the outage probability are first developed, and the high signal-to-noise ratio behaviour is then studied. Our analytical results show that the proposed strategy can achieve full diversity. To realise the performance gain promised by cooperative diversity, a decode-and-forward strategy is adopted at the relays, and an iterative soft-interference-cancellation minimum mean-squared error equaliser is developed. The simulation results compare the bit-error-rate performance of the proposed protocol with the non-cooperative scheme and the scheme presented by Azarian et al. (2005).
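For context, the diversity order referred to above is conventionally defined from the high-SNR slope of the outage probability; the following is the standard textbook definition, not a formula quoted from the paper itself.

```latex
% Diversity order d from the outage probability P_out at SNR \rho:
d \;=\; -\lim_{\rho \to \infty} \frac{\log P_{\mathrm{out}}(\rho)}{\log \rho}
% A single-relay (three-node) protocol achieves full diversity when d = 2,
% i.e. the outage probability decays as \rho^{-2} at high SNR.
```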
Abstract:
The Intrusion Detection System (IDS) is a common means of protecting networked systems from attack or malicious misuse. The deployment of an IDS can take many different forms depending on protocols, usage and cost. This is particularly true of Wireless Intrusion Detection Systems (WIDS), which face many detection challenges associated with data transmission through an open, shared medium, arising from fundamental changes at the Physical and MAC layers. WIDS need to be considered in more detail at these lower layers than their wired counterparts, as they face unique challenges. The remainder of this chapter will investigate three of these challenges, where WiFi deviates significantly from its wired counterparts:
• Attacks Specific to WiFi Networks: Outlining the additional threats which WIDS must account for: Denial of Service, Encryption Bypass and AP Masquerading attacks.
• The Effect of Deployment Architecture on WIDS Performance: Demonstrating that the deployment environment of a network protected by a WIDS can influence the prioritisation of attacks.
• The Importance of Live Data in WiFi Research: Investigating the different choices for research data sources with an emphasis on encouraging live network data collection for future WiFi research.
Abstract:
The IDS (Intrusion Detection System) is a common means of protecting networked systems from attack or malicious misuse. The development and rollout of an IDS can take many different forms in terms of equipment, protocols, connectivity, cost and automation. This is particularly true of WIDS (Wireless Intrusion Detection Systems), which have many more opportunities and challenges associated with data transmission through an open, shared medium.
The operation of a WIDS is a multistep process, from the origination of an attack through to human-readable evaluation. Attention to the performance of each process in the chain, from attack detection to evaluation, is imperative if an optimal solution is to be found. At present, research focuses very much on each discrete aspect of a WIDS, with little consideration of the operation of the whole system. Taking a holistic view of the technology shows the interconnectivity and interdependence between stages, leading to improvements and novel research areas for investigation.
This chapter will outline the general structure of Wireless Intrusion Detection Systems and briefly describe the functions of each development stage, categorised into the following six areas:
• Threat Identification,
• Architecture,
• Data Collection,
• Intrusion Detection,
• Alert Correlation,
• Evaluation.
These topics will be considered in broad terms designed for those new to the area. Focus will be placed on ensuring that readers are aware of the impact that choices made at early stages of WIDS development have on later stages.
Abstract:
Increased complexity and interconnectivity of Supervisory Control and Data Acquisition (SCADA) systems in Smart Grids potentially means greater susceptibility to malicious attackers. SCADA systems with legacy communication infrastructure have inherent cyber-security vulnerabilities, as these systems were originally designed with little consideration of cyber threats. In order to improve the cyber-security of SCADA networks, this paper presents a rule-based Intrusion Detection System (IDS) using a Deep Packet Inspection (DPI) method, which includes signature-based and model-based approaches tailored for SCADA systems. The proposed signature-based rules can accurately detect several known suspicious or malicious attacks. In addition, model-based detection is proposed as a complementary method to detect unknown attacks. Finally, the proposed intrusion detection approaches for SCADA networks are implemented and verified using a rule-based method.
Abstract:
Synchrophasor systems will play a crucial role in next-generation Smart Grid monitoring, protection and control. However, these systems also introduce a multitude of potential vulnerabilities to malicious and inadvertent attacks, which may result in erroneous operation or severe damage. This paper proposes a Synchrophasor-Specific Intrusion Detection System (SSIDS) for detecting malicious cyber attacks and unintended misuse. The SSIDS combines a heterogeneous whitelist-based and behaviour-based approach to detect both known attack types and unknown, so-called 'zero-day', vulnerabilities and attacks. The paper describes reconnaissance, Man-in-the-Middle (MITM) and Denial-of-Service (DoS) attack types executed against a practical synchrophasor system, which are used to validate the real-time effectiveness of the proposed SSIDS cyber detection method.
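The whitelist component of such a system can be pictured as a simple membership test on observed flows. The sketch below is a minimal illustration under an assumed flow-tuple input format and a hypothetical whitelist; it is not the SSIDS implementation.

```python
# Minimal whitelist check for synchrophasor traffic (illustrative only;
# not the SSIDS implementation). Flows are (src_ip, dst_ip, dst_port).

# Hypothetical whitelist: two PMUs streaming to a phasor data concentrator
# over TCP port 4712, a port commonly used for IEEE C37.118 traffic.
WHITELIST = {
    ("192.168.1.10", "192.168.1.100", 4712),
    ("192.168.1.11", "192.168.1.100", 4712),
}

def check_flow(flow):
    """Return an alert string for any flow not on the whitelist."""
    if flow not in WHITELIST:
        return f"ALERT: non-whitelisted flow {flow}"
    return None

print(check_flow(("10.0.0.5", "192.168.1.100", 4712)))  # raises an alert
```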
Abstract:
Increased complexity and interconnectivity of Supervisory Control and Data Acquisition (SCADA) systems in Smart Grids potentially means greater susceptibility to malicious attackers. SCADA systems with legacy communication infrastructure have inherent cyber-security vulnerabilities as these systems were originally designed with little consideration of cyber threats. In order to improve cyber-security of SCADA networks, this paper presents a rule-based Intrusion Detection System (IDS) using a Deep Packet Inspection (DPI) method, which includes signature-based and model-based approaches tailored for SCADA systems. The proposed signature-based rules can accurately detect several known suspicious or malicious attacks. In addition, model-based detection is proposed as a complementary method to detect unknown attacks. Finally, proposed intrusion detection approaches for SCADA networks are implemented and verified via Snort rules.
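As an illustration of what such DPI rules inspect, the sketch below performs a model-based check on Modbus/TCP function codes (the function code sits at byte offset 7, after the 7-byte MBAP header). The allowed-code set is a hypothetical site-specific model, not the paper's Snort rule set.

```python
# Illustrative model-based DPI check for Modbus/TCP (not the paper's rules).
ALLOWED_FUNCTION_CODES = {1, 2, 3, 4, 5, 6, 15, 16}  # assumed site model

def inspect_modbus(payload: bytes):
    """Flag frames whose function code falls outside the learned model."""
    if len(payload) < 8:
        return "ALERT: truncated Modbus frame"
    function_code = payload[7]  # byte after the 7-byte MBAP header
    if function_code not in ALLOWED_FUNCTION_CODES:
        return f"ALERT: unexpected Modbus function code {function_code}"
    return None
```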
Abstract:
The role of lymphoscintigraphy in sentinel node biopsy in breast cancer remains debatable. This study assesses the value of lymphoscintigraphy in axillary sentinel node biopsy in women undergoing surgery for breast cancer. Sixty-two patients underwent sentinel node biopsy using a combination of technetium-labelled nanocolloid, lymphoscintigraphy and patent blue dye. Lymphoscintigraphy was successful in 84% of patients, and axillary sentinel nodes were identified intraoperatively in all of these patients. Internal mammary nodes were identified on lymphoscintigraphy in 19%. Although lymphoscintigraphy was unsuccessful in 10 patients, axillary sentinel nodes were found intraoperatively in eight of them. Lymphoscintigraphy did not increase the detection rate of axillary sentinel nodes, and a negative scan did not preclude identification of an axillary sentinel node intraoperatively. This study questions the contribution of lymphoscintigraphy to axillary sentinel node biopsy; however, its value may lie in the detection of extra-axillary nodes.
Abstract:
Immunomagnetic separation (IMS) represents a simple but effective method of selectively capturing and concentrating Mycobacterium bovis, the causative agent of bovine tuberculosis (bTB), from tissue samples. It is a physical cell separation technique that does not impact cell viability, unlike traditional chemical decontamination prior to culture. IMS is performed with paramagnetic beads coated with M. bovis-specific antibody and peptide binders. Once captured by IMS, M. bovis cells can be detected by either PCR or cultural detection methods. Increased detection rates of M. bovis, particularly from non-visibly lesioned lymph node tissues from bTB reactor animals, have recently been reported when IMS-based methods were employed.
Abstract:
While virtualisation can provide many benefits to a network's infrastructure, securing the virtualised environment is a big challenge. The security of a fully virtualised solution depends on the security of each of its underlying components, such as the hypervisor, guest operating systems and storage.
This paper presents a hypervisor-hosted framework that provides a single security service to all virtual machines (VMs) running on the system, performing specialised security tasks for all underlying VMs to protect against malicious attacks by passively analysing their network traffic. The framework has been implemented using XenServer and evaluated by detecting a Zeus server setup and infected clients distributed over a number of virtual machines. It is capable of detecting and identifying all infected VMs with no false positive or false negative detections.
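The detection logic can be thought of as passive matching of per-VM flows against known command-and-control indicators. The following is a hedged toy sketch; the blacklist, addresses and flow-record format are assumptions, not the framework's actual design.

```python
# Toy passive analysis of per-VM flow records (illustrative; not the
# paper's Xen-based framework). Each record: (vm_id, dst_ip, dst_port).

CNC_BLACKLIST = {"203.0.113.42"}  # hypothetical Zeus C&C address

def infected_vms(flow_records):
    """Return the set of VM identifiers seen contacting blacklisted hosts."""
    return {vm for vm, dst_ip, _port in flow_records
            if dst_ip in CNC_BLACKLIST}

# Example: two VMs, one beaconing to the blacklisted address.
flows = [("vm1", "198.51.100.7", 443), ("vm2", "203.0.113.42", 8080)]
assert infected_vms(flows) == {"vm2"}
```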
Abstract:
N-gram analysis is an approach that investigates the structure of a program using bytes, characters or text strings. This research uses dynamic analysis to investigate malware detection using a classification approach based on N-gram analysis. A key issue with dynamic analysis is the length of time for which a program has to be run to ensure a correct classification. The motivation for this research is to find the optimal subset of operational codes (opcodes) that are the best indicators of malware, and to determine how long a program has to be monitored to ensure an accurate support vector machine (SVM) classification of benign and malicious software. The experiments in this study represent programs as opcode density histograms obtained through dynamic analysis over different program run periods. An SVM is used as the program classifier to determine the ability of different program run lengths to correctly detect the presence of malicious software. The findings show that malware can be detected with different program run lengths using a small number of opcodes.
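In outline, the approach represents each trace as an opcode density histogram and feeds it to an SVM. The sketch below reproduces that pipeline with scikit-learn under assumed inputs (toy opcode traces, labels and an illustrative opcode subset); it is not the authors' code.

```python
# Sketch of opcode-density classification with an SVM (assumed inputs;
# not the paper's implementation).
from collections import Counter
from sklearn.svm import SVC

OPCODES = ["mov", "push", "call", "jmp", "add", "xor"]  # illustrative subset

def density_histogram(trace):
    """Normalise opcode counts by trace length to get a density vector."""
    counts = Counter(trace)
    return [counts[op] / len(trace) for op in OPCODES]

# Hypothetical training data: traces with benign(0)/malicious(1) labels.
traces = [["mov", "push", "call", "mov"], ["xor", "xor", "jmp", "call"]]
labels = [0, 1]

clf = SVC(kernel="linear").fit([density_histogram(t) for t in traces], labels)
print(clf.predict([density_histogram(["xor", "jmp", "xor", "call"])]))
```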
Abstract:
N-gram analysis is an approach that investigates the structure of a program using bytes, characters or text strings. This research uses dynamic analysis to investigate malware detection using a classification approach based on N-gram analysis. The motivation for this research is to find a subset of N-gram features that makes a robust indicator of malware. The experiments in this paper represent programs as N-gram density histograms obtained through dynamic analysis. A Support Vector Machine (SVM) is used as the program classifier to determine the ability of N-grams to correctly detect the presence of malicious software. The preliminary findings show that N-gram sizes of N=3 and N=4 present the best avenues for further analysis.
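The N-gram density features themselves can be extracted with a simple sliding window. A minimal sketch for N=3 and N=4 follows; the token sequence is a made-up example, not data from the paper.

```python
# Minimal N-gram density extraction from a token sequence (illustrative).
from collections import Counter

def ngram_density(tokens, n):
    """Map each N-gram to its frequency, normalised by the window count."""
    windows = len(tokens) - n + 1
    counts = Counter(tuple(tokens[i:i + n]) for i in range(windows))
    return {gram: c / windows for gram, c in counts.items()}

trace = ["mov", "push", "call", "mov", "push", "call"]
print(ngram_density(trace, 3))  # N=3 density histogram
print(ngram_density(trace, 4))  # N=4 density histogram
```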
Abstract:
Mobile malware has been growing in scale and complexity as smartphone usage continues to rise. Android has surpassed other mobile platforms as the most popular, whilst also witnessing a dramatic increase in malware targeting the platform. A worrying emerging trend is the increasing sophistication of Android malware designed to evade detection by traditional signature-based scanners. As such, Android app marketplaces remain at risk of hosting malicious apps that could evade detection before being downloaded by unsuspecting users. Hence, in this paper we present an effective approach to alleviating this problem, based on Bayesian classification models obtained from static code analysis. The models are built from a collection of code and app characteristics that provide indicators of potential malicious activities. The models are evaluated with real malware samples in the wild, and results of experiments are presented to demonstrate the effectiveness of the proposed approach.
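A hedged sketch of the classification step: binary indicators from static analysis (e.g., requested permissions, code properties) feed a Bernoulli naive Bayes model. The feature names and training rows are hypothetical; this is not the paper's model.

```python
# Sketch of Bayesian classification over binary static-analysis features
# (hypothetical features and data; not the paper's models).
from sklearn.naive_bayes import BernoulliNB

FEATURES = ["SEND_SMS", "READ_CONTACTS", "uses_reflection", "has_native_code"]

# Each row: presence (1) / absence (0) of a feature; label 1 = malicious.
X = [[1, 1, 1, 0],
     [0, 0, 0, 0],
     [1, 0, 1, 1],
     [0, 1, 0, 0]]
y = [1, 0, 1, 0]

model = BernoulliNB().fit(X, y)
print(model.predict_proba([[1, 1, 0, 1]]))  # [P(benign), P(malicious)]
```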
Abstract:
Cloud data centres are critical business infrastructures and among the fastest-growing service providers. Detecting anomalies in Cloud data centre operation is vital. Given the vast complexity of the data centre system software stack, applications and workloads, anomaly detection is a challenging endeavour. Current tools for detecting anomalies often use machine learning techniques, application instance behaviours or system metrics distributions, which are complex to implement in Cloud computing environments as they require training, access to application-level data and complex processing. This paper presents LADT, a lightweight anomaly detection tool for Cloud data centres that uses rigorous correlation of system metrics, implemented by an efficient correlation algorithm without the need for training or complex infrastructure set-up. LADT is based on the hypothesis that, in an anomaly-free system, metrics from data centre host nodes and virtual machines (VMs) are strongly correlated. An anomaly is detected whenever this correlation drops below a threshold value. We demonstrate and evaluate LADT in a Cloud environment, showing that host-node I/O operations per second (IOPS) are strongly correlated with the aggregated VM IOPS, but that this correlation vanishes when an application stresses the disk, indicating a node-level anomaly.
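The core test reduces to a correlation between host metrics and aggregated VM metrics over a window of samples. The sketch below uses Pearson correlation with an assumed threshold value; it is a simplification of the LADT idea, not the tool's implementation.

```python
# Simplified LADT-style check (not the actual tool): flag an anomaly when
# the correlation between host IOPS and aggregated VM IOPS drops.
import numpy as np

THRESHOLD = 0.5  # assumed correlation threshold

def anomalous(host_iops, vm_iops_per_vm):
    """host_iops: 1-D array; vm_iops_per_vm: list of 1-D arrays, one per VM."""
    aggregated = np.sum(vm_iops_per_vm, axis=0)       # sum across VMs
    r = np.corrcoef(host_iops, aggregated)[0, 1]      # Pearson correlation
    return r < THRESHOLD

host = np.array([100, 120, 110, 400, 420])
vms = [np.array([50, 60, 55, 52, 58]), np.array([48, 58, 54, 50, 55])]
print(anomalous(host, vms))  # True: host I/O spike not reflected in VM totals
```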
Abstract:
Cloud data centres are implemented as large-scale clusters with demanding requirements for service performance, availability and cost of operation. As a result of their scale and complexity, data centres typically exhibit large numbers of system anomalies resulting from operator error, resource over/under-provisioning, hardware or software failures and security issues. Such anomalies are inherently difficult to identify and resolve promptly via human inspection, so it is vital for a cloud system to have automatic monitoring that detects potential anomalies and identifies their source. In this paper we present LADT, a lightweight anomaly detection tool for Cloud data centres that combines extended log analysis with rigorous correlation of system metrics, implemented by an efficient correlation algorithm that does not require training or complex infrastructure set-up. The LADT algorithm is based on the premise that there is a strong correlation between node-level and VM-level metrics in a cloud system. This correlation will drop significantly in the event of any performance anomaly at the node level, and a continuous drop in the correlation can indicate the presence of a true anomaly in the node. The log analysis of LADT assists in determining whether the correlation drop could be caused by naturally occurring cloud management activity such as VM migration, creation, suspension, termination or resizing. In this way, potential anomaly alerts are reasoned about to prevent false positives that could be caused by the cloud operator's activity. We demonstrate LADT with log analysis in a Cloud environment to show how log analysis is combined with the correlation of system metrics to achieve accurate anomaly detection.
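The log-analysis stage acts as a gate on the correlation alarm: a drop that coincides with a recorded management event is discounted. A minimal sketch of that gating logic follows; the event names and timestamped log format are assumptions, not LADT's actual log schema.

```python
# Minimal sketch of LADT-style log gating (assumed event format; not the
# actual tool): suppress a correlation alarm if a cloud-management event
# (migration, creation, suspension, termination, resizing) overlaps it.
MANAGEMENT_EVENTS = {"migrate", "create", "suspend", "terminate", "resize"}

def true_anomaly(drop_start, drop_end, log_events):
    """log_events: iterable of (timestamp, event_type) from the cloud logs."""
    for ts, event in log_events:
        if drop_start <= ts <= drop_end and event in MANAGEMENT_EVENTS:
            return False  # explained by operator activity: not an anomaly
    return True

log = [(1005, "migrate"), (1300, "create")]
print(true_anomaly(1000, 1010, log))  # False: drop overlaps a VM migration
```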