963 results for Malicious Node Detection
Abstract:
Smartphones are steadily gaining popularity, creating new application areas as their capabilities increase in terms of computational power, sensors and communication. These emerging capabilities, however, also open the door to new threats. Android is one of the newer operating systems targeting smartphones. While based on a Linux kernel, Android has unique properties and specific limitations due to its mobile nature, which make it harder to detect and react to malware attacks using conventional techniques. In this paper, we propose an Android Application Sandbox (AASandbox) that performs both static and dynamic analysis on Android programs to automatically detect suspicious applications. Static analysis scans the software for malicious patterns without installing it. Dynamic analysis executes the application in a fully isolated environment, i.e. a sandbox, which intervenes and logs low-level interactions with the system for further analysis. Both the sandbox and the detection algorithms can be deployed in the cloud, providing fast and distributed detection of suspicious software in a mobile software store akin to Google's Android Market. Additionally, AASandbox might be used to improve the efficiency of classical anti-virus applications available for the Android operating system.
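A minimal sketch of the static-analysis step described in this abstract: scanning an application package for suspicious patterns without installing it. The pattern list, the APK handling and the function names are illustrative assumptions, not the AASandbox implementation.

```python
# Sketch of static pattern scanning over an APK archive (illustrative only).
import re
import zipfile

SUSPICIOUS_PATTERNS = [          # hypothetical indicators of risky behaviour
    rb"Runtime\.exec",           # spawning native processes
    rb"sendTextMessage",         # silent SMS sending
    rb"/system/bin/su",          # attempts to gain root
]

def scan_apk(apk_path: str) -> list[str]:
    """Return the suspicious patterns found anywhere inside the APK archive."""
    hits = []
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            data = apk.read(name)
            for pat in SUSPICIOUS_PATTERNS:
                if re.search(pat, data):
                    hits.append(f"{name}: {pat.decode()}")
    return hits

if __name__ == "__main__":
    print(scan_apk("example.apk"))  # e.g. flags classes.dex containing Runtime.exec
```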
Abstract:
Securing the IT infrastructures of our modern lives is a challenging task because of their increasing complexity, scale and agile nature. Monolithic approaches, such as using stand-alone firewalls and IDS devices to protect the perimeter, cannot cope with complex malware and multi-step attacks. Collaborative security emerges as a promising approach, but research results in collaborative security are not yet mature and require continuous evaluation and testing. In this work, we present CIDE, a Collaborative Intrusion Detection Extension for the network security simulation platform NeSSi2. Built-in functionalities include dynamic group formation based on node preferences, group-internal communication, group management and an approach for handling the infection process for malware-based attacks. The CIDE simulation environment provides functionalities for easy implementation of collaborating nodes in large-scale setups. We evaluate the group communication mechanism and, as a case study, evaluate our collaborative security evaluation platform in a signature exchange scenario.
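A minimal sketch of preference-based dynamic group formation between collaborating detection nodes, as featured in the abstract above. The preference model (shared interest tags) and the greedy grouping rule are illustrative assumptions, not CIDE's actual mechanism.

```python
# Sketch: group IDS nodes whose declared preferences overlap sufficiently.
def form_groups(nodes, min_overlap=2):
    """nodes: {node_id: set(preference tags)}. Greedily group nodes whose
    preferences overlap in at least `min_overlap` tags."""
    groups = []
    for nid, prefs in nodes.items():
        for group in groups:
            if len(prefs & group["prefs"]) >= min_overlap:
                group["members"].append(nid)
                group["prefs"] &= prefs          # keep only the shared interests
                break
        else:
            groups.append({"members": [nid], "prefs": set(prefs)})
    return groups

nodes = {
    "ids1": {"botnet", "scan", "worm"},
    "ids2": {"botnet", "worm", "dos"},
    "ids3": {"dos", "phishing"},
}
for g in form_groups(nodes):
    print(g["members"], "share", g["prefs"])   # ids1+ids2 form one group
```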
Abstract:
We consider a Cooperative Intrusion Detection System (CIDS), a distributed AIS-based (Artificial Immune System) IDS in which nodes collaborate over a peer-to-peer overlay network. The AIS uses the negative selection algorithm for the selection of detectors (e.g., vectors of features such as CPU utilization, memory usage and network activity). For better detection performance, selecting all possible detectors for a node is desirable, but it may not be feasible due to storage and computational overheads. Limiting the number of detectors, on the other hand, comes with the danger of missing attacks. We present a scheme for the controlled and decentralized division of detector sets where each IDS is assigned to a region of the feature space. We investigate the trade-off between scalability and robustness of detector sets. We address the problem of self-organization in CIDS so that each node generates a distinct set of detectors to maximize the coverage of the feature space, while pairs of nodes exchange their detector sets to provide a controlled level of redundancy. Our contribution is twofold. First, we use deterministic techniques from combinatorial design theory and graph theory, based on Symmetric Balanced Incomplete Block Designs, Generalized Quadrangles and Ramanujan Expander Graphs, to decide how many and which detectors are exchanged between which pairs of IDS nodes. Second, we use a classical epidemic model (the SIR model) to show how properties of these deterministic techniques can help us to reduce the attack spread rate.
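A minimal sketch of negative-selection detector generation over a feature space of CPU, memory and network activity, as used by the AIS described above. The feature dimension, matching radius and sample data are illustrative assumptions, not the paper's parameters.

```python
# Sketch of the negative selection algorithm: keep random detectors that do
# not match any "self" (normal) sample; a match on new data signals an anomaly.
import random

def matches(detector, sample, radius=0.3):
    """A detector 'matches' a sample if they are close in feature space."""
    return sum((d - s) ** 2 for d, s in zip(detector, sample)) ** 0.5 < radius

def generate_detectors(self_samples, n_detectors=500, dim=3):
    """Keep only random candidates that match no 'self' (normal) sample."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = [random.random() for _ in range(dim)]
        if not any(matches(candidate, s) for s in self_samples):
            detectors.append(candidate)
    return detectors

# Normal behaviour, e.g. (CPU utilization, memory usage, network activity):
normal = [[0.2, 0.3, 0.1], [0.25, 0.35, 0.15]]
detectors = generate_detectors(normal)
suspicious = [0.9, 0.8, 0.95]
print(any(matches(d, suspicious) for d in detectors))  # likely True -> anomaly
```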
Abstract:
This paper presents a formal methodology for attack modeling and detection in networks. Our approach has three phases. First, we extend the basic attack tree approach [1] to capture (i) the temporal dependencies between components, and (ii) the expiration of an attack. Second, using the enhanced attack trees (EAT), we build a tree automaton that accepts a sequence of actions from the input stream if it corresponds to a traversal of an attack tree from the leaves to the root node. Finally, we show how to construct an enhanced parallel automaton (EPA) that has each tree automaton as a subroutine and can process the input stream by considering multiple trees simultaneously. As a case study, we show how to represent attacks on IEEE 802.11 and construct an EPA for them.
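A minimal sketch of matching an action stream against one attack signature with temporal ordering and expiration, in the spirit of the enhanced attack tree / tree automaton idea. The event format, the simplified 802.11-style signature and the expiration window are illustrative assumptions, not the paper's construction.

```python
# Sketch: accept an ordered sequence of attack steps only if it completes
# within the expiration window, restarting the match once the window expires.
def matches_attack(events, attack_steps, expiration):
    """events: list of (timestamp, action); attack_steps: ordered leaf actions.
    Returns True if the steps occur in order within the expiration window."""
    idx, start = 0, None
    for t, action in events:
        if start is not None and t - start > expiration:
            idx, start = 0, None              # attack expired: restart matching
        if idx < len(attack_steps) and action == attack_steps[idx]:
            start = t if start is None else start
            idx += 1
            if idx == len(attack_steps):      # reached the root: attack detected
                return True
    return False

# Example: a simplified, hypothetical 802.11 deauthentication signature
stream = [(0.0, "probe"), (1.2, "auth"), (1.9, "deauth"), (2.1, "deauth")]
print(matches_attack(stream, ["auth", "deauth", "deauth"], expiration=5.0))  # True
```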
Abstract:
Our daily lives are becoming more and more dependent on smartphones due to their increased capabilities. Smartphones are used in various ways, e.g. for payment systems or for assisting the lives of elderly or disabled people. Security threats to these devices are becoming more and more dangerous, since there is still a lack of proper security tools for protection. Android emerges as an open smartphone platform which allows modification even at the operating system level, and where third-party developers for the first time have the opportunity to develop kernel-based low-level security tools. Android quickly gained popularity among smartphone developers and beyond, since it is based on Java on top of "open" Linux, in contrast to former proprietary platforms with very restrictive SDKs and corresponding APIs. Symbian OS, which holds the greatest market share among all smartphone OSs, even closed critical APIs to ordinary developers and introduced application certification, because this OS was the main target for smartphone malware in the past. In fact, more than 290 pieces of malware designed for Symbian OS appeared from July 2004 to July 2008. Android, in turn, promises to be completely open source. Together with the Linux-based smartphone OS OpenMoko, open smartphone platforms may attract malware writers to create malicious applications that endanger critical smartphone applications and the owner's privacy. Since signature-based approaches mainly detect known malware, anomaly-based approaches can be a valuable addition to these systems. They rely on mathematical algorithms that process data describing the state of a certain device. To obtain this data, a monitoring client is needed that extracts usable information (features) from the monitored system. Our approach follows a dual system for analyzing these features. On the one hand, functionality for light-weight on-device detection is provided. But since most algorithms are resource-exhaustive, remote feature analysis is provided on the other hand. This dual system enables event-based detection that can react to the current detection need. In our ongoing research we aim to investigate the feasibility of light-weight on-device detection for certain occasions. On other occasions, whenever significant changes are detected on the device, the system can trigger remote detection with heavy-weight algorithms for better detection results. In the absence of the server, or as a supplementary approach, we also consider a collaborative scenario in which mobile devices sharing a common objective are enabled by a collaboration module to share information, such as intrusion detection data and results. This is based on an ad-hoc network mode that can be provided by the WiFi or Bluetooth adapters nearly every smartphone possesses.
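A minimal sketch of the light-weight on-device part of such a dual system: monitor a few features and trigger remote, heavy-weight analysis when they deviate strongly from a baseline. The feature names, the threshold rule and the trigger message are illustrative assumptions, not the authors' implementation.

```python
# Sketch: flag a feature vector as anomalous when any feature is far from the
# baseline statistics, and use that as the trigger for remote analysis.
from statistics import mean, stdev

class LightweightMonitor:
    def __init__(self, baseline):          # baseline: list of feature vectors
        self.mu = [mean(col) for col in zip(*baseline)]
        self.sigma = [stdev(col) or 1.0 for col in zip(*baseline)]

    def is_anomalous(self, sample, k=3.0):
        """Flag the sample if any feature is more than k sigmas from baseline."""
        return any(abs(x - m) > k * s
                   for x, m, s in zip(sample, self.mu, self.sigma))

# Features could be, e.g., (sent SMS count, CPU %, open sockets)
monitor = LightweightMonitor([[1, 20, 5], [0, 25, 4], [2, 22, 6]])
if monitor.is_anomalous([40, 90, 50]):
    print("significant change detected -> trigger remote heavy-weight analysis")
```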
Abstract:
We propose the use of optical flow information as a method for detecting and describing changes in the environment from the perspective of a mobile camera. We analyze the characteristics of the optical flow signal and demonstrate how robust flow vectors can be generated and used for the detection of depth discontinuities and appearance changes at key locations. To successfully achieve this task, a full discussion on camera positioning, distortion compensation, noise filtering, and parameter estimation is presented. We then extract statistical attributes from the flow signal to describe the location of the scene changes. We also employ clustering and the dominant shape of the vectors to increase descriptiveness. Once a database of nodes (where a node is a detected scene change) and their corresponding flow features is created, matching can be performed whenever nodes are encountered, such that topological localization can be achieved. We retrieve the most likely node according to the Mahalanobis and Chi-square distances between the current frame and the database. The results illustrate the applicability of the technique for detecting and describing scene changes in diverse lighting conditions, considering indoor and outdoor environments and different robot platforms.
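A minimal sketch of retrieving the most likely node by Mahalanobis distance between the current frame's flow features and per-node statistics stored in the database. The feature dimensionality and the stored statistics are illustrative assumptions.

```python
# Sketch: nearest-node retrieval under the Mahalanobis distance.
import numpy as np

def mahalanobis(x, mu, cov):
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def best_node(features, database):
    """database: {node_id: (mean_vector, covariance_matrix)}"""
    return min(database,
               key=lambda nid: mahalanobis(features, *database[nid]))

db = {
    "doorway":  (np.array([0.8, 0.1]), np.eye(2) * 0.05),
    "corridor": (np.array([0.2, 0.6]), np.eye(2) * 0.05),
}
print(best_node(np.array([0.75, 0.15]), db))  # -> "doorway"
```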
Abstract:
The purpose of this study was to evaluate the use of sentinel node biopsy (SNB) in the axillary nodal staging of breast cancer. Special interest was placed on sentinel node (SN) visualization, intraoperative detection of SN metastases, the feasibility of SNB in patients with pure tubular carcinoma (PTC) and in those with ductal carcinoma in situ (DCIS) in core needle biopsy (CNB), and additionally on the detection of axillary recurrences after tumour-negative SNB. Patients and methods: 1580 clinically stage T1-T2, node-negative breast cancer patients underwent lymphoscintigraphy (LS), SNB and breast surgery between June 2000 and 2004 at the Breast Surgery Unit. The CNB samples were obtained from women who participated in the biennial, population-based mammography screening at the Mammography Screening Centre of Helsinki in 2001 - 2004. In the follow-up, a cohort of 205 patients who avoided AC due to negative SNB findings were evaluated using ultrasonography one and three years after breast surgery. Results: The visualization rate of axillary SNs was not enhanced by adjusting radioisotope doses according to BMI. The sensitivity of the intraoperative diagnosis of SN metastases of invasive lobular carcinoma (ILC) was higher with rapid intraoperative immunohistochemistry (IHC), 87%, than without it, 66%. The prevalence of tumour-positive SN findings was 27% in the 33 patients with breast tumours diagnosed as PTC. The median histological tumour size was similar in patients with or without axillary metastases. After the histopathological review, six out of 27 patients with true PTC had axillary metastases, with no significant change in the risk factors for axillary metastases. Of the 67 patients with DCIS in the preoperative percutaneous biopsy specimen, 30% had invasion in the surgical specimen. The strongest predictive factor for invasion was the visibility of the lesion in ultrasound. In the three-year follow-up, axillary recurrence was found in only two (0.5%) of the total of 383 ultrasound examinations performed during the study, and only one of the 369 examinations revealed cancer. None of the ultrasound examinations were false positive, and no study participant was subjected to unnecessary surgery due to ultrasound monitoring. Conclusions: Adjusting the dose of the radioactive tracer according to patient BMI does not increase the visualization rate of SNs. The intraoperative diagnosis of SN metastases is enhanced by rapid IHC, particularly in patients with ILC. SNB seems to be a feasible method for axillary staging of pure tubular carcinoma in patients with a low prevalence of axillary metastases. SNB also appears to be a sensible method in patients undergoing mastectomy due to DCIS in CNB. It also seems useful in patients with lesions visible in breast US. During follow-up, routine monitoring of the ipsilateral axilla using US is not worthwhile among breast cancer patients who avoided AC due to negative SN findings.
Abstract:
We study the problem of decentralized sequential change detection with conditionally independent observations. The sensors form a star topology with a central node, called the fusion center, as the hub. The sensors transmit a simple function of their observations in an analog fashion over a wireless Gaussian multiple access channel and operate under either a power constraint or an energy constraint. Simulations demonstrate that the proposed techniques have lower detection delays when compared with existing schemes. Moreover, we demonstrate that the energy-constrained formulation enables better use of the total available energy than a power-constrained formulation.
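A minimal sketch of the analog-relaying idea: each sensor transmits a scaled function of its observation, the Gaussian multiple access channel delivers their noisy sum, and the fusion center runs a CUSUM-style change test. The scaling, the test statistic and the threshold are illustrative assumptions rather than the paper's scheme.

```python
# Sketch: amplify-and-forward sensors, Gaussian MAC, CUSUM-style fusion test.
import random

def sensor_tx(obs, amplitude=1.0):
    return amplitude * obs                      # simple amplify-and-forward

def mac_output(transmissions, noise_std=0.5):
    return sum(transmissions) + random.gauss(0.0, noise_std)

def run_fusion(streams, threshold=10.0, drift=1.0):
    """streams: per-time list of per-sensor observations. Returns alarm time."""
    cusum = 0.0
    for t, obs in enumerate(streams):
        y = mac_output([sensor_tx(o) for o in obs])
        cusum = max(0.0, cusum + y - drift)     # positive mean shift after change
        if cusum > threshold:
            return t
    return None

pre  = [[random.gauss(0, 1) for _ in range(5)] for _ in range(50)]
post = [[random.gauss(1, 1) for _ in range(5)] for _ in range(50)]
print("alarm at time", run_fusion(pre + post))  # typically a few steps after t = 50
```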
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits needed to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm.
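A minimal sketch of the second approach described above: each sensor broadcasts one bit obtained by two-level (threshold) quantization of its reading, and the fusion center applies a simple counting rule. The threshold and the fusion rule are illustrative assumptions rather than the optimized values from the paper.

```python
# Sketch: one-bit threshold quantization at each sensor plus a counting fusion rule.
def quantize(reading, tau):
    """Two-level quantization: 1 if the reading suggests an intruder."""
    return 1 if reading > tau else 0

def fuse(bits, k):
    """Declare 'intruder' if at least k of the broadcast bits are 1."""
    return "intruder" if sum(bits) >= k else "clutter"

readings = [0.9, 1.4, 0.2, 1.1, 0.3]      # noisy readings of nearby sensors
bits = [quantize(r, tau=1.0) for r in readings]
print(bits, "->", fuse(bits, k=2))         # [0, 1, 0, 1, 0] -> intruder
```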
Abstract:
A highly sensitive and specific reverse transcription polymerase chain reaction enzyme-linked immunosorbent assay (RT-PCR-ELISA) was developed for the objective detection of the nucleoprotein (N) gene of peste des petits ruminants (PPR) virus from field outbreaks and experimentally infected sheep. Two primers (IndF and Np4) and one probe (Sp3), available or designed for the amplification/probing of the 'N' gene of PPR virus, were chosen for labeling and use in RT-PCR-ELISA based on the highest analytical sensitivity of detection of infective virus or N-gene-containing recombinant plasmid, higher nucleotide homology at the primer binding sites of the available 'N' gene sequences, and the ability to amplify the PPR viral genome from different sources of samples. RT-PCR was performed with unlabeled IndF and digoxigenin-labeled Np4 primers, followed by a microplate hybridization reaction with the biotin-labeled Sp3 probe. RT-PCR-ELISA was found to be 10-fold more sensitive than conventional RT-PCR followed by agarose-gel-based detection of the PCR product. Based on the mean + 3 S.D. of the optical density (OD) values of 47 RT-PCR-negative samples, OD values above 0.306 were considered positive in RT-PCR-ELISA. A total of 82 oculo-nasal swabs and tissue samples from suspected PPR cases were analyzed by RT-PCR and RT-PCR-ELISA, which revealed 54.87% and 58.54% positivity, respectively. In an experimentally infected sheep, both RT-PCR and RT-PCR-ELISA could detect the virus in oculo-nasal swabs from 6 days post-infection up to 9 days. At post-mortem, the PPR viral genome was detected in spleen, lymph node, lung, heart and liver. The correlation coefficient between RT-PCR-ELISA OD values and either the TCID50 of the virus or the number of DNA molecules was 0.622 and 0.657, respectively. The advantages of RT-PCR-ELISA over conventional agarose-gel-based detection of RT-PCR products are discussed.
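A minimal sketch of deriving the ELISA positivity cut-off as the mean plus 3 S.D. of the OD values of the RT-PCR-negative samples, as in the abstract above; the OD values below are invented purely for illustration.

```python
# Sketch: compute the OD cut-off from known-negative samples and classify a test OD.
from statistics import mean, stdev

negative_od = [0.10, 0.12, 0.09, 0.15, 0.11, 0.13]   # OD of known-negative samples
cutoff = mean(negative_od) + 3 * stdev(negative_od)
print(f"cut-off = {cutoff:.3f}")
print("positive" if 0.35 > cutoff else "negative")    # classify one test sample
```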
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits needed to be exchanged are derived, based on communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. Extensions to the multi-party case are straightforward and are briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. Under the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm.
Abstract:
The problem of intrusion detection and location identification in the presence of clutter is considered for a hexagonal sensor-node geometry. It is noted that in any practical application, for a given fixed intruder or clutter location, only a small number of neighboring sensor nodes will register a significant reading. Thus sensing may be regarded as a local phenomenon, and performance is strongly dependent on the local geometry of the sensor nodes. We focus on the case when the sensor nodes form a hexagonal lattice. The optimality of the hexagonal lattice with respect to density of packing and covering, and the largeness of its kissing number, suggests that this is the best possible arrangement from a sensor network viewpoint. The results presented here are clearly relevant when the particular sensing application permits a deterministic placement of sensors. The results also serve as a performance benchmark for the case of a random deployment of sensors. A novel feature of our analysis of the hexagonal sensor grid is a signal-space viewpoint which sheds light on achievable performance. Under this viewpoint, the problem of intruder detection is reduced to one of determining, in a distributed manner, the optimal decision boundary that separates the signal spaces $\mathcal{S_I}$ and $\mathcal{S_C}$ associated with intruder and clutter respectively. Given the difficulty of implementing the optimal detector, we present a low-complexity distributed algorithm under which the surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$ are separated by a well-chosen hyperplane. The algorithm is designed to be efficient in terms of communication cost by minimizing the expected number of bits transmitted by a sensor.
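A minimal sketch of a hexagonal sensor lattice and of finding the small set of nodes in the immediate vicinity of an event, illustrating the local-sensing observation above. The lattice spacing and sensing radius are illustrative assumptions.

```python
# Sketch: build a hexagonal (triangular) lattice of sensors and list those
# close enough to an event to register a significant reading.
import math

def hex_lattice(rows, cols, spacing=1.0):
    """Sensor positions on a hexagonal lattice (odd rows offset by half a step)."""
    return [(c * spacing + (r % 2) * spacing / 2,
             r * spacing * math.sqrt(3) / 2)
            for r in range(rows) for c in range(cols)]

def triggered_nodes(event, sensors, sensing_radius=1.1):
    """Only sensors close to the event register a significant reading."""
    return [s for s in sensors if math.dist(event, s) <= sensing_radius]

sensors = hex_lattice(5, 5)
print(triggered_nodes((2.3, 1.8), sensors))   # a handful of neighboring nodes
```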
Abstract:
Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. The codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used, while retaining the feature that only a fraction of the data needs to be downloaded for node repair. In this paper, we present a simple, yet powerful, framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
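A minimal sketch of the repair-by-erasure-decoding idea using the simplest possible erasure code, a single XOR parity across data nodes. It only illustrates how a failed node can be rebuilt by decoding from the surviving nodes; it is not the paper's two-code framework.

```python
# Sketch: encode data nodes with one XOR parity node, then repair any single
# failed node by erasure decoding (XOR of the survivors).
from functools import reduce

def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def encode(data_blocks):
    """Data nodes hold data_blocks; the parity node holds their XOR."""
    return data_blocks + [xor_blocks(data_blocks)]

def repair(nodes, failed_index):
    """Rebuild any single failed node from the surviving nodes."""
    surviving = [blk for i, blk in enumerate(nodes) if i != failed_index]
    return xor_blocks(surviving)

nodes = encode([b"node0", b"node1", b"node2"])
assert repair(nodes, failed_index=1) == b"node1"   # lost node recovered
```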
Abstract:
Motivated by the observation that communities in real-world social networks form due to the actions of rational individuals in the network, we propose a novel game-theory-inspired algorithm to determine communities in networks. The algorithm is decentralized and only uses local information at each node. We show the efficacy of the proposed algorithm through extensive experimentation on several real-world social network data sets.
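A minimal sketch of a decentralized, game-theory-inspired community detection loop: each node repeatedly best-responds by joining the community that maximizes a purely local utility. The utility function (links into the community minus a penalty for non-neighbors in it) is an illustrative assumption, not the paper's payoff.

```python
# Sketch: local best-response dynamics that converge to a stable labeling.
def utility(v, c, label, adj, alpha=0.5):
    members = [u for u in label if label[u] == c and u != v]
    friends = sum(1 for u in members if u in adj[v])
    return friends - alpha * (len(members) - friends)

def detect_communities(adj, max_rounds=50):
    """adj: {node: set(neighbors)}. Returns {node: community label}."""
    label = {v: v for v in adj}                       # every node starts alone
    for _ in range(max_rounds):
        changed = False
        for v in adj:
            candidates = sorted({label[u] for u in adj[v]} | {label[v]})
            best = max(candidates, key=lambda c: utility(v, c, label, adj))
            if utility(v, best, label, adj) > utility(v, label[v], label, adj):
                label[v], changed = best, True        # profitable deviation
        if not changed:                               # no node wants to deviate
            break
    return label

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(detect_communities(graph))  # -> communities {0, 1, 2} and {3, 4, 5}
```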