279 results for mimicking attack


Relevance:

10.00%

Publisher:

Abstract:

This article analyses the legality of Israel's 2007 airstrike on an alleged Syrian nuclear facility at Al-Kibar, an incident that has been largely overlooked by international lawyers to date. The absence of a threat of imminent attack from Syria means Israel's military action was not a lawful exercise of anticipatory self-defence. Yet, despite Israel's clear violation of the prohibition on the use of force, there was remarkably little condemnation from other states, suggesting the possibility of growing international support for the doctrine of pre-emptive self-defence. This article argues that the muted international reaction to Israel's pre-emptive action was the result of political factors and should not be seen as an endorsement of the legality of the airstrike. As such, a lack of opinio juris means the Al-Kibar episode cannot be viewed as extending the scope of the customary international law right of self-defence so as to permit the use of force against non-imminent threats. However, two features of this incident, namely Israel's failure to offer any legal justification for its airstrike and the international community's apparent lack of concern over legality, are also evident in other recent uses of force in the 'war on terror' context. These developments may indicate a shift in state practice involving a downgrading of the role of international law in discussions of the use of force. This may signal a declining perception of the legitimacy of the jus ad bellum, at least in cases involving minor uses of force.

Relevance:

10.00%

Publisher:

Abstract:

Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst-case assumptions about the attacker: we grant the attacker complete knowledge of the defender’s strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker’s incentives and knowledge.
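
Although the paper's model is game-theoretic, the learning-driven defender it describes can be illustrated with a short sketch. Below is a minimal multiplicative-weights defender in the spirit of the online-learning approach mentioned above; the surfaces, learning rate, and loss encoding are illustrative assumptions, not the paper's exact algorithm.

```python
import math

def reactive_defense(loss_history, n_surfaces, eta=0.5):
    """Allocate a unit defense budget across attack surfaces using a
    multiplicative-weights update driven by observed attack losses
    (an illustrative sketch, not the paper's algorithm)."""
    weights = [1.0] * n_surfaces
    allocations = []
    for losses in loss_history:  # losses[i]: damage observed on surface i this round
        total = sum(weights)
        allocations.append([w / total for w in weights])
        # Shift future defense toward surfaces that were profitably attacked.
        weights = [w * math.exp(eta * l) for w, l in zip(weights, losses)]
    return allocations

# Example: surface 1 is attacked repeatedly; defense mass migrates toward it
# rather than overreacting to any single round.
history = [[0.0, 1.0, 0.0]] * 5
for alloc in reactive_defense(history, 3):
    print([round(a, 3) for a in alloc])
```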

Relevance:

10.00%

Publisher:

Abstract:

Machine learning has become a valuable tool for detecting and preventing malicious activity. However, as more applications employ machine learning techniques in adversarial decision-making situations, increasingly powerful attacks become possible against machine learning systems. In this paper, we present three broad research directions towards the goal of developing truly secure learning. First, we suggest that finding bounds on adversarial influence is important for understanding the limits of what an attacker can and cannot do to a learning system. Second, we investigate the value of adversarial capabilities: the success of an attack depends largely on what types of information and influence the attacker has. Finally, we propose directions in technologies for secure learning and suggest lines of investigation into secure techniques for learning in adversarial environments. We intend this paper to foster discussion about the security of machine learning, and we believe that the research directions we propose represent the most important directions to pursue in the quest for secure learning.
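
As a toy illustration of adversarial influence against a learning system, the sketch below computes the smallest L2 perturbation that flips a linear classifier's decision; the weights, bias, and data point are invented for the example and do not come from the paper.

```python
import numpy as np

# Toy linear classifier: sign(w.x + b). An evader who knows w can compute
# the smallest L2 perturbation that crosses the decision boundary.
w = np.array([2.0, -1.0])
b = -0.5
x = np.array([1.0, 0.5])                          # classified as positive (malicious)

margin = (w @ x + b) / np.linalg.norm(w)          # signed distance to the boundary
delta = -(margin + 1e-3) * w / np.linalg.norm(w)  # step just across the boundary

print("original score:", w @ x + b)               # > 0
print("evasive score :", w @ (x + delta) + b)     # < 0: same point now looks benign
print("perturbation size:", np.linalg.norm(delta))
```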

Relevance:

10.00%

Publisher:

Abstract:

Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking delays associated with transcription and translation. We then show that this process may well explain more faithfully than continuous deterministic models the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein.
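
A minimal sketch of a stochastic simulation algorithm with delays is given below, assuming a hes1-like negative feedback loop in which a transcription event initiated at time t delivers its mRNA at time t + tau. This follows one common delayed-SSA variant (a reaction drawn while a pending delayed update intervenes is discarded); all rate constants, the delay value, and the Hill-type repression are illustrative assumptions, not the paper's parameters.

```python
import heapq
import math
import random

def delayed_ssa(t_end, tau=20.0, k_tr=1.0, k_tl=2.0, d_m=0.03, d_p=0.03,
                p0=100.0, hill=4, seed=1):
    """Gillespie SSA with a fixed transcriptional delay: transcription
    initiated at time t delivers its mRNA at time t + tau."""
    random.seed(seed)
    t, M, P = 0.0, 0, 0
    pending = []                                   # completion times of delayed mRNAs
    trace = []
    while t < t_end:
        rates = [k_tr / (1.0 + (P / p0) ** hill),  # 0: initiate transcription (delayed)
                 k_tl * M,                         # 1: translation M -> M + P
                 d_m * M,                          # 2: mRNA decay
                 d_p * P]                          # 3: protein decay
        total = sum(rates)
        dt = math.inf if total == 0 else -math.log(random.random()) / total
        # If a delayed mRNA completes before the next reaction, deliver it
        # instead and discard the drawn reaction.
        if pending and pending[0] <= t + dt:
            t = heapq.heappop(pending)
            M += 1
            trace.append((t, M, P))
            continue
        t += dt
        r, acc = random.random() * total, 0.0
        for i, rate in enumerate(rates):
            acc += rate
            if r < acc:
                if i == 0:
                    heapq.heappush(pending, t + tau)  # mRNA appears after the delay
                elif i == 1:
                    P += 1
                elif i == 2:
                    M -= 1
                else:
                    P -= 1
                break
        trace.append((t, M, P))
    return trace

print(delayed_ssa(500.0)[-1])   # final (time, mRNA, protein) state
```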

Relevance:

10.00%

Publisher:

Abstract:

Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed, and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial of service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and feature selection to find the most relevant subset of features from this candidate set. The review shows a trend toward deeper packet inspection to construct more relevant features through targeted content parsing. These context-sensitive features are required to detect current attacks.
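
As an example of the kind of time-based statistic that can be derived from TCP/IP headers, the sketch below counts the distinct destination ports a source contacts within a sliding time window, a classic indicator of a port scan. The window length and threshold are illustrative and not drawn from any of the reviewed systems.

```python
from collections import defaultdict

def scan_scores(packets, window=5.0, port_threshold=20):
    """Flag sources that contact many distinct destination ports within a
    sliding window, a simple header-derived time-based statistic."""
    recent = defaultdict(list)         # src_ip -> [(timestamp, dst_port), ...]
    alerts = []
    for ts, src, dst_port in packets:  # packets assumed sorted by timestamp
        events = recent[src]
        events.append((ts, dst_port))
        # Drop events older than the window.
        recent[src] = events = [(t, p) for t, p in events if ts - t <= window]
        if len({p for _, p in events}) >= port_threshold:
            alerts.append((ts, src))
    return alerts

# Example: one source probing 25 ports in one second trips the detector.
pkts = [(0.04 * i, "10.0.0.9", 1000 + i) for i in range(25)]
print(scan_scores(pkts, port_threshold=20))
```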

Relevance:

10.00%

Publisher:

Abstract:

There are many applications in aeronautical and aerospace engineering where some values of the design parameters or states cannot be provided or determined accurately. These values may relate to the geometry (wingspan, length, angles) and/or to operational flight conditions that vary due to the presence of uncertain parameters (Mach number, angle of attack, air density and temperature, etc.). These uncertain design parameters cannot be ignored in engineering design and must be taken into account in the optimisation task to produce more realistic and reliable solutions. In this paper, a robust/uncertainty design method with statistical constraints is introduced to produce a set of reliable solutions which have high performance and low sensitivity. The robust design concept is coupled with Multi-Objective Evolutionary Algorithms (MOEAs) by applying two statistical sampling formulas, the mean and the variance/standard deviation, to the optimisation fitness/objective functions. The methodology is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. It is implemented for two practical Unmanned Aerial System (UAS) design problems; the first considers robust multi-objective (single-disciplinary: aerodynamics) design optimisation and the second considers robust multidisciplinary (aero-structural) design optimisation. Numerical results show that the solutions obtained by the robust design method with statistical constraints have more reliable performance and lower sensitivity in both aerodynamics and structures when compared to the baseline design.
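
To make the two statistical sampling formulas concrete, the sketch below evaluates one candidate design by Monte Carlo sampling of the uncertain flight conditions and returns the mean and standard deviation of the objective, which a MOEA would then minimise as a pair of robust objectives. The distributions, spreads, and toy drag function are illustrative assumptions, not values from the paper.

```python
import random
import statistics

def robust_fitness(design, objective, n_samples=50, mach_sd=0.02, aoa_sd=0.5):
    """Evaluate a design under uncertain flight conditions, returning the
    (mean, standard deviation) pair used as the two robust objectives."""
    values = []
    for _ in range(n_samples):
        mach = random.gauss(0.3, mach_sd)   # uncertain Mach number
        aoa = random.gauss(4.0, aoa_sd)     # uncertain angle of attack, degrees
        values.append(objective(design, mach, aoa))
    return statistics.mean(values), statistics.stdev(values)

# Toy drag-like objective that grows away from the nominal flight condition.
def drag(design, mach, aoa):
    return (mach - 0.3) ** 2 + 0.01 * aoa ** 2 + 1.0 / design["span"]

print(robust_fitness({"span": 12.0}, drag))
```

A MOEA would minimise the first component for performance and the second for low sensitivity, yielding the trade-off front between the two.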

Relevance:

10.00%

Publisher:

Abstract:

Airports worldwide represent key forms of critical infrastructure in addition to serving as nodes in the international aviation network. While the continued operation of airports is critical to the functioning of reliable air passenger and freight transportation, these infrastructure systems face a number of sources of disturbance that threaten their operational viability. Recent examples of high-magnitude events include the eruption of Iceland's Eyjafjallajökull volcano (Folattau and Schofield 2010), the failure of multiple systems at the opening of Heathrow's Terminal 5 (Brady and Davies 2010) and the 2007 terrorist attack on Glasgow airport (Crichton 2008). While these newsworthy events do occur, a multitude of lower-level, more common disturbances also have the potential to cause significant discontinuity to airport operations. Regional airports face a unique set of challenges, particularly in a nation like Australia where they serve to link otherwise remote and isolated communities to metropolitan hubs (Wheeler 2005), often without the resources and political attention received by larger capital-city airports. This paper discusses conceptual relationships between Business Continuity Management (BCM) and High Reliability Theory, and proposes BCM as an appropriate risk-based management process to ensure continued airport operation in the face of uncertainty. In addition, it argues that correctly implemented BCM can lead to highly reliable organisations. This is framed within the broader context of critical infrastructures and the need for adequate crisis management approaches suited to their unique requirements (Boin and McConnell 2007).

Relevance:

10.00%

Publisher:

Abstract:

Client puzzles are moderately hard cryptographic problems, neither easy nor impossible to solve, that can be used as a counter-measure against denial of service attacks on network protocols. Puzzles based on modular exponentiation are attractive as they provide important properties such as non-parallelisability, deterministic solving time, and linear granularity. We propose an efficient client puzzle based on modular exponentiation. Our puzzle requires only a few modular multiplications for puzzle generation and verification. For a server under denial of service attack, this is a significant improvement, as the best known non-parallelisable puzzle, proposed by Karame and Capkun (ESORICS 2010), requires at least a 2k-bit modular exponentiation, where k is a security parameter. We show that our puzzle satisfies the unforgeability and difficulty properties defined by Chen et al. (Asiacrypt 2009). We present experimental results which show that, for 1024-bit moduli, our proposed puzzle can be up to 30 times faster to verify than the Karame-Capkun puzzle and 99 times faster than Rivest et al.'s time-lock puzzle.
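
To make the non-parallelisability property concrete, here is a minimal sketch of a repeated-squaring puzzle in the style of Rivest et al.'s time-lock construction mentioned above (not the puzzle proposed in this paper): the solver must perform t inherently sequential modular squarings, while a server holding the factorisation of n verifies with a single fast exponentiation. The parameters are toy-sized for illustration; real deployments would use a modulus of at least 1024 bits.

```python
import random

# Toy RSA-style modulus; both factors are prime.
p, q = 1000003, 1000033
n, phi = p * q, (p - 1) * (q - 1)

def make_puzzle(t):
    """Server side: pick a random x; the solver must compute x^(2^t) mod n."""
    x = random.randrange(2, n - 1)   # almost surely coprime to n
    return x, t

def solve(x, t):
    """Client side: t sequential squarings, believed non-parallelisable."""
    for _ in range(t):
        x = x * x % n
    return x

def verify(x, t, answer):
    """Server side: with the trapdoor phi(n), one fast exponentiation
    replaces the t squarings, since x^(2^t) = x^(2^t mod phi) (mod n)."""
    return answer == pow(x, pow(2, t, phi), n)

x, t = make_puzzle(1 << 16)
print(verify(x, t, solve(x, t)))   # True
```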

Relevance:

10.00%

Publisher:

Abstract:

Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect, and self-similarity. The search for additional meaningful properties and for the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity.

The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool for computing the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network, because the shortest distance between nodes is larger in the skeleton, so for a fixed box size more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks for five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. Using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This will be useful for our treatment of the networks in the third part of the thesis.

The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since each consists of a geometrical figure that repeats on an ever-reduced scale, and fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous: there is rarely an identical motif repeated on all scales. Their singularity may vary across different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. The algorithm is demonstrated by computing the generalized fractal dimensions of several theoretical networks, namely scale-free, small-world, and random networks, and of a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. Multifractal analysis thus provides a potentially useful tool for gene clustering and identification.

The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of the network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by the Hurst exponent. We verify the validity of the method by showing that time series with stronger correlation, and hence larger Hurst exponent, tend to have smaller fractal dimension and hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. We find that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that a large percentage of nodes must be destroyed before the network collapses into isolated parts, whereas the degree distribution of HVG networks of fractional Brownian motions has exponential tails, implying that HVG networks would not survive the same kind of attack.
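
The HVG construction used above is simple enough to state directly. The following is a minimal sketch of the standard horizontal visibility criterion (a naive all-pairs implementation for illustration, not the thesis's code): nodes i < j are connected iff every intermediate value lies strictly below the smaller of the two endpoint values.

```python
import itertools

def horizontal_visibility_graph(series):
    """Build the HVG of a time series: nodes i < j are linked iff every
    intermediate value is strictly below min(series[i], series[j])."""
    edges = set()
    for i, j in itertools.combinations(range(len(series)), 2):
        if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
            edges.add((i, j))
    return edges

series = [3.0, 1.0, 2.0, 0.5, 4.0]
print(sorted(horizontal_visibility_graph(series)))
# [(0, 1), (0, 2), (0, 4), (1, 2), (2, 3), (2, 4), (3, 4)]
```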

Relevance:

10.00%

Publisher:

Abstract:

Key establishment is a crucial cryptographic primitive for building secure communication channels between two parties in a network. It has been studied extensively in theory and widely deployed in practice. In the research literature a typical protocol in the public-key setting aims for key secrecy and mutual authentication. However, there are many important practical scenarios where mutual authentication is undesirable, such as in anonymity networks like Tor, or is difficult to achieve due to insufficient public-key infrastructure at the user level, as is the case on the Internet today. In this work we are concerned with the scenario where two parties establish a private shared session key, but only one party authenticates to the other; in fact, the unauthenticated party may wish to have strong anonymity guarantees. We present a desirable set of security, authentication, and anonymity goals for this setting and develop a model which captures these properties. Our approach allows for clients to choose among different levels of authentication. We also describe an attack on a previous protocol of Øverlier and Syverson, and present a new, efficient key exchange protocol that provides one-way authentication and anonymity.
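
A minimal sketch of one-way authenticated key exchange in this spirit is given below, using X25519 and HKDF from the Python cryptography package: the client stays anonymous, contributing only an ephemeral key, while the server mixes in its long-term static key, knowledge of which is what authenticates it. This is a generic ntor-style illustration under stated assumptions, not the protocol proposed in the paper.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def kdf(secret):
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"one-way-authenticated-ke").derive(secret)

# Server's long-term static key; its public half is known to clients in advance.
server_static = X25519PrivateKey.generate()

# Client: anonymous, contributes only a fresh ephemeral key.
client_eph = X25519PrivateKey.generate()

# Server: fresh ephemeral key per session, for forward secrecy.
server_eph = X25519PrivateKey.generate()

# Both sides combine an ephemeral-ephemeral share and an ephemeral-static
# share; only the holder of the server's static secret can compute the
# second one, which authenticates the server to the still-anonymous client.
client_key = kdf(client_eph.exchange(server_eph.public_key()) +
                 client_eph.exchange(server_static.public_key()))
server_key = kdf(server_eph.exchange(client_eph.public_key()) +
                 server_static.exchange(client_eph.public_key()))

assert client_key == server_key
```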

Relevance:

10.00%

Publisher:

Abstract:

Existing algebraic analyses of the ZUC cipher indicate that the cipher should be secure against algebraic attacks. In this paper, we present an alternative algebraic analysis method for the ZUC stream cipher, where a combiner is used to represent the nonlinear function and to derive equations representing the cipher. Using this approach, the initial state of ZUC can be recovered from 2^97 observed words of keystream, with a complexity of 2^282 operations. This method is more successful when applied to a modified version of ZUC in which the number of output words per clock is increased. If the cipher outputs 120 bits of keystream per clock, the attack can succeed with 219 observed keystream bits and 2^47 operations. Therefore, the security of ZUC against algebraic attacks could be significantly reduced if its throughput were to be increased for efficiency.

Relevance:

10.00%

Publisher:

Abstract:

Both the SSS and SOBER-t32 stream cipher designs use a single word-based shift register and a nonlinear filter function to produce keystream. In this paper we show that the algebraic attack method previously applied to SOBER-t32 is prevented from succeeding on SSS by the use of the key-dependent substitution box (SBox) in the nonlinear filter of SSS. Additional assumptions and modifications to the SSS cipher in an attempt to enable algebraic analysis result in other difficulties that also render the algebraic attack infeasible. Based on these results, we conclude that a well-chosen key-dependent substitution box used in the nonlinear filter of a stream cipher provides resistance against such algebraic attacks.

Relevance:

10.00%

Publisher:

Abstract:

A self-escrowed public key infrastructure (SE-PKI) combines the usual functionality of a public-key infrastructure with the ability to recover private keys given some trap-door information. We present an additively homomorphic variant of an existing SE-PKI for ElGamal encryption. We also propose a new SE-PKI based on the ElGamal and Okamoto-Uchiyama cryptosystems that is more efficient than the previous one. This is the first SE-PKI that does not suffer from the key doubling problem of previous SE-PKI proposals. Additionally, we present the first self-escrowed encryption schemes secure against chosen-ciphertext attack in the standard model. These schemes are also quite efficient, being based on the Cramer-Shoup cryptosystem and the Kurosawa-Desmedt hybrid variant in different groups.
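
For readers unfamiliar with additively homomorphic ElGamal, the sketch below shows the standard "exponential" variant, in which messages are encoded in the exponent so that componentwise multiplication of ciphertexts adds the underlying plaintexts. This is a generic illustration of the homomorphism only, not the paper's self-escrowed construction, and the toy parameters are assumptions.

```python
import random

# Exponential ElGamal: Enc(m) = (g^r, g^m * h^r). Multiplying ciphertexts
# componentwise yields an encryption of the sum of the plaintexts.
p = 2 ** 127 - 1            # a Mersenne prime; illustration only
g = 3
sk = random.randrange(2, p - 1)
h = pow(g, sk, p)           # public key

def enc(m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p

def add(c1, c2):
    # (g^(r1+r2), g^(m1+m2) * h^(r1+r2)): an encryption of m1 + m2.
    return c1[0] * c2[0] % p, c1[1] * c2[1] % p

def dec_exp(c):
    # Decryption recovers g^m; recovering m itself needs a small discrete
    # log, so this variant suits small message spaces (e.g., counters).
    a, b = c
    return b * pow(a, p - 1 - sk, p) % p

c = add(enc(20), enc(22))
print(dec_exp(c) == pow(g, 42, p))   # True: 20 + 22 = 42 under encryption
```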

Relevance:

10.00%

Publisher:

Abstract:

The photocatalytic disinfection of Enterobacter cloacae and Escherichia coli using microwave (MW), convection hydrothermal (HT) and Degussa P25 titania was investigated in suspension and immobilized reactors. In suspension reactors, MW-treated TiO2 was the most efficient catalyst (per unit weight of catalyst) for the disinfection of E. cloacae. However, HT-treated TiO2 was approximately 10 times more efficient than MW or P25 titania for the disinfection of E. coli suspensions in surface water using the immobilized reactor. In immobilized experiments using surface water, a significant amount of photolysis was observed with the MW- and HT-treated films; however, disinfection on P25 films was primarily attributed to photocatalysis. Competition from inorganic ions and humic substances for hydroxyl radicals during photocatalytic experiments, as well as humic substances physically screening the cells from UV and hydroxyl radical attack, resulted in low rates of disinfection. A decrease in colony size (from 1.5 to 0.3 mm) was noted during photocatalytic experiments; the smaller-than-average colonies were thought to result from sublethal •OH and O2•− attack. Catalyst fouling was observed following experiments in surface water, and the ability to regenerate the surface was demonstrated using the photocatalytic degradation of oxalic acid as a model test system.

Relevance:

10.00%

Publisher:

Abstract:

Blends of lignin and poly(hydroxybutyrate) (PHB) were obtained by melt extrusion. They were buried in garden soil for up to 12 months, and the extent and mechanism of degradation were investigated by gravimetric analysis, thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM) and Fourier transform infra-red spectroscopy (FTIR) over the entire range of compositions. The PHB films disintegrated and lost 45 wt% of their mass within 12 months. This figure dropped to 12 wt% when only 10 wt% of lignin was present, suggesting that lignin both inhibited and slowed the degradation of PHB. TGA and DSC indicated structural changes within the lignin/PHB matrix with burial time, while FTIR results confirmed the fragmentation of the PHB polymer. XPS revealed an accumulation of biofilms on the surface of buried samples, providing evidence of a biodegradation mechanism. Significant surface roughness was observed on the PHB films due to microbial attack by both loosely and strongly associated micro-organisms. The presence of lignin in the blends may have inhibited colonisation by these micro-organisms, making the blends more resistant to microbial attack. Analysis suggested that lignin formed strong hydrogen bonds with PHB in the buried samples, likely reducing the rate of PHB breakdown and preventing rapid degradation of the blends.