918 results for Algebraic attack
Abstract:
There are many applications in aeronautical/aerospace engineering where some values of the design parameters or states cannot be provided or determined accurately. These values can be related to the geometry (wingspan, length, angles) and/or to operational flight conditions that vary due to the presence of uncertain parameters (Mach number, angle of attack, air density and temperature, etc.). These uncertain design parameters cannot be ignored in engineering design and must be taken into account in the optimisation task to produce more realistic and reliable solutions. In this paper, a robust/uncertainty design method with statistical constraints is introduced to produce a set of reliable solutions which have high performance and low sensitivity. The robust design concept, coupled with Multi-Objective Evolutionary Algorithms (MOEAs), is defined by applying two statistical sampling formulas, the mean and the variance/standard deviation, to the optimisation fitness/objective functions. The methodology is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. It is implemented for two practical Unmanned Aerial System (UAS) design problems: the first case considers robust multi-objective (single-disciplinary: aerodynamics) design optimisation and the second considers robust multidisciplinary (aero-structural) design optimisation. Numerical results show that the solutions obtained by the robust design method with statistical constraints deliver more reliable performance and lower sensitivity in both aerodynamics and structures than the baseline design.
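The statistical constraints described above can be illustrated with a short sketch. The snippet below, a minimal illustration assuming a hypothetical drag model and Gaussian uncertainty on Mach number and angle of attack (function names and parameter ranges are illustrative, not taken from the paper), shows how the mean and standard deviation of an objective could be sampled and returned as the robust fitness pair handed to an MOEA.

```python
# Illustrative sketch only: sampling uncertain flight conditions and aggregating
# mean and standard deviation of the objective, as the abstract describes.
import random
import statistics

def drag_coefficient(design, mach, alpha):
    """Placeholder aerodynamic model; a real study would call a CFD solver here."""
    return design["cd0"] + 0.05 * (mach - 0.7) ** 2 + 0.01 * (alpha - design["alpha_opt"]) ** 2

def robust_fitness(design, n_samples=50):
    """Return (mean, std) of the objective over sampled uncertain conditions."""
    samples = []
    for _ in range(n_samples):
        mach = random.gauss(0.7, 0.02)    # assumed uncertainty in Mach number
        alpha = random.gauss(3.0, 0.5)    # assumed uncertainty in angle of attack (deg)
        samples.append(drag_coefficient(design, mach, alpha))
    return statistics.mean(samples), statistics.stdev(samples)

# Both statistics become objectives (or constraints) for the MOEA:
# minimise the mean for performance, minimise the std for low sensitivity.
mean_cd, std_cd = robust_fitness({"cd0": 0.02, "alpha_opt": 2.5})
```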
Abstract:
Airports worldwide represent key forms of critical infrastructure in addition to serving as nodes in the international aviation network. While the continued operation of airports is critical to the functioning of reliable air passenger and freight transportation, these infrastructure systems face a number of sources of disturbance that threaten their operational viability. Recent examples of high-magnitude events include the eruption of Iceland's Eyjafjallajokull volcano (Folattau and Schofield 2010), the failure of multiple systems at the opening of Heathrow's Terminal 5 (Brady and Davies 2010) and the 2007 Glasgow Airport terrorist attack (Crichton 2008). While these newsworthy events do occur, a multitude of lower-level, more common disturbances also have the potential to cause significant discontinuity to airport operations. Regional airports face a unique set of challenges, particularly in a nation like Australia where they serve to link otherwise remote and isolated communities to metropolitan hubs (Wheeler 2005), often without the resources and political attention received by larger capital city airports. This paper discusses conceptual relationships between Business Continuity Management (BCM) and High Reliability Theory, and proposes BCM as an appropriate risk-based management process to ensure continued airport operation in the face of uncertainty. In addition, it argues that correctly implemented BCM can lead to highly reliable organisations. This is framed within the broader context of critical infrastructures and the need for adequate crisis management approaches suited to their unique requirements (Boin and McConnell 2007).
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation results in a highly efficient and stable overall multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even in systems with abundant computing resources. This thesis proposes a cache aware adaptive closed loop scheduling framework as an efficient resource allocation strategy for the highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this highly dynamic resource allocation problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the resource allocation problem are well considered. These approaches employ either static or dynamic optimization methods or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of these approaches, which consider the dynamic nature of multiprocessor systems, apply only a basic closed loop system; hence, they fail to take the time-varying and uncertain nature of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed loop cache aware adaptive scheduling framework takes the resource availability and the resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of the thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time series statistics. For the identified cache resource dynamics, our closed loop cache aware adaptive scheduling framework enforces instruction fairness for the threads. Fairness, in the context of our research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed loop cache aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which reduces the co-runner cache impact on thread performance. The second contribution is the development of relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed loop aspect to the cache aware scheduling system. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimator; the QR recursive least squares (RLS) algorithm is applied within our closed loop cache aware adaptive scheduling framework to estimate highly uncertain and time-varying cache resource patterns of threads.
The second minor contribution is the design of the controller design module; an algebraic controller design algorithm, pole placement, is utilized to design the relevant controller, which is able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache aware scheduling system together constitute our final framework, the closed loop cache aware adaptive scheduling framework. The third minor contribution is the validation of this framework's efficiency in overcoming co-runner cache dependency. Time-series statistical counters are developed for the M-Sim multi-core simulator, and the theoretical findings and mathematical formulations are implemented as MATLAB m-files. In this way, the overall framework is tested and the experimental outcomes are analyzed. From these outcomes, it is concluded that our closed loop cache aware adaptive scheduling framework successfully drives the co-runner cache dependent thread instruction count towards the co-runner independent instruction count, with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
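The abstract names a QR recursive least squares (RLS) estimator for the thread cache-miss pattern. As a rough illustration only, the following covariance-form RLS sketch (a standard textbook variant, not the QR-factorised implementation used in the thesis, and with an assumed regressor of recent miss counts) shows how such a time-varying pattern could be tracked online.

```python
# Minimal sketch: online estimation of a time-varying cache-miss model with RLS.
import numpy as np

class RLSEstimator:
    def __init__(self, order, forgetting=0.98):
        self.theta = np.zeros(order)      # estimated model parameters
        self.P = np.eye(order) * 1e3      # inverse correlation matrix
        self.lam = forgetting             # forgetting factor for time variation

    def update(self, phi, y):
        """phi: regressor (e.g. recent cache-miss counts), y: new observation."""
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        err = y - phi @ self.theta                           # prediction error
        self.theta += k * err
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Example: predict the next miss count from the last three observations.
est = RLSEstimator(order=3)
misses = [120, 130, 128, 140, 135, 150]
for t in range(3, len(misses)):
    est.update(misses[t - 3:t], misses[t])
```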
Abstract:
Client puzzles are moderately hard cryptographic problems, neither easy nor impossible to solve, that can be used as a counter-measure against denial of service attacks on network protocols. Puzzles based on modular exponentiation are attractive as they provide important properties such as non-parallelisability, deterministic solving time, and linear granularity. We propose an efficient client puzzle based on modular exponentiation. Our puzzle requires only a few modular multiplications for puzzle generation and verification. For a server under denial of service attack, this is a significant improvement, as the best known non-parallelisable puzzle, proposed by Karame and Capkun (ESORICS 2010), requires at least a 2k-bit modular exponentiation, where k is a security parameter. We show that our puzzle satisfies the unforgeability and difficulty properties defined by Chen et al. (Asiacrypt 2009). We present experimental results which show that, for 1024-bit moduli, our proposed puzzle can be up to 30 times faster to verify than the Karame-Capkun puzzle and 99 times faster than Rivest et al.'s time-lock puzzle.
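For context, the Rivest et al. time-lock puzzle mentioned as a baseline relies on repeated modular squaring, which is inherently sequential. The toy sketch below (deliberately tiny parameters, and not the puzzle proposed in this paper) illustrates why such puzzles are non-parallelisable and why the issuer, who knows the factorisation of the modulus, can prepare them cheaply.

```python
# Toy sketch of a Rivest-style time-lock puzzle; parameters are far too small for real use.
import math
import random

def make_puzzle(p, q, t):
    """Puzzle issuer knows p and q, so it can shortcut the work via Euler's theorem."""
    n = p * q
    phi = (p - 1) * (q - 1)
    while True:
        x = random.randrange(2, n)
        if math.gcd(x, n) == 1:
            break
    solution = pow(x, pow(2, t, phi), n)   # fast path: reduce the exponent mod phi(n)
    return (n, x, t), solution

def solve_puzzle(n, x, t):
    """A solver without p and q must perform t sequential squarings: x^(2^t) mod n."""
    y = x
    for _ in range(t):
        y = (y * y) % n
    return y

(n, x, t), expected = make_puzzle(p=1009, q=1013, t=10_000)
assert solve_puzzle(n, x, t) == expected
```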
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. Using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This finding will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since they consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary across different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species.
Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. This multifractal analysis thus provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes by vectors of a certain length in the time series, and weight the edges between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence a larger Hurst exponent, tend to have a smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, whereas for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
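The horizontal visibility graph construction used in the last part of this abstract can be stated compactly: time points become nodes, and two points are linked when every value between them lies strictly below both. A minimal sketch follows; it is an O(n^2) illustration over a toy series, not the implementation used in the thesis.

```python
# Minimal horizontal visibility graph (HVG) construction from a scalar time series.
def horizontal_visibility_graph(x):
    n = len(x)
    edges = set()
    for i in range(n - 1):
        edges.add((i, i + 1))                  # consecutive points always see each other
        top = x[i + 1]                         # running max of the values between i and j
        for j in range(i + 2, n):
            if top < x[i] and top < x[j]:      # all intermediate values stay below both ends
                edges.add((i, j))
            top = max(top, x[j])
    return edges

# Degree counts of the HVG; for fractional Brownian motion the abstract reports
# exponential degree tails, in contrast with the power-law tails of the ordinary VG.
series = [0.8, 0.3, 0.9, 0.1, 0.5, 0.7, 0.2]
degrees = {}
for a, b in horizontal_visibility_graph(series):
    degrees[a] = degrees.get(a, 0) + 1
    degrees[b] = degrees.get(b, 0) + 1
```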
Abstract:
Key establishment is a crucial cryptographic primitive for building secure communication channels between two parties in a network. It has been studied extensively in theory and widely deployed in practice. In the research literature, a typical protocol in the public-key setting aims for key secrecy and mutual authentication. However, there are many important practical scenarios where mutual authentication is undesirable, such as in anonymity networks like Tor, or is difficult to achieve due to insufficient public-key infrastructure at the user level, as is the case on the Internet today. In this work we are concerned with the scenario where two parties establish a private shared session key, but only one party authenticates to the other; in fact, the unauthenticated party may wish to have strong anonymity guarantees. We present a desirable set of security, authentication, and anonymity goals for this setting and develop a model which captures these properties. Our approach allows clients to choose among different levels of authentication. We also describe an attack on a previous protocol of Øverlier and Syverson, and present a new, efficient key exchange protocol that provides one-way authentication and anonymity.
Abstract:
A self-escrowed public key infrastructure (SE-PKI) combines the usual functionality of a public-key infrastructure with the ability to recover private keys given some trap-door information. We present an additively homomorphic variant of an existing SE-PKI for ElGamal encryption. We also propose a new SE-PKI based on the ElGamal and Okamoto-Uchiyama cryptosystems that is more efficient than the previous SE-PKI. This is the first SE-PKI that does not suffer from the key-doubling problem of previous SE-PKI proposals. Additionally, we present the first self-escrowed encryption schemes secure against chosen-ciphertext attack in the standard model. These schemes are also quite efficient and are based on the Cramer-Shoup cryptosystem and the Kurosawa-Desmedt hybrid variant, in different groups.
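For readers unfamiliar with additively homomorphic ElGamal, the usual construction ("exponential" ElGamal) encrypts g^m rather than m, so multiplying ciphertexts adds plaintexts. The toy sketch below uses deliberately tiny, insecure parameters and omits the self-escrow machinery described in the abstract; it only illustrates the homomorphic property.

```python
# Toy additively homomorphic ("exponential") ElGamal over a small prime-order subgroup.
import random

p = 467          # safe prime, p = 2q + 1 (toy size only)
q = 233          # prime subgroup order
g = 4            # generator of the order-q subgroup

sk = random.randrange(1, q)          # private key
pk = pow(g, sk, p)                   # public key

def encrypt(m):
    r = random.randrange(1, q)
    return (pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p)   # encrypt g^m, not m

def add_ciphertexts(c1, c2):
    return ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)            # homomorphic addition

def decrypt_small(c):
    gm = (c[1] * pow(c[0], q - sk, p)) % p     # recover g^m
    for m in range(q):                         # brute-force discrete log (small m only)
        if pow(g, m, p) == gm:
            return m

c = add_ciphertexts(encrypt(5), encrypt(7))
assert decrypt_small(c) == 12
```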
Abstract:
The photocatalytic disinfection of Enterobacter cloacae and Escherichia coli using microwave (MW), convection hydrothermal (HT) and Degussa P25 titania was investigated in suspension and immobilized reactors. In suspension reactors, MW-treated TiO2 was the most efficient catalyst (per unit weight of catalyst) for the disinfection of E. cloacae. However, HT-treated TiO2 was approximately 10 times more efficient than MW or P25 titania for the disinfection of E. coli suspensions in surface water using the immobilized reactor. In immobilized experiments using surface water, a significant amount of photolysis was observed using the MW- and HT-treated films; however, disinfection on P25 films was primarily attributed to photocatalysis. Competitive action of inorganic ions and humic substances for hydroxyl radicals during photocatalytic experiments, as well as humic substances physically screening the cells from UV and hydroxyl radical attack, resulted in low rates of disinfection. A decrease in colony size (from 1.5 to 0.3 mm) was noted during photocatalytic experiments. The smaller-than-average colonies were thought to result from sublethal hydroxyl radical (•OH) and superoxide (O2•−) attack. Catalyst fouling was observed following experiments in surface water, and the ability to regenerate the surface was demonstrated using the photocatalytic degradation of oxalic acid as a model test system.
Abstract:
Blends of lignin and poly(hydroxybutyrate) (PHB) were obtained by melt extrusion. They were buried in garden soil for up to 12 months, and the extent and mechanism of degradation were investigated by gravimetric analysis, thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM) and Fourier transform infra-red spectroscopy (FTIR) over the entire range of compositions. The PHB films disintegrated and lost 45 wt% of their mass within 12 months. This value dropped to 12 wt% when only 10 wt% of lignin was present, suggesting that lignin both inhibited and slowed the degradation of PHB. TGA and DSC indicated structural changes within the lignin/PHB matrix with burial time, while FTIR results confirmed the fragmentation of the PHB polymer. XPS revealed an accumulation of biofilms on the surface of buried samples, providing evidence of a biodegradation mechanism. Significant surface roughness was observed on the PHB films due to microbial attack by both loosely and strongly associated micro-organisms. The presence of lignin in the blends may have inhibited the colonisation of the micro-organisms and caused the blends to be more resistant to microbial attack. Analysis suggested that lignin formed strong hydrogen bonds with PHB in the buried samples, and it is likely that the rate of breakdown of PHB was thereby reduced, preventing rapid degradation of the blends.
Abstract:
Non-state insurgent actors are too weak to compel powerful adversaries to their will, so they use violence to coerce. A principal objective is to grow and sustain violent resistance to the point that it either militarily challenges the state or, more commonly, generates unacceptable political costs. To survive, insurgents must shift popular support away from the state, and to grow they must secure it. State actor policies and actions perceived as illegitimate and oppressive by the insurgent constituency can generate these shifts. A promising insurgent strategy is to attack states in ways that lead angry publics and leaders to discount the historically established risks and take flawed but popular decisions to use repressive measures. Such decisions may be enabled by a visceral belief in the power of coercion and the selective use of examples where robust measures have indeed suppressed resistance. To avoid such counterproductive behaviours, the cases of apparent 'successful repression' must be understood. This thesis tests whether robust state action is correlated with reduced support for insurgents, analyses the causal mechanisms of such shifts, and examines whether any such reduction is due to compulsion or coercion. The approach is founded on prior research by the RAND Corporation, which analysed the 30 most recently resolved insurgencies worldwide to determine factors of counterinsurgent success. This new study first re-analyses their data at a finer resolution with new queries that investigate the relationship between repression and insurgent active support. Having determined that, in general, repression does not correlate with decreased insurgent support, this study then analyses two cases in which the data suggest repression is likely to be reducing insurgent support: the PKK in Turkey and the insurgency against the Vietnamese-sponsored regime following the ousting of the Khmer Rouge. It applies 'structured-focused' case analysis with questions partly built from the insurgency model of Leites and Wolf, who are associated with the advocacy of robust US means in Vietnam. This is thus a test of 'most difficult' cases using a 'least likely' test model. Nevertheless, the findings refute the deterrence argument of 'iron fist' advocates. Robust approaches may physically prevent effective support of insurgents, but they do not coercively deter people from being willing to actively support the insurgency.
Abstract:
This paper presents a model for generating a MAC tag with a stream cipher using the input message indirectly. Several recent proposals represent instances of this model with slightly different options. We investigate the security of this model for different options, and identify cases which permit forgery attacks. Based on this, we present a new forgery attack on version 1.4 of 128-EIA3. Design recommendations to enhance the security of proposals following this general model are given.
Abstract:
12.1 Drugs for hypertension
12.1.1 Epidemiology and pathophysiology
12.1.2 Diuretics for hypertension
12.1.3 Vasodilators for hypertension
12.1.4 β-Adrenoceptor blockers for hypertension
12.2 Drugs for angina
12.2.1 Typical angina
12.2.2 Drugs to treat an attack of typical angina
12.2.3 Drugs to prevent an attack of typical angina
12.2.4 Atypical angina
12.3 Drugs for heart failure
12.3.1 The heart failure epidemic
12.3.2 Compensatory changes in heart failure
12.3.3 Diuretics for heart failure
12.3.4 ACE inhibitors and AT1-receptor antagonists
12.3.5 β-Adrenoceptor antagonists
12.3.6 Digoxin
Abstract:
Authigenic illite-smectite and chlorite in reservoir sandstones from several Pacific rim sedimentary basins in Australia and New Zealand have been examined using an Electroscan Environmental Scanning Electron Microscope (ESEM) before, during, and after treatment with fresh water and HCl, respectively. These dynamic experiments are possible in the ESEM because, unlike conventional SEMs that require a high vacuum in the sample chamber (10⁻⁶ torr), the ESEM will operate at pressures of up to 20 torr. This means that materials and processes can be examined at high magnifications in their natural states, wet or dry, and over a range of temperatures (-20 to 1000 degrees C) and pressures. Sandstones containing the illite-smectite (60-70% illite interlayers) were flushed with fresh water for periods of up to 12 hours. Close examination of the same illite-smectite-lined or -filled pores, both before and after freshwater treatment, showed that the morphology of the illite-smectite was not changed by prolonged freshwater treatment. Chlorite-bearing sandstones (Fe-rich chlorite) were reacted with 1M to 10M HCl at temperatures of up to 80 degrees C and for periods of up to 48 hours. Before treatment the chlorites showed typically platy morphologies. After HCl treatment the chlorite grains were coated with an amorphous gel composed of Ca, Cl, and possibly amorphous Si, as determined by EDS analyses on the freshly treated rock surface. Brief washing in water removed this surface coating and revealed apparently unchanged chlorite showing no signs of dissolution or acid attack. However, although the chlorite showed no morphological changes, elemental analysis detected only silicon and oxygen.
Abstract:
NeSSi (Network Security Simulator) is a novel network simulation tool which incorporates a variety of features relevant to network security, distinguishing it from general-purpose network simulators. Its capabilities, such as profile-based automated attack generation, traffic analysis and support for detection algorithm plug-ins, allow it to be used for security research and evaluation purposes. NeSSi has been successfully used for testing intrusion detection algorithms, conducting network security analysis and developing overlay security frameworks. NeSSi is built upon the agent framework JIAC, resulting in a distributed and extensible architecture. In this paper, we provide an overview of the NeSSi architecture as well as its distinguishing features, and briefly demonstrate its application to current security research projects.
Abstract:
Sfinks is a shift-register-based stream cipher designed for hardware implementation and submitted to the eSTREAM project. In this paper, we analyse the initialisation process of Sfinks. We demonstrate a slid property of the loaded state of the Sfinks cipher, whereby multiple key-IV pairs may produce phase-shifted keystream sequences. The state update functions of both the initialisation process and keystream generation, as well as the padding pattern, affect the generation of the slid pairs.