918 results for Algebraic attack
Abstract:
The effect of HCl on authigenic chlorite in three different sandstones has been examined using an Environmental Scanning Electron Microscope (ESEM), together with conventional analytical techniques. The ESEM enabled chlorites to be directly observed in situ at high magnifications during HCl treatment, and was particularly effective in allowing the same chlorite areas to be closely compared before and after acid treatment. Chlorites were reacted with 1M to 10M HCl at temperatures up to 80°C and for periods up to five months. After all treatments, chlorites show extensive leaching of iron, magnesium and aluminum, and their crystalline structure is destroyed. However, despite these major compositional and structural changes, chlorites show little or no visible evidence of acid attack, with precise morphological detail of individual plates preserved in all samples following acid treatments. Chlorite dissolution, sensu stricto, did not occur as a result of acidization of the host sandstones. Acid-treated chlorites are likely to exist in a structurally weakened state that may make them susceptible to physical disintegration during fluid flow. Accordingly, fines migration may be a significant engineering problem associated with the acidization of chlorite-bearing sandstones. © 1993.
Abstract:
A5/1 is a shift-register-based stream cipher which uses a majority clocking rule to update its registers. It is designed to provide privacy for the GSM system. In this paper, we analyse the initialisation process of A5/1. We demonstrate a sliding property of the A5/1 cipher, whereby every valid internal state is also a legitimate loaded state and multiple key-IV pairs produce phase-shifted keystream sequences. We describe a possible ciphertext-only attack based on this property.
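The majority clocking rule at the heart of A5/1 can be sketched as follows. A minimal sketch using the published A5/1 parameters (register lengths 19, 22 and 23 bits; clocking-bit positions 8, 10 and 10; standard feedback taps); the indexing convention and helper names are our own:

```python
def majority(a, b, c):
    # Majority of three bits: 1 iff at least two inputs are 1.
    return (a & b) | (a & c) | (b & c)

def clock(registers, clock_bits, taps):
    # registers: three LFSRs as bit lists (lengths 19, 22, 23 in A5/1);
    # clock_bits: each register's clocking-bit index (8, 10, 10);
    # taps: each register's feedback tap positions.
    m = majority(*(r[i] for r, i in zip(registers, clock_bits)))
    for r, i, t in zip(registers, clock_bits, taps):
        if r[i] == m:        # only registers agreeing with the majority move
            fb = 0
            for pos in t:    # XOR the feedback taps
                fb ^= r[pos]
            r.insert(0, fb)  # shift in the feedback bit
            r.pop()          # drop the oldest bit
    return registers
```

Because the majority always agrees with at least two of the three clocking bits, each step clocks either two or all three registers, which is the irregular-clocking behaviour the sliding property exploits.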
Abstract:
The Pattern and Structure Mathematics Awareness Project (PASMAP) has investigated the development of patterning and early algebraic reasoning among 4 to 8 year olds over a series of related studies. We assert that an awareness of mathematical pattern and structure enables mathematical thinking and simple forms of generalisation from an early age. The project aims to promote a strong foundation for mathematical development by focusing on critical, underlying features of mathematics learning. This paper provides an overview of key aspects of the assessment and intervention, and analyses of the impact of PASMAP on students’ representation, abstraction and generalisation of mathematical ideas. A purposive sample of four large primary schools, two in Sydney and two in Brisbane, representing 316 students from diverse socio-economic and cultural contexts, participated in the evaluation throughout the 2009 school year and a follow-up assessment in 2010. Two different mathematics programs were implemented: in each school, two Kindergarten teachers implemented the PASMAP and another two implemented their regular program. The study shows that both groups of students made substantial gains on the ‘I Can Do Maths’ assessment and a Pattern and Structure Assessment (PASA) interview, but highly significant differences were found on the latter with PASMAP students outperforming the regular group on PASA scores. Qualitative analysis of students’ responses for structural development showed increased levels for the PASMAP students; those categorised as low ability developed improved structural responses over a relatively short period of time.
Abstract:
A complex attack is a sequence of temporally and spatially separated legal and illegal actions, each of which can be detected by various IDSs, but which as a whole constitute a powerful attack. IDSs fall short of detecting and modeling complex attacks; therefore, new methods are required. This paper presents a formal methodology for modeling and detection of complex attacks in three phases: (1) we extend the basic attack tree (AT) approach to capture temporal dependencies between components and expiration of an attack, (2) using the enhanced AT, we build a tree automaton which accepts a sequence of actions from input message streams from various sources if there is a traversal of the AT from leaves to root, and (3) we show how to construct an enhanced parallel automaton that has each tree automaton as a subroutine. We use simulation to test our methods, and provide a case study of representing attacks in WLANs.
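The idea of an attack tree extended with expiration of component detections can be sketched as follows. This is a hypothetical illustration with invented class and method names, not the paper's actual tree-automaton construction:

```python
class ATNode:
    # Attack-tree node with AND/OR gates and an expiry window (ttl)
    # on leaf detections -- a sketch of temporal dependencies and
    # expiration, not the paper's formal model.
    def __init__(self, gate, children=None, ttl=None):
        self.gate = gate               # "AND", "OR" or "LEAF"
        self.children = children or []
        self.ttl = ttl                 # seconds before a detection expires
        self.detected_at = None

    def detect(self, now):
        # Record that this leaf's action was observed at time `now`.
        self.detected_at = now

    def satisfied(self, now):
        # A leaf holds while its detection has not expired; inner
        # nodes combine their children according to the gate.
        if self.gate == "LEAF":
            return (self.detected_at is not None and
                    (self.ttl is None or now - self.detected_at <= self.ttl))
        hits = [c.satisfied(now) for c in self.children]
        return all(hits) if self.gate == "AND" else any(hits)
```

An attack is reported when the root becomes satisfied, i.e. when a valid leaves-to-root traversal exists within the temporal windows.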
Abstract:
The steady problem of free surface flow due to a submerged line source is revisited for the case in which the fluid depth is finite and there is a stagnation point on the free surface directly above the source. Both the strength of the source and the fluid speed in the far field are measured by a dimensionless parameter, the Froude number. By applying techniques in exponential asymptotics, it is shown that there is a train of periodic waves on the surface of the fluid with an amplitude which is exponentially small in the limit that the Froude number vanishes. This study clarifies that periodic waves do form for flows due to a source, contrary to a suggestion by Chapman & Vanden-Broeck (2006, J. Fluid Mech., 567, 299-326). The exponentially small nature of the waves means they appear beyond all orders of the original power series expansion; this result explains why attempts at describing these flows using a finite number of terms in an algebraic power series incorrectly predict a flat free surface in the far field.
Abstract:
Smartphones started being targets for malware in June 2004, and the malware count increased steadily until the introduction of a mandatory application-signing mechanism for Symbian OS in 2006. From that point on, little news could be read on this topic. Even with new emerging smartphone platforms such as Android and iPhone, malware writers seemed to lose interest in writing malware for smartphones, giving users an inappropriate feeling of safety. In this paper, we revisit smartphone malware evolution, completing the appearance list up to the end of 2008. To contribute to smartphone malware research, we continue this list by adding descriptions of possible techniques for creating the first malware for the Android platform. Our approach involves the use of undocumented Android functions enabling us to execute native Linux applications even on retail Android devices. This can be exploited to create malicious Linux applications and daemons using various methods to attack a device. In this manner, we also show that it is possible to bypass the Android permission system by using native Linux applications.
Abstract:
In the modern connected world, pervasive computing has become reality. Thanks to the ubiquity of mobile computing devices and emerging cloud-based services, users stay permanently connected to their data. This introduces a slew of new security challenges, including the problem of multi-device key management and single-sign-on architectures. One solution to this problem is the utilization of secure side-channels for authentication, including the visual channel as vicinity proof. However, existing approaches often assume confidentiality of the visual channel, or provide only insufficient means of mitigating a man-in-the-middle attack. In this work, we introduce QR-Auth, a two-step, 2D-barcode-based authentication scheme for mobile devices which aims specifically at key management and key sharing across devices in a pervasive environment. It requires minimal user interaction and therefore provides better usability than most existing schemes, without compromising its security. We show how our approach fits in existing authorization delegation and one-time-password generation schemes, and that it is resilient to man-in-the-middle attacks.
Abstract:
We consider Cooperative Intrusion Detection System (CIDS), which is a distributed AIS-based (Artificial Immune System) IDS where nodes collaborate over a peer-to-peer overlay network. The AIS uses the negative selection algorithm for the selection of detectors (e.g., vectors of features such as CPU utilization, memory usage and network activity). For better detection performance, selection of all possible detectors for a node is desirable, but it may not be feasible due to storage and computational overheads. Limiting the number of detectors, on the other hand, comes with the danger of missing attacks. We present a scheme for the controlled and decentralized division of detector sets where each IDS is assigned to a region of the feature space. We investigate the trade-off between scalability and robustness of detector sets. We address the problem of self-organization in CIDS so that each node generates a distinct set of detectors to maximize the coverage of the feature space, while pairs of nodes exchange their detector sets to provide a controlled level of redundancy. Our contribution is twofold. First, we use Symmetric Balanced Incomplete Block Design, Generalized Quadrangles and Ramanujan Expander Graph based deterministic techniques from combinatorial design theory and graph theory to decide how many and which detectors are exchanged between which pairs of IDS nodes. Second, we use a classical epidemic model (SIR model) to show how properties from deterministic techniques can help us to reduce the attack spread rate.
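The classical SIR epidemic model referred to above can be sketched with a simple forward-Euler step; the parameter names are the standard infection rate (beta) and recovery rate (gamma), and the step size and values below are purely illustrative:

```python
def sir_step(s, i, r, beta, gamma, dt):
    # One forward-Euler step of the classical SIR equations:
    #   ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i,  dr/dt = gamma*i
    new_infections = beta * s * i
    recoveries = gamma * i
    return (s - new_infections * dt,
            i + (new_infections - recoveries) * dt,
            r + recoveries * dt)
```

In the CIDS setting, one would interpret "infected" nodes as compromised IDS nodes; the deterministic detector-exchange topology then influences the effective beta.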
Abstract:
In urban locations in Australia and elsewhere, public space may be said to be under attack from developers and also from attempts by civic authorities to oversee and control it (Davis 1995, Mitchell 2003, Watson 2006, Iveson 2006). The use of public space by young people, in particular, raises issues in Australia and elsewhere in the world. In a context of monitoring and control procedures, young people’s use of public space is often viewed as a threat to the prevailing social order (Loader 1996, White 1998, Crane and Dee 2001). This paper discusses recent technological developments in the surveillance, governance and control of public space used by young people, children and people of all ages.
Abstract:
Denial-of-service (DoS) attacks are a growing concern to networked services like the Internet. In recent years, major Internet e-commerce and government sites have been disabled due to various DoS attacks. A common form of DoS attack is a resource depletion attack, in which an attacker tries to overload the server's resources, such as memory or computational power, rendering the server unable to service honest clients. A promising way to deal with this problem is for a defending server to identify and segregate malicious traffic as early as possible. Client puzzles, also known as proofs of work, have been shown to be a promising tool to thwart DoS attacks in network protocols, particularly in authentication protocols. In this thesis, we design efficient client puzzles and propose a stronger security model to analyse client puzzles. We revisit a few key establishment protocols to analyse their DoS-resilient properties and strengthen them using existing and novel techniques. Our contributions in the thesis are manifold. We propose an efficient client puzzle that enjoys its security in the standard model under new computational assumptions. Assuming the presence of powerful DoS attackers, we find a weakness in the most recent security model proposed to analyse client puzzles, and this study leads us to introduce a better security model for analysing client puzzles. We demonstrate the utility of our new security definitions by including two stronger hash-based client puzzles. We also show that, using stronger client puzzles, any protocol can be converted into a provably secure DoS-resilient key exchange protocol. In other contributions, we analyse the DoS-resilient properties of network protocols such as Just Fast Keying (JFK) and Transport Layer Security (TLS). In the JFK protocol, we identify a new DoS attack by applying Meadows' cost-based framework to analyse DoS-resilient properties. We also prove that the original security claim of JFK does not hold.
Then we combine an existing technique to reduce the server cost and prove that the new variant of JFK achieves perfect forward secrecy (a property not achieved by the original JFK protocol) and is secure under the original security assumptions of JFK. Finally, we introduce a novel cost-shifting technique which reduces the computation cost of the server significantly, and employ the technique in the most important network protocol, TLS, to analyse the security of the resultant protocol. We also observe that the cost-shifting technique can be incorporated in any Diffie-Hellman based key exchange protocol to reduce the Diffie-Hellman exponential cost of a party by one multiplication and one addition.
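A hash-based client puzzle of the general kind discussed above can be sketched as the classic find-a-nonce construction; this is the generic proof-of-work scheme, not the thesis's specific puzzles, and the function names are our own. The difficulty parameter k makes the client do about 2**k hash evaluations while the server verifies with one:

```python
import hashlib
import itertools

def solve_puzzle(challenge, k):
    # Brute-force a nonce such that SHA-256(challenge || nonce) has k
    # leading zero bits: ~2**k expected hash calls for the client.
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - k) == 0:
            return nonce

def verify_puzzle(challenge, nonce, k):
    # The server checks a claimed solution with a single hash call.
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - k) == 0
```

The asymmetry between solving and verifying is what lets a defending server impose cost on clients before committing its own expensive resources.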
Abstract:
The most powerful known primitive in public-key cryptography is undoubtedly elliptic curve pairings. Upon their introduction just over ten years ago, the computation of pairings was far too slow for them to be considered a practical option. This resulted in a vast amount of research from many mathematicians and computer scientists around the globe aiming to improve this computation speed. From the use of modern results in algebraic and arithmetic geometry to the application of foundational number theory that dates back to the days of Gauss and Euler, cryptographic pairings have since experienced a great deal of improvement. As a result, what was an extremely expensive computation that took several minutes is now a high-speed operation that takes less than a millisecond. This thesis presents a range of optimisations to the state-of-the-art in cryptographic pairing computation. Both through extending prior techniques, and introducing several novel ideas of our own, our work has contributed to record-breaking pairing implementations.
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well-suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems that use a combination of computational hardware such as CPUs and GPUs are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function.
This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally-intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes are required to obtain solutions with equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
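The residual-only approximation of the Jacobian action that underpins this matrix-free Newton-Krylov approach is the standard finite-difference trick; a minimal sketch, with an illustrative perturbation size eps and plain lists standing in for the solution vector:

```python
def jacobian_vector_product(F, u, v, eps=1e-7):
    # Matrix-free approximation  J(u) @ v  ~=  (F(u + eps*v) - F(u)) / eps.
    # A Krylov solver inside an inexact Newton iteration only ever needs
    # J applied to vectors, so the Jacobian never has to be assembled --
    # residual evaluations of F suffice, which is what makes the scheme
    # practical on a GPU where F is implemented with data-parallel kernels.
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(fp - fu) / eps for fp, fu in zip(Fp, Fu)]
```

For a linear residual the approximation is exact up to rounding, which makes the idea easy to check against a known Jacobian.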
Abstract:
This paper presents a nonlinear gust-attenuation controller based on constrained neural-network (NN) theory. The controller aims to achieve sufficient stability and handling quality for a fixed-wing unmanned aerial system (UAS) in a gusty environment when control inputs are subjected to constraints. Constraints on inputs emulate situations where aircraft actuators fail, requiring the aircraft to be operated with fail-safe capability. The proposed controller provides gust attenuation and stabilizes the aircraft dynamics in a gusty environment. The proposed flight controller is obtained by solving the Hamilton-Jacobi-Isaacs (HJI) equations based on a policy iteration (PI) approach. Performance of the controller is evaluated using a high-fidelity six degree-of-freedom Shadow UAS model. Simulations show that our controller demonstrates great performance improvement in a gusty environment, especially in angle-of-attack (AOA), pitch and pitch rate. Comparative studies are conducted with proportional-integral-derivative (PID) controllers, justifying the efficiency of our controller and verifying its suitability for integration into the design of flight control systems for forced landing of UASs.
Abstract:
Modernized GPS and GLONASS, together with the new GNSS systems BeiDou and Galileo, offer code and phase ranging signals on three or more carriers. Traditionally, dual-frequency code and/or phase GPS measurements are linearly combined to eliminate the effects of ionospheric delays in various positioning and analysis tasks. This typical treatment method has limitations in processing signals at three or more frequencies from more than one system, and can hardly be adapted to cope with the boom of various receivers tracking a broad variety of signals. In this contribution, a generalized positioning model that is navigation-system independent and carrier-number unrelated is proposed, which is suitable for both single- and multi-site data processing. For the synchronization of different signals, uncalibrated signal delays (USDs) are defined more generally to compensate for the signal-specific offsets in code and phase signals, respectively. In addition, the ionospheric delays are included in the parameterization with elaborate consideration. Based on an analysis of the algebraic structures, this generalized positioning model is further refined with a set of proper constraints to regularize the datum deficiency of the observation equation system. With this new model, uncalibrated signal delays and ionospheric delays are derived for both GPS and BeiDou with a large data set. Numerical results demonstrate that, with a limited number of stations, the uncalibrated code delays (UCDs) are determined to a precision of about 0.1 ns for GPS and 0.4 ns for BeiDou signals, while the uncalibrated phase delays (UPDs) for L1 and L2 are generated with 37 stations evenly distributed in China for GPS with a consistency of about 0.3 cycles. Further experiments concerning the performance of this novel model in point positioning with mixed frequencies of mixed constellations are analyzed, in which the USD parameters are fixed to our generated values.
The results are evaluated in terms of both positioning accuracy and convergence time.
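The traditional dual-frequency ionosphere-free combination that this model generalizes has a standard closed form; a minimal sketch using the GPS L1/L2 carrier frequencies (1575.42 and 1227.60 MHz), with illustrative range values:

```python
def ionosphere_free(p1, p2, f1, f2):
    # Classical dual-frequency ionosphere-free code combination:
    #   P_IF = (f1**2 * P1 - f2**2 * P2) / (f1**2 - f2**2)
    # The first-order ionospheric delay scales as 1/f**2, so it cancels
    # in this linear combination of the two code observations.
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)
```

It is exactly this fixed two-frequency structure that breaks down with three or more carriers from multiple constellations, motivating the generalized parameterization with explicit USD and ionospheric terms.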
Abstract:
Detailed mechanisms for the formation of hydroxyl or alkoxyl radicals in the reactions between tetrachloro-p-benzoquinone (TCBQ) and organic hydroperoxides are crucial for better understanding the potential carcinogenicity of polyhalogenated quinones. Herein, the mechanism of the reaction between TCBQ and H2O2 has been systematically investigated at the B3LYP/6-311++G** level of theory in the presence of different numbers of water molecules. We report that the whole reaction can easily take place with the assistance of explicit water molecules. Namely, an initial intermediate is formed first. After that, a nucleophilic attack of H2O2 onto TCBQ occurs, which results in the formation of a second intermediate that contains an OOH group. Subsequently, this second intermediate decomposes homolytically through cleavage of the O-O bond to produce a hydroxyl radical. Energy analyses suggest that the nucleophilic attack is the rate-determining step in the whole reaction. The participation of explicit water molecules promotes the reaction significantly, which can be used to explain the experimental phenomena. In addition, the effects of F, Br, and CH3 substituents on this reaction have also been studied.