342 results for Geometric attacks
Abstract:
This paper presents a key-based generic model for digital image watermarking. The model aims to address an identified gap in the literature by providing a basis for assessing different watermarking requirements in various digital image applications. We start with a formulation of a basic watermarking system and define system inputs and outputs. We then proceed to incorporate the use of keys in the design of the various system components. Using the model, we also define a few fundamental design and evaluation parameters. To demonstrate the significance of the proposed model, we provide an example of how it can be applied to formally define common attacks.
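As a rough illustration of the structure such a model formalises (a minimal sketch under assumed names, not the authors' model), a single key can parameterise both the embedding and detection components, here by selecting which pixels carry the mark:

```python
import hashlib
import numpy as np

def keyed_positions(key: bytes, n_pixels: int, n_bits: int) -> np.ndarray:
    """Derive pseudo-random, key-dependent embedding positions."""
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.choice(n_pixels, size=n_bits, replace=False)

def embed(image: np.ndarray, bits, key: bytes) -> np.ndarray:
    """Embed watermark bits in the LSBs of key-selected pixels (uint8 image)."""
    flat = image.flatten()  # flatten() returns a copy; the input stays untouched
    pos = keyed_positions(key, flat.size, len(bits))
    flat[pos] = (flat[pos] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(image.shape)

def detect(image: np.ndarray, n_bits: int, key: bytes) -> list:
    """Recover the embedded bits; succeeds only with the embedding key."""
    flat = image.flatten()
    pos = keyed_positions(key, flat.size, n_bits)
    return [int(b) for b in flat[pos] & 1]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(img, bits, key=b"secret-key")
assert detect(marked, len(bits), key=b"secret-key") == bits
```

Under this view, a geometric attack such as cropping or rotation can be defined formally as a transform that breaks the correspondence between the key-derived positions and the marked pixels without necessarily degrading image quality.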
Abstract:
Microbial pollution in water periodically affects human health in Australia, particularly in times of drought and flood. There is an increasing need for the control of waterborne microbial pathogens. Methods that allow the origin of faecal contamination in water to be determined are generally referred to as Microbial Source Tracking (MST). Various approaches have been evaluated as indicators of microbial pathogens in water samples, including detection of different microorganisms and various host-specific markers. However, to date there is no universal MST method that can reliably determine the source (human or animal) of faecal contamination, so the use of multiple approaches is frequently advised. MST is currently recognised as a research tool rather than something to be included in routine practice. The main focus of this research was to develop novel and universally applicable methods to meet the demand for MST methods in routine testing of water samples. Escherichia coli was chosen initially as the target organism for our studies as, historically and globally, it is the standard indicator of microbial contamination in water. In this thesis, three approaches are described: single nucleotide polymorphism (SNP) genotyping, clustered regularly interspaced short palindromic repeats (CRISPR) screening using high resolution melt analysis (HRMA), and development of phage detection based on CRISPR types. The advantage of combining SNP genotyping and CRISPR screening is discussed in this study. For the first time, highly discriminatory single nucleotide polymorphism interrogation of an E. coli population was applied to identify host-specific clusters. Six human-specific and one animal-specific SNP profiles were revealed. SNP genotyping was successfully applied in field investigations of the Coomera watershed, South-East Queensland, Australia. Four human-specific profiles [11], [29], [32] and [45] and the animal-specific SNP profile [7] were detected in water. Two human-specific profiles, [29] and [11], were found to be prevalent in the samples over a period of years. Rainfall (24 and 72 hours), tide height and time, general land use (rural, suburban), season, distance from the river mouth and salinity showed no relationship with the diversity of SNP profiles present in the Coomera watershed (p values > 0.05). Nevertheless, the SNP genotyping method can identify and distinguish between human- and non-human-specific E. coli isolates in water sources within one day. In some samples, only mixed profiles were detected. To further investigate host-specificity in these mixed profiles, a CRISPR screening protocol was developed and used on the set of E. coli isolates previously analysed for SNP profiles. CRISPR loci, which record previous attacks by DNA coliphages, were considered a promising tool for detecting host-specific markers in E. coli. Spacers in CRISPR loci could also reveal the dynamics of virulence in E. coli, as well as in other pathogens, in water. Although host-specificity was not observed in the set of E. coli analysed, CRISPR alleles were shown to be useful for identifying the geographical site of sources. HRMA allows 'different' and 'same' CRISPR alleles to be determined and can be introduced into water monitoring as a cost-effective and rapid method.
Overall, we show that the identified human-specific SNP profiles [11], [29], [32] and [45] can be useful as marker genotypes globally for the identification of human faecal contamination in water. The SNP typing approach developed in the current study can be used in water monitoring laboratories as an inexpensive, high-throughput and easily adapted protocol. A unique approach based on E. coli spacers was developed to search for unknown phages and examine host-specificity in phage sequences. Preliminary experiments on recombinant plasmids showed the feasibility of using this method for recovering phage sequences. Future studies will determine the host-specificity of DNA phage genotyping as soon as the first reliable sequences are acquired. Undoubtedly, only the application of multiple approaches in MST will allow the character of microbial contamination to be identified with higher confidence and reliability.
Abstract:
A crucial contemporary policy question for governments across the globe is how to cope with international crime and terrorist networks. Many such “dark” networks—that is, networks that operate covertly and illegally—display a remarkable level of resilience when faced with shocks and attacks. Based on an in-depth study of three cases (MK, the armed wing of the African National Congress in South Africa during apartheid; FARC, the Marxist guerrilla movement in Colombia; and the Liberation Tigers of Tamil Eelam, LTTE, in Sri Lanka), we present a set of propositions to outline how shocks impact dark network characteristics (resources and legitimacy) and networked capabilities (replacing actors, linkages, balancing integration and differentiation) and how these in turn affect a dark network's resilience over time. We discuss the implications of our findings for policymakers.
Abstract:
In this paper we consider the implementation of time- and energy-efficient trajectories on a test-bed autonomous underwater vehicle. The trajectories are loosely connected to the results of applying the maximum principle to the controlled mechanical system. We use a numerical algorithm to compute efficient trajectories, designed using geometric control theory, to optimize a given cost function. Experimental results are shown for the time minimization problem.
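As a sketch of the kind of cost function such a design trades off (illustrative only: the weighted time-energy criterion and its discretisation are assumptions, not the paper's exact formulation):

```python
import numpy as np

def cost(u, dt, c=0.5):
    """Weighted time-energy criterion J = c*T + (1-c) * integral |u|^2 dt,
    evaluated on a discretised control history.

    u  : (N, m) array of control samples (e.g., thruster inputs)
    dt : time step between samples
    c  : weight in [0, 1]; c = 1 recovers the pure time-minimisation problem
    """
    T = u.shape[0] * dt               # trajectory duration
    energy = np.sum(u ** 2) * dt      # quadratic control effort
    return c * T + (1.0 - c) * energy

u = np.zeros((100, 6))        # 6 control inputs held at zero over a 10 s horizon
print(cost(u, dt=0.1, c=1.0)) # pure time cost: 10.0
```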
Abstract:
In a post-September 11 era “the fight”, as a cultural construct, could hardly be more pertinent. We are seemingly forever poised on the edge of controversial U.S.-led attacks on wayward Middle Eastern states, and unexamined oppositions between the concepts of ‘good’ and ‘evil’ are evoked as valid justifications for battle. Our leaders muster us into wars of vigilance and national cohesion against unseen, unknown and uncomprehended terrorists hiding where communists once lurked under our beds. The articles in this issue examine fights in terms of media strategies and cultural divides in a range of contexts.
Abstract:
Introduction: An observer looking sideways from a moving vehicle while wearing a neutral density filter over one eye can have a distorted perception of speed, known as the Enright phenomenon. The purpose of this study was to determine how the Enright phenomenon influences driving behaviour. Methods: A geometric model of the Enright phenomenon was developed. Ten young, visually normal participants (mean age = 25.4 years) were tested on a straight section of a closed driving circuit and instructed to look out of the right side of the vehicle and drive at either 40 km/h or 60 km/h under the following viewing conditions: a 0.9 ND filter over the left eye (leading eye); a 0.9 ND filter over the right eye (trailing eye); 0.9 ND filters over both eyes; and no filters over either eye. The order of filter conditions was randomised and the speed driven was recorded for each condition. Results: Speed judgments did not differ significantly between the two baseline conditions (no filters and both eyes filtered) for either speed tested. For the baseline conditions, when subjects were asked to drive at 60 km/h they matched this speed well (61 ± 10.2 km/h) but drove significantly faster than requested (51.6 ± 9.4 km/h) when asked to drive at 40 km/h. Subjects significantly exceeded baseline speeds, by 8.7 ± 5.0 km/h, when the trailing eye was filtered and travelled slower than baseline speeds, by 3.7 ± 4.6 km/h, when the leading eye was filtered. Conclusions: This is the first quantitative study demonstrating how the Enright effect can influence perceptions of driving speed; it shows that monocular filtering of an eye can significantly affect driving speed, albeit to a lesser extent than predicted by geometric models of the phenomenon.
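For intuition, here is a minimal sketch of the Pulfrich-style intersection geometry on which Enright-type models are commonly built (an illustration under assumed values, not the model developed in the paper): a filter delays the covered eye's signal by dt, so that eye effectively sees a laterally moving point where it was dt earlier, and the two lines of sight intersect at a shifted apparent distance.

```python
def apparent_distance(D, v, dt, a=0.065):
    """Apparent distance of a point at true distance D (m), moving laterally at
    v (m/s) relative to the observer, when one eye's signal is delayed by dt (s).
    a is the interocular separation (m). Intersecting the two lines of sight
    gives D' = D * a / (a - v*dt)."""
    lag = v * dt                 # lateral lag seen by the delayed eye (m)
    if lag >= a:
        return float("inf")      # lines of sight no longer converge
    return D * a / (a - lag)

# A roadside point 10 m away, vehicle at 40 km/h (~11.1 m/s), with an assumed
# 3 ms interocular delay introduced by the filter:
print(apparent_distance(10.0, 11.1, 0.003))  # ~20.5 m, about twice the true distance
```

Even a few milliseconds of delay predicts a large distortion in apparent distance, and hence in perceived speed, consistent with the authors' finding that drivers' actual compensation is smaller than such geometric models predict.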
Abstract:
Natural convection in a triangular enclosure subject to non-uniform cooling at the inclined surfaces and uniform heating at the base is investigated numerically. Numerical simulations of the unsteady flows over a range of Rayleigh numbers and aspect ratios are carried out using the Finite Volume Method. Since the upper surface is cooled and the bottom surface is heated, the air flow in the enclosure is potentially susceptible to Rayleigh-Bénard instability. It is revealed that the transient flow development in the enclosure can be classified into three distinct stages: an early stage, a transitional stage and a steady stage. It is also found that the flow inside the enclosure depends strongly on the governing parameters, the Rayleigh number and the aspect ratio. The asymmetric behaviour of the flow about the geometric centre line is discussed in detail. The heat transfer through the roof and the ceiling, in the form of a Nusselt number, is also reported in this study.
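For scale, the governing parameter can be computed directly (the values below are assumed for illustration; they are not taken from the paper):

```python
# Rayleigh number Ra = g * beta * dT * H^3 / (nu * alpha) for air, using
# assumed illustrative values rather than the paper's parameters.
g     = 9.81      # gravitational acceleration, m/s^2
beta  = 3.4e-3    # thermal expansion coefficient of air, 1/K
dT    = 10.0      # base-to-surface temperature difference, K (assumed)
H     = 0.5       # enclosure height, m (assumed)
nu    = 1.5e-5    # kinematic viscosity of air, m^2/s
alpha = 2.1e-5    # thermal diffusivity of air, m^2/s

Ra = g * beta * dT * H**3 / (nu * alpha)
print(f"Ra = {Ra:.2e}")   # ~1.3e8, far above the convective onset (~1708)
```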
Abstract:
The automated extraction of roads from aerial imagery can be of value for tasks including mapping, surveillance and change detection. Unfortunately, there are no public databases or standard evaluation protocols for evaluating these techniques. Many techniques are further hindered by a reliance on manual initialisation, making large-scale application impractical. In this paper, we present a public database and evaluation protocol for the evaluation of road extraction algorithms, and propose an improved automatic seed-finding technique to initialise road extraction, based on a combination of geometric and colour features.
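As a loose sketch of what combining colour and geometric cues for seed finding can look like (an illustration with assumed prototype values and weights, not the technique proposed in the paper; a simple local-smoothness term stands in for the geometric cue):

```python
import numpy as np

def seed_score(rgb, prototype=(110, 110, 115), sigma=40.0):
    """Score each pixel as a candidate road seed by combining a colour term
    (similarity to an assumed road-grey prototype) with a simple geometric
    term (local smoothness, since road surfaces tend to be homogeneous)."""
    img = rgb.astype(float)
    d2 = np.sum((img - np.asarray(prototype)) ** 2, axis=-1)
    colour = np.exp(-d2 / (2 * sigma ** 2))          # in (0, 1]
    gray = img.mean(axis=-1)
    gy, gx = np.gradient(gray)
    smooth = np.exp(-(gx ** 2 + gy ** 2) / 100.0)    # penalise textured areas
    return colour * smooth

# The highest-scoring pixels can then initialise the road-extraction algorithm:
rgb = np.random.randint(0, 256, size=(64, 64, 3))
score = seed_score(rgb)
ij = np.unravel_index(np.argmax(score), score.shape)
print("best seed at", ij)
```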
Abstract:
Just Fast Keying (JFK) is a simple, efficient and secure key exchange protocol proposed by Aiello et al. (ACM TISSEC, 2004). JFK is well known for its novel design features, notably its resistance to denial-of-service (DoS) attacks. Using Meadows’ cost-based framework, we identify a new DoS vulnerability in JFK. The JFK protocol is claimed secure in the Canetti-Krawczyk model under the Decisional Diffie-Hellman (DDH) assumption. We show that security of the JFK protocol, when reusing ephemeral Diffie-Hellman keys, appears to require the Gap Diffie-Hellman (GDH) assumption in the random oracle model. We propose a new variant of JFK that avoids the identified DoS vulnerability and provides perfect forward secrecy even under the DDH assumption, achieving the full security promised by the JFK protocol.
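To make the reuse scenario concrete, here is a toy sketch (illustrative parameters only; not the JFK message flow) of a responder holding one Diffie-Hellman exponent across sessions:

```python
import secrets

# Toy parameters for illustration only; a real deployment uses a standardised
# group such as the 2048-bit MODP group of RFC 3526.
p, g = 2**61 - 1, 2

b = secrets.randbelow(p - 2) + 1        # responder's ephemeral exponent...
gb = pow(g, b, p)                       # ...computed once and REUSED below

for session in range(3):
    a = secrets.randbelow(p - 2) + 1    # each initiator picks a fresh exponent
    ga = pow(g, a, p)
    assert pow(gb, a, p) == pow(ga, b, p)   # both sides derive the same secret
    # Because b (hence g^b) is identical across sessions, an attacker can probe
    # the responder with many related values of g^a, which is why the reuse
    # mode's security argument appears to need the stronger Gap-DH assumption.
```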
Abstract:
Various time-memory tradeoff attacks for stream ciphers have been proposed over the years. However, the claimed success of these attacks assumes that the initialisation process of the stream cipher is one-to-one, and some stream cipher proposals do not have a one-to-one initialisation process. In this paper, we examine the impact of this on the success of time-memory-data tradeoff attacks. Under these circumstances, some attacks are more successful than previously claimed while others are less so. The conditions for both cases are established.
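As a toy illustration of the underlying effect (a deliberately non-injective initialisation invented for this sketch, not any real proposal): when distinct key/IV pairs collide on the same initial state, the reachable state space shrinks, which changes the parameters available to a time-memory-data tradeoff.

```python
import random

STATE_BITS = 16

def init_state(key, iv):
    """A deliberately non-injective toy initialisation: the modulus forces
    collisions, so at most a quarter of the state space is reachable."""
    return ((key * 2654435761) ^ (iv * 40503)) % (2 ** (STATE_BITS - 2))

seen = {init_state(random.getrandbits(16), random.getrandbits(16))
        for _ in range(1 << 16)}
frac = len(seen) / 2 ** STATE_BITS
print(f"reachable fraction of state space: {frac:.2f}")  # well below 1.0
```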
Abstract:
With the rise in attacks and attempted attacks on marine-based critical infrastructure, maritime security is an issue of increasing importance worldwide. However, there are three significant shortfalls in efforts to overcome potential threats to maritime security: the need for greater understanding of whether current standards of best practice are truly successful in combating and reducing the risks of terrorism and other security issues; the absence of a collective maritime security best practice framework; and the need for improved access to maritime security-specific graduate and postgraduate (long) courses. This paper presents an overview of existing international, regional and national standards of best practice and shows that literature concerning the measurement and/or success of these standards is virtually non-existent. In addition, despite the importance of maritime workers to ensuring the safety of marine-based critical infrastructure, a similar review of available Australian education courses shows a considerable lack of maritime security-specific courses other than short courses that cover only basic security matters. We argue that the absence of an Australian best practice framework informed by evaluation of current policy responses, particularly in the post-9/11 environment, leaves Australia vulnerable to maritime security threats. As this paper shows, the reality is that despite the security measures put in place post-9/11, there is still considerable work to be done to ensure Australia is equipped to overcome the threats posed to maritime security.
Abstract:
Client puzzles are moderately hard cryptographic problems, neither easy nor impossible to solve, that can be used as a countermeasure against denial-of-service attacks on network protocols. Puzzles based on modular exponentiation are attractive as they provide important properties such as non-parallelisability, deterministic solving time, and linear granularity. We propose an efficient client puzzle based on modular exponentiation. Our puzzle requires only a few modular multiplications for puzzle generation and verification. For a server under denial-of-service attack, this is a significant improvement, as the best known non-parallelisable puzzle, proposed by Karame and Capkun (ESORICS 2010), requires at least a 2k-bit modular exponentiation, where k is a security parameter. We show that our puzzle satisfies the unforgeability and difficulty properties defined by Chen et al. (Asiacrypt 2009). We present experimental results which show that, for 1024-bit moduli, our proposed puzzle can be up to 30 times faster to verify than the Karame-Capkun puzzle and 99 times faster than Rivest et al.'s time-lock puzzle.
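For context, here is a minimal sketch of the classic Rivest-style time-lock construction that such puzzles are benchmarked against (toy parameters for illustration; this is not the puzzle proposed in the paper): solving requires t sequential modular squarings, while a server that knows the factorisation of n verifies with a single cheap exponentiation.

```python
# Toy parameters: in practice p and q are large random primes kept secret.
p, q = 10007, 10009
n = p * q
phi = (p - 1) * (q - 1)

def make_puzzle(t, x=2):
    """Server side: the puzzle is 'compute x^(2^t) mod n'. Knowing phi(n),
    the server precomputes the answer with one modular exponentiation."""
    e = pow(2, t, phi)           # reduce the tower exponent modulo phi(n)
    expected = pow(x, e, n)
    return (x, t, n), expected

def solve_puzzle(x, t, n):
    """Client side: t sequential squarings; inherently non-parallelisable."""
    y = x
    for _ in range(t):
        y = (y * y) % n
    return y

puzzle, expected = make_puzzle(t=1000)
assert solve_puzzle(*puzzle) == expected
```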
Abstract:
Rats are superior to the most advanced robots when it comes to creating and exploiting spatial representations. A wild rat can have a foraging range of hundreds of meters, possibly kilometers, and yet the rodent can unerringly return to its home after each foraging mission, and return to profitable foraging locations at a later date (Davis et al., 1948). The rat runs through undergrowth and pipes with few distal landmarks, along paths where the visual, textural, and olfactory appearance constantly change (Hardy and Taylor, 1980; Recht, 1988). Despite these challenges the rat builds, maintains, and exploits internal representations of large areas of the real world throughout its two- to three-year lifetime. While algorithms exist that allow robots to build maps, the questions of how to maintain those maps and how to handle change in appearance over time remain open. The robotic approach to map building has been dominated by algorithms that optimise the geometry of the map based on measurements of distances to features. In a robotic approach, measurements of distance to features are taken with range-measuring devices such as laser range finders or ultrasound sensors, and in some cases estimates of depth from visual information. The features are incorporated into the map based on previous readings of other features in view and estimates of self-motion. The algorithms explicitly model the uncertainty in measurements of range and the measurement of self-motion, and use probability theory to find optimal solutions for the geometric configuration of the map features (Dissanayake et al., 2001; Thrun and Leonard, 2008). Some of the results from the application of these algorithms have been impressive, ranging from three-dimensional maps of large urban structures (Thrun and Montemerlo, 2006) to natural environments (Montemerlo et al., 2003).
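The probabilistic core of that robotic approach can be shown in one line of algebra: independent noisy measurements are fused by inverse-variance weighting, the scalar special case of the least-squares machinery SLAM algorithms apply to whole maps (values below are illustrative, not from the text):

```python
def fuse(z1, var1, z2, var2):
    """Minimum-variance fusion of two independent Gaussian measurements."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)
    return estimate, variance

# A laser range of 10.2 m (variance 0.01) and a sonar range of 9.8 m
# (variance 0.25) of the same feature:
est, var = fuse(10.2, 0.01, 9.8, 0.25)
print(f"fused range: {est:.3f} m, variance {var:.4f}")  # pulled toward the laser
```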
Abstract:
Crisis holds the potential for profound change in organizations and industries. The past 50 years of crisis management highlight key shifts in crisis practice, creating opportunities for multiple theories and research tracks. Defining crises such as Tylenol, Exxon Valdez, and the September 11 terrorist attacks have influenced or challenged the principles of best practice of crisis communication in public relations. This study traces the development of crisis process and practice by identifying shifts in crisis research and models and mapping these against key management theories and practices. The findings define three crisis domains: crisis planning, building and testing predictive models, and mapping and measuring external environmental influences. These crisis domains mirror but lag the evolution of management theory, suggesting challenges for researchers to reshape the research agenda to close the gap and lead the next stage of development in the field of crisis communication for effective organizational outcomes.
Abstract:
In topological mapping, perceptual aliasing can cause different places to appear indistinguishable to the robot. When odometry information is severely corrupted or unavailable, topological mapping is difficult, as the robot is faced with the loop-closing problem: determining whether it has visited a particular place before. In this article we propose to use neighbourhood information to disambiguate otherwise indistinguishable places. Using neighbourhood information for place disambiguation is an approach that neither depends on a specific choice of sensors nor requires geometric information such as odometry. Local neighbourhood information is extracted from a sequence of observations of visited places. In experiments using either sonar or visual observations from an indoor environment, the benefits of using neighbourhood clues for the disambiguation of otherwise identical vertices are demonstrated. Over 90% of the maps we obtain are isomorphic to the ground truth. The choice of the robot's sensors has little impact on the experimental results.
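As a minimal sketch of the neighbourhood idea (an illustration, not the authors' algorithm): two vertices whose own observations are identical can still be told apart when the multisets of labels observed at their neighbours differ.

```python
from collections import Counter

def neighbourhood_signature(graph, v):
    """Multiset of observation labels over the neighbours of vertex v."""
    return Counter(graph["labels"][u] for u in graph["adj"][v])

# Vertices 0 and 3 share the label 'corridor' (perceptual aliasing)...
graph = {
    "labels": {0: "corridor", 1: "door", 2: "lobby", 3: "corridor", 4: "lab"},
    "adj": {0: [1, 2], 1: [0], 2: [0], 3: [4], 4: [3]},
}
# ...but their neighbourhoods distinguish them.
print(neighbourhood_signature(graph, 0))  # Counter({'door': 1, 'lobby': 1})
print(neighbourhood_signature(graph, 3))  # Counter({'lab': 1})
```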