252 results for pair propagator
Abstract:
Constant development of new wireless standards increases the demand for more radiating elements in compact end-user platforms. A decrease in antenna separation gives rise to increased antenna coupling, resulting in a reduction of the signal-to-interference-plus-noise ratio (SINR) between transmitter and receiver. This paper proposes a decoupling network that provides dual-band port isolation for a pair of distinct antennas. A prototype has been fabricated to verify the theory.
Abstract:
Proving the security of cryptographic schemes, which are typically short algorithms, is known to be time-consuming and easy to get wrong. Using computers to analyse their security can help to solve the problem. This thesis focuses on methods of using computers to verify the security of such schemes in cryptographic models. The contributions of this thesis to automated security proofs of cryptographic schemes can be divided into two groups: indirect and direct techniques. Regarding indirect ones, we propose a technique to verify the security of public-key-based key exchange protocols. Security of such protocols can already be proved automatically using an existing tool, but only in a non-cryptographic model. We show that under some conditions, security in that non-cryptographic model implies security in a common cryptographic one, the Bellare-Rogaway model [11]. This implication enables one to use that existing tool, which was designed to work with a different type of model, in order to achieve security proofs of public-key-based key exchange protocols in a cryptographic model. For direct techniques, we have two contributions. The first is a tool to verify Diffie-Hellman-based key exchange protocols. In that work, we design a simple programming language for specifying Diffie-Hellman-based key exchange algorithms. The language has a semantics based on a cryptographic model, the Bellare-Rogaway model [11]. From the semantics, we build a Hoare-style logic which allows us to reason about the security of a key exchange algorithm, specified as a pair of initiator and responder programs. The other contribution in the direct line is on automated proofs of computational indistinguishability. Unlike the two other contributions, this one does not treat a fixed class of protocols. We construct a generic formalism which allows one to model the security problem of a variety of classes of cryptographic schemes as the indistinguishability between two pieces of information. We also design and implement an algorithm for solving indistinguishability problems. Compared to the two other works, this one covers significantly more types of schemes, but consequently, it can verify only weaker forms of security.
Abstract:
Background: Maize streak virus strain A (MSV-A; genus Mastrevirus, family Geminiviridae), the maize-adapted strain of MSV that causes maize streak disease throughout sub-Saharan Africa, probably arose between 100 and 200 years ago via homologous recombination between two MSV strains adapted to wild grasses. MSV recombination experiments and analyses of natural MSV recombination patterns have revealed that this recombination event entailed the exchange of the movement protein - coat protein gene cassette, bounded by the two genomic regions most prone to recombination in mastrevirus genomes: the first surrounding the virion-strand origin of replication, and the second around the interface between the coat protein gene and the short intergenic region. Therefore, aside from the likely adaptive advantages presented by a modular exchange of this cassette, these specific breakpoints may have been largely predetermined by the underlying mechanisms of mastrevirus recombination. To investigate this hypothesis, we constructed artificial, low-fitness, reciprocal chimaeric MSV genomes using alternating genomic segments from two MSV strains: a grass-adapted MSV-B and a maize-adapted MSV-A. Between them, each pair of reciprocal chimaeric genomes represented all of the genetic material required to reconstruct - via recombination - the highly maize-adapted MSV-A genotype, MSV-MatA. We then co-infected a selection of differentially MSV-resistant maize genotypes with pairs of reciprocal chimaeras to determine the efficiency with which recombination would give rise to high-fitness progeny genomes resembling MSV-MatA. Results: Recombinants resembling MSV-MatA invariably arose in all of our experiments. However, the accuracy and efficiency with which the MSV-MatA genotype was recovered across all replicates of each experiment depended on the MSV susceptibility of the maize genotypes used and the precise positions - in relation to known recombination hotspots - of the breakpoints required to re-create MSV-MatA. Although the MSV-sensitive maize genotype gave rise to the greatest variety of recombinants, the measured fitness of each of these recombinants correlated with their similarity to MSV-MatA. Conclusions: The mechanistic predispositions of different MSV genomic regions to recombination can strongly influence the accessibility of high-fitness MSV recombinants. The frequency with which the fittest recombinant MSV genomes arise also correlates directly with the escalating selection pressures imposed by increasingly MSV-resistant maize hosts.
Abstract:
The 'variety effect' describes the greater consumption that is observed when multiple foods with different sensory characteristics are presented either simultaneously or sequentially. Variety increases the amount of food consumed in tests of ad libitum intake. However, outside the laboratory, meals are often planned in advance and then consumed in their entirety. We sought to explore the extent to which the variety effect is anticipated in this pre-meal planning. Participants were shown two food images, each representing a first or a second course of a hypothetical meal. The two courses were either (i) exactly the same food, (ii) different foods from the same sensory category (sweet or savoury), or (iii) different foods from different sensory categories. In Study 1 (N = 30) these courses comprised typical ‘main meal’ foods, and in Study 2 (N = 30) they comprised snack foods. For each pair of images, participants rated their expected liking of the second course and selected ideal portion sizes, both for the second course alone and for the first and second courses combined. In both studies, as the difference between the courses increased (from (i) same to (ii) similar to (iii) different), the second course was selected in a larger portion and was rated as more pleasant. To our knowledge, these are the first studies to show that the variety effect is evident in the energy content of self-selected meals. This work shows that the effects of variety are learned and anticipated, extending our characterisation of the variety effect beyond a passive process that develops towards the end of a meal.
Abstract:
A fundamental problem faced by stereo vision algorithms is that of determining correspondences between two images which comprise a stereo pair. This paper presents work towards the development of a new matching algorithm, based on the rank transform. This algorithm makes use of both area-based and edge-based information, and is therefore referred to as a hybrid algorithm. In addition, this algorithm uses a number of matching constraints, including the novel rank constraint. Results obtained using a number of test pairs show that the matching algorithm is capable of removing a significant proportion of invalid matches. The accuracy of matching in the vicinity of edges is also improved.
Abstract:
A fundamental problem faced by stereo vision algorithms is that of determining correspondences between two images which comprise a stereo pair. This paper presents work towards the development of a new matching algorithm, based on the rank transform. This algorithm makes use of both area-based and edge-based information, and is therefore referred to as a hybrid algorithm. In addition, this algorithm uses a number of matching constraints, including the novel rank constraint. Results obtained using a number of test pairs show that the matching algorithm is capable of removing most invalid matches. The accuracy of matching in the vicinity of edges is also improved.
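As an illustration of the area-based side of such a hybrid matcher, the sketch below (Python; the function names and window sizes are illustrative assumptions, not the authors' implementation) applies a rank transform to both images of a stereo pair and then scores candidate disparities with a sum of absolute differences over the transformed windows.

```python
import numpy as np

def rank_transform(image, window=5):
    """Replace each pixel by the number of pixels in its local window
    whose intensity is less than that of the centre pixel."""
    r = window // 2
    h, w = image.shape
    out = np.zeros_like(image, dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = image[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.sum(patch < image[y, x])
    return out

def match_pixel(left_rank, right_rank, y, x, max_disp=32, window=7):
    """Area-based matching on rank-transformed images using the sum of
    absolute differences; returns the best disparity for pixel (y, x)."""
    r = window // 2
    ref = left_rank[y - r:y + r + 1, x - r:x + r + 1]
    best_d, best_cost = 0, np.inf
    for d in range(0, min(max_disp, x - r) + 1):
        cand = right_rank[y - r:y + r + 1, x - d - r:x - d + r + 1]
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Because the rank transform only records how many neighbours are darker than the centre pixel, matching on the transformed images is insensitive to radiometric differences between the two cameras; the edge-based information and constraints such as the rank constraint mentioned in the abstract would be layered on top of this basic area-based matcher.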
Abstract:
We investigated the collaboration of ten doctor-nurse pairs with a prototype digital telehealth stethoscope. Doctors could see and hear the patient but could not touch them or the stethoscope. The nurse in each pair controlled the stethoscope. For ethical reasons, an experimenter stood in for a patient. Each of the ten interactions was video recorded and analysed to understand the interaction and collaboration between the doctor and nurse. The video recordings were coded and transformed into maps of interaction that were analysed for patterns of activity. The analysis showed that as doctors and nurses became more experienced at using the telehealth stethoscope, their collaboration became more effective. The main measure of effectiveness was the number of corrections in stethoscope placement required by the doctor. In early collaborations, the doctors gave many corrections; after several trials, every pair had reduced the number of corrections required. The significance of this research is the identification of the qualities of effective collaboration in the use of the telehealth stethoscope and of telehealth systems more generally.
Abstract:
Secure communications in wireless sensor networks operating under adversarial conditions require providing pairwise (symmetric) keys to sensor nodes. In large-scale deployment scenarios, there is no prior knowledge of the post-deployment network configuration, since nodes may be randomly scattered over a hostile territory. Thus, shared keys must be distributed before deployment to provide each node with a key-chain. For large sensor networks it is infeasible to store a unique key for every other node in the key-chain of a sensor node. Consequently, for secure communication, either two nodes have a key in common in their key-chains and a wireless link between them, or there is a path, called a key-path, between these two nodes where each pair of neighboring nodes on the path has a key in common. The length of the key-path is the key factor in the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on Combinatorial Design for deciding how many and which keys to assign to each key-chain before sensor network deployment. In particular, Balanced Incomplete Block Designs (BIBD) and Generalized Quadrangles (GQ) are mapped to obtain efficient key distribution schemes. Performance and security properties of the proposed schemes are studied both analytically and computationally. Comparison to related work shows that the combinatorial approach produces better connectivity with smaller key-chain sizes.
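To make the BIBD mapping concrete, the following sketch (a minimal illustration assuming a prime order q; the function and variable names are not taken from the paper) builds key-chains from the projective plane PG(2, q), a symmetric BIBD in which any two blocks intersect in exactly one point, so any two nodes are guaranteed to share exactly one key.

```python
def projective_plane_blocks(q):
    """Key-chains from the projective plane PG(2, q) for prime q:
    q*q + q + 1 keys in the pool, each key-chain holds q + 1 keys,
    and any two key-chains share exactly one key (a symmetric BIBD)."""
    # Normalised representatives of the projective points over GF(q).
    points = ([(0, 0, 1)] +
              [(0, 1, z) for z in range(q)] +
              [(1, y, z) for y in range(q) for z in range(q)])
    index = {p: i for i, p in enumerate(points)}
    blocks = []
    for a, b, c in points:  # each line a*x + b*y + c*z = 0 (mod q) is a block
        block = {index[(x, y, z)] for (x, y, z) in points
                 if (a * x + b * y + c * z) % q == 0}
        blocks.append(sorted(block))
    return blocks

chains = projective_plane_blocks(7)   # 57 keys, key-chains of size 8
assert all(len(set(chains[i]) & set(chains[j])) == 1
           for i in range(len(chains)) for j in range(i + 1, len(chains)))
```

With q = 7, for example, a pool of 57 keys yields key-chains of only 8 keys while every pair of nodes still shares a key directly, which is the kind of connectivity-versus-chain-size trade-off the abstract refers to.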
Abstract:
Key distribution is one of the most challenging security issues in wireless sensor networks where sensor nodes are randomly scattered over a hostile territory. In such a deployment scenario, there is no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide how to distribute key pairs to sensor nodes before deployment. Existing approaches to this problem assign more than one key, namely a key-chain, to each node. Key-chains are randomly drawn from a key-pool. Either two neighboring nodes have a key in common in their key-chains, or there is a path, called a key-path, between these two nodes where each pair of neighboring nodes on the path has a key in common. The problem in such a solution is to decide on the key-chain size and key-pool size so that every pair of nodes can establish a session key directly or through a path with high probability. The length of the key-path is the key factor in the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on Combinatorial Design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools.
Abstract:
Well-designed initialisation and keystream generation processes for stream ciphers should ensure that each key-IV pair generates a distinct keystream. In this paper, we analyse some ciphers where this does not happen due to state convergence occurring either during initialisation, keystream generation or both. We show how state convergence occurs in each case and identify two mechanisms which can cause state convergence.
Abstract:
Courtney Pedersen and Charles Robb's A Natural History of Trees was an installation mounted at Blindside ARI in Melbourne's CBD in 2012. The work took the form of a pine-panelled room containing a pair of life-sized tree trunks composed entirely of stacks of cut paper discs. A faux bois stool reinforced the sense of artificiality. Claustrophobic and precarious, the installation was simultaneously a response to the complexity of our relationship with nature and place, and an evocation of the precarious quality of the collaborative process. The exhibition was accompanied by a catalogue with an essay by writer/curator Jane O'Neill.
Abstract:
The numerical solution of stochastic differential equations (SDEs) has recently focused on the development of numerical methods with good stability and order properties. These numerical implementations have been made with a fixed stepsize, but there are many situations in which a fixed stepsize is not appropriate. In the numerical solution of ordinary differential equations, much work has been carried out on developing robust implementation techniques using a variable stepsize. It has been necessary, in the deterministic case, to consider the "best" choice for an initial stepsize, as well as to develop effective strategies for stepsize control; the same, of course, must be done in the stochastic case. In this paper, proportional-integral (PI) control is applied to a variable stepsize implementation of an embedded pair of stochastic Runge-Kutta methods used to obtain numerical solutions of nonstiff SDEs. For stiff SDEs, the embedded pair of the balanced Milstein and balanced implicit methods is implemented in variable stepsize mode using a predictive controller for the stepsize change. The extension of these stepsize controllers, from a digital filter theory point of view, to PI with derivative (PID) control is also implemented. The implementations show the improvement in efficiency that can be attained when using these control theory approaches compared with the regular stepsize change strategy.
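A minimal sketch of the kind of PI step-size controller described here (the gains, safety factor and limits below are illustrative assumptions, not values from the paper): the step size is rescaled using the current and previous local error estimates produced by the embedded pair.

```python
# Hypothetical PI step-size controller for an embedded pair:
# err is the current local error estimate, err_prev the estimate from the
# previously accepted step, and p the order of the lower-order method.
def pi_step_control(h, err, err_prev, tol, p, k_i=0.3, k_p=0.4,
                    safety=0.9, h_min=1e-8, h_max=1.0):
    err = max(err, 1e-15)                      # guard against division by zero
    factor = safety * (tol / err) ** (k_i / (p + 1)) \
                    * (err_prev / err) ** (k_p / (p + 1))
    factor = min(5.0, max(0.2, factor))        # limit the step-size change
    h_new = min(h_max, max(h_min, h * factor))
    accepted = err <= tol
    return h_new, accepted
```

A step is retried with the reduced step size whenever `accepted` is false; in the stochastic setting the Wiener increments over the rejected interval must then be subdivided consistently (for example via a Brownian bridge) so that the numerical solution stays on the same Brownian path.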
Abstract:
Secure communications between a large number of sensor nodes that are randomly scattered over a hostile territory necessitate efficient key distribution schemes. However, due to limited resources at sensor nodes, such schemes cannot be based on post-deployment computations. Instead, pairwise (symmetric) keys are required to be pre-distributed by assigning a list of keys (a.k.a. a key-chain) to each sensor node. If a pair of nodes does not have a common key after deployment, they must find a key-path with secured links. The objective is to minimize the key-chain size while (i) maximizing the pairwise key sharing probability and resilience, and (ii) minimizing the average key-path length. This paper presents a deterministic key distribution scheme based on Expander Graphs. It shows how to map the parameters (e.g., degree, expansion, and diameter) of a Ramanujan Expander Graph to the desired properties of a key distribution scheme for a physical network topology.
Abstract:
Stochastic differential equations (SDEs) arise from physical systems where the parameters describing the system can only be estimated or are subject to noise. Much work has been done recently on developing higher-order Runge-Kutta methods for solving SDEs numerically. Fixed stepsize implementations of numerical methods have limitations when, for example, the SDE being solved is stiff, as this forces the stepsize to be very small. This paper presents a completely general variable stepsize implementation of an embedded Runge-Kutta pair for solving SDEs numerically; in this implementation, there is no restriction on the value used for the stepsize, and it is demonstrated that the integration remains on the correct Brownian path.
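One standard way to keep a variable stepsize integration on the same Brownian path is to store every sampled value of the Wiener process and fill in intermediate times with a Brownian bridge; the sketch below is a generic illustration of such a path object (not the authors' code), so that rejected and refined steps always query a single consistent trajectory.

```python
import math, random

class BrownianPath:
    """Samples a single Brownian path at arbitrary times t >= 0.  Points
    between already-sampled times are filled in with a Brownian bridge, so
    rejecting and halving a step never changes the underlying path."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.samples = {0.0: 0.0}            # time -> W(time)

    def __call__(self, t):
        if t in self.samples:
            return self.samples[t]
        times = sorted(self.samples)
        left = max(s for s in times if s < t)
        later = [s for s in times if s > t]
        if later:                            # Brownian bridge between neighbours
            right = min(later)
            wl, wr = self.samples[left], self.samples[right]
            mean = wl + (t - left) / (right - left) * (wr - wl)
            var = (t - left) * (right - t) / (right - left)
        else:                                # extend the path beyond the last sample
            mean, var = self.samples[left], t - left
        w = self.rng.gauss(mean, math.sqrt(var))
        self.samples[t] = w
        return w
```

A solver can then evaluate the path at whatever times its accepted or rejected steps require, and repeated or refined queries always return values consistent with one underlying Brownian trajectory.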
Abstract:
We consider a Cooperative Intrusion Detection System (CIDS), a distributed AIS-based (Artificial Immune System) IDS in which nodes collaborate over a peer-to-peer overlay network. The AIS uses the negative selection algorithm for the selection of detectors (e.g., vectors of features such as CPU utilization, memory usage and network activity). For better detection performance, selecting all possible detectors for a node is desirable, but it may not be feasible due to storage and computational overheads. Limiting the number of detectors, on the other hand, comes with the danger of missing attacks. We present a scheme for the controlled and decentralized division of detector sets where each IDS is assigned to a region of the feature space. We investigate the trade-off between scalability and robustness of detector sets. We address the problem of self-organization in CIDS so that each node generates a distinct set of detectors to maximize the coverage of the feature space, while pairs of nodes exchange their detector sets to provide a controlled level of redundancy. Our contribution is twofold. First, we use deterministic techniques from combinatorial design theory and graph theory, based on Symmetric Balanced Incomplete Block Designs, Generalized Quadrangles and Ramanujan Expander Graphs, to decide how many and which detectors are exchanged between which pairs of IDS nodes. Second, we use a classical epidemic model (the SIR model) to show how properties of the deterministic techniques can help us to reduce the attack spread rate.
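For readers unfamiliar with the negative selection algorithm mentioned above, the sketch below is a generic illustration with made-up parameters (not the paper's detector scheme): detectors are random feature vectors kept only if they avoid a set of "self" samples, and an observation is flagged if it falls inside any detector's radius. The combinatorial designs described in the abstract would then govern which subsets of such detectors each IDS node holds and exchanges.

```python
import random

def negative_selection(self_samples, n_detectors, radius, dim, seed=0):
    """Generate detectors that do not match any normal ("self") sample.
    Assumes the self region does not cover the whole feature space."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        candidate = [rng.random() for _ in range(dim)]
        matches_self = any(
            sum((c - s) ** 2 for c, s in zip(candidate, sample)) ** 0.5 < radius
            for sample in self_samples)
        if not matches_self:
            detectors.append(candidate)
    return detectors

def is_anomalous(observation, detectors, radius):
    """Flag an observation if it lies within any detector's radius."""
    return any(
        sum((o - d) ** 2 for o, d in zip(observation, det)) ** 0.5 < radius
        for det in detectors)
```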