874 results for finance-based schemes
Abstract:
Performance-based planning is a form of planning regulation that is not well understood, and the theoretical advantages of this type of planning are rarely achieved in practice. Normatively, this type of regulation relies on quantifiable, technically based performance standards designed to manage the effects of development: the standards provide certainty about the required level of performance, while the means of achieving it remain flexible. Few empirical studies have attempted to examine how performance-based planning has been conceptualised and implemented in practice. The existing literature is predominantly anecdotal and consultant-based (Baker et al. 2006) and has not sought to quantitatively examine how land use has been managed or to determine how context influences implementation. The Integrated Planning Act 1997 (IPA) operated as Queensland's principal planning legislation between March 1998 and December 2009. The IPA prevented Local Governments from prohibiting development or use, and the term zone was absent from the legislation. While the IPA did not use the term performance-based planning, the system is widely considered to be performance based in practice (e.g. Baker et al. 2006; Steele 2009a, 2009b). However, the degree to which the IPA and the planning system in Queensland are performance based is debated (e.g. Yearbury 1998; England 2004). Four research questions guided the research framework, using Queensland as the case study. The questions sought to: determine whether there is a common understanding of performance-based planning; identify how performance-based planning was expressed under the IPA; understand how performance-based planning was implemented in plans; and explore the experiences of participants in the planning system. The research developed a performance adoption spectrum, which describes how performance-based planning is implemented, ranging between pure and hybrid interpretations. An ex-post evaluation of seventeen IPA plans sought to determine plan performativity within the conceptual spectrum. Land use was examined from the procedural dimension of performance (Assessment Tables) and the substantive dimension of performance (Codes). A documentary analysis and forty-one interviews supplemented the research. The analytical framework considered how context influenced performance-based planning, including whether: the location of the local government affected land use management techniques; temporal variation in implementation exists; plan-making guidelines affected implementation; different perceptions of the concept exist; and this type of planning applies to a range of spatial scales. Outcomes were viewed as the medium for determining the acceptability of development in Queensland, a significant departure from the pure approaches found in the United States. Interviews highlighted the absence of plan-making direction in the IPA, which contributed to confusion about the intended direction of the planning system and to the myth that the IPA would guarantee a performance-based system. A hybridised form of performance-based planning evolved in Queensland that depended on prescriptive land use zones and specification of land use type, with some local governments going to extreme lengths to discourage certain activities in a predetermined manner. Context had varying degrees of influence on plan-making methods.
Decision-making was found to be inconsistent, and the system created a range of unforeseen consequences, including difficulties with land valuation and increased development speculation; the role of planners in court was also found to be less critical than under the previous planning system.
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection for messages. This approach is more efficient than a two-step process that provides confidentiality by encrypting the message and, in a separate pass, provides integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided either by stream ciphers with built-in authentication mechanisms or by block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that treats these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms, analysing the mechanisms for providing confidentiality and for providing integrity in such algorithms. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal state of the cipher to generate a MAC. The third classification is based on whether the sequence used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure against this type of algebraic attack. We conclude that using a key-dependent S-box in the NLF twice (as in SSS), and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers can be considered as instances of this model, namely SSS, NLSv2 and SOBER-128. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated.
It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model, namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
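As a hedged illustration of the direct-injection model (a toy, not SSS, NLSv2 or SOBER-128), the sketch below XORs each message bit straight into a binary register stepped by a fixed matrix A; the register size, feedback taps and tag read-out are invented, and the nonlinear filter is deliberately omitted so the linearity that enables collision forgeries is visible.

```python
# Toy instance of the direct-injection matrix model over GF(2).
import numpy as np

N = 16                                    # toy register size in bits
A = np.zeros((N, N), dtype=np.uint8)      # fixed state-update matrix
A[1:, :-1] = np.eye(N - 1, dtype=np.uint8)
A[0, [0, 3, 5, N - 1]] = 1                # toy feedback taps

def accumulate(message_bits, key_state):
    """Inject each message bit directly into the stepped state."""
    s = np.array(key_state, dtype=np.uint8)
    for b in message_bits:
        s = A @ s % 2                     # linear register update
        s[0] ^= b                         # direct injection of the bit
    return s                              # toy MAC = final state

rng = np.random.default_rng(1)
key = rng.integers(0, 2, N)               # secret initial state
tag = accumulate([1, 0, 1, 1, 0, 0, 1, 0], key)

# Because the update is purely linear, two equal-length messages whose
# bit-difference is annihilated by the powers of A produce the same
# tag -- the collision-based forgery condition the thesis analyses.
```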
Abstract:
We consider the problem of increasing the threshold parameter of a secret-sharing scheme after the setup (share distribution) phase, without further communication between the dealer and the shareholders. Previous solutions to this problem require one to start off with a non-standard scheme designed specifically for this purpose, or to have communication between shareholders. In contrast, we show how to increase the threshold parameter of the standard Shamir secret-sharing scheme without communication between the shareholders. Our technique can thus be applied to existing Shamir schemes even if they were set up without consideration to future threshold increases. Our method is a new positive cryptographic application for lattice reduction algorithms, inspired by recent work on lattice-based list decoding of Reed-Solomon codes with noise bounded in the Lee norm. We use fundamental results from the theory of lattices (geometry of numbers) to prove quantitative statements about the information-theoretic security of our construction. These lattice-based security proof techniques may be of independent interest.
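For context, here is a minimal sketch of the standard Shamir (t, n) scheme whose threshold the technique raises. The prime modulus and parameters are illustrative, and the lattice-based threshold-increase step itself is not shown.

```python
# Textbook Shamir (t, n) secret sharing over a prime field.
import random

P = 2**61 - 1   # a Mersenne prime modulus (illustrative size)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
```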
Abstract:
We consider the problem of increasing the threshold parameter of a secret-sharing scheme after the setup (share distribution) phase, without further communication between the dealer and the shareholders. Previous solutions to this problem require one to start off with a non-standard scheme designed specifically for this purpose, or to have secure channels between shareholders. In contrast, we show how to increase the threshold parameter of the standard CRT (Chinese Remainder Theorem) secret-sharing scheme without secure channels between the shareholders. Our method can thus be applied to existing CRT schemes even if they were set up without consideration to future threshold increases. Our method is a positive cryptographic application for lattice reduction algorithms, and we also use techniques from lattice theory (geometry of numbers) to prove statements about the correctness and information-theoretic security of our constructions.
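For context, here is a minimal sketch of a Mignotte-style CRT threshold scheme, one standard CRT variant of the kind referred to (the abstract does not say whether Mignotte's or Asmuth-Bloom's scheme is meant). The moduli are toy values, and the threshold-increase construction itself is not shown.

```python
# Mignotte-style CRT (t, n) secret sharing with toy moduli.
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem: recover x modulo prod(moduli)."""
    M, x = prod(moduli), 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)      # modular inverse (Py >= 3.8)
    return x % M

# (t=2, n=3): any 2 of 3 shares recover S; the secret must lie in the
# window (largest modulus, product of the 2 smallest) = (17, 143).
moduli = [11, 13, 17]                     # pairwise coprime
S = 100
shares = [(S % m, m) for m in moduli]     # share_i = S mod m_i

r, m = zip(*shares[:2])                   # any two shares suffice
assert crt(list(r), list(m)) == S
# A single share only restricts S to an arithmetic progression.
```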
Abstract:
A crucial issue with hybrid quantum secret sharing schemes is the amount of data allocated to the participants: the smaller the amount of allocated data, the better the performance of a scheme. Moreover, quantum data is hard and expensive to deal with, so it is desirable to use as little quantum data as possible. To achieve this goal, we first construct extended unitary operations as the tensor product of n (n ≥ 2) basic unitary operations, and then use those extended operations to design two quantum secret sharing schemes. The resulting dual compressible hybrid quantum secret sharing schemes, in which classical data play a complementary role to quantum data, range from threshold schemes to schemes based on general access structures. Compared with existing hybrid quantum secret sharing schemes, our proposed schemes reduce not only the number of quantum participants, but also the number of particles and the size of the classical shares. To be exact, the number of particles used to carry quantum data is reduced to 1, while the size of the classical secret shares is reduced to (l−2)/(m−1) in the ((m+1, n′)) threshold scheme and to (l−2)/r2 (where r2 is the number of maximal unqualified sets) in the scheme based on the adversary structure. Consequently, our proposed schemes can greatly reduce the cost and difficulty of generating and storing EPR pairs and lower the risk of transmitting encoded particles.
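As a small illustration of the first construction step described above, the sketch below builds an "extended" unitary as the tensor (Kronecker) product of n ≥ 2 basic unitary operations and checks that the result is still unitary. Pauli-X and Hadamard are stand-ins; the schemes' actual basic operations are not specified here.

```python
# Extended unitary = tensor product of basic unitaries.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)                # Pauli-X
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard

def extend(*ops):
    """Tensor product of basic unitaries; acts on len(ops) qubits."""
    U = np.eye(1, dtype=complex)
    for op in ops:
        U = np.kron(U, op)
    return U

U = extend(X, H, X)   # n = 3 extended operation on 3 qubits
# The tensor product of unitaries is unitary: U U† = I.
assert np.allclose(U @ U.conj().T, np.eye(8))
```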
Abstract:
In 2005, Ginger Myles and Hongxia Jin proposed a software watermarking scheme based on converting jump instructions, or unconditional branch statements (UBSs), into calls to a fingerprint branch function (FBF) that computes the correct target address of the UBS as a function of the generated fingerprint and an integrity check. If the program is tampered with, the fingerprint and integrity check values change and the target address is not computed correctly. In this paper, we present an attack based on tracking stack pointer modifications that breaks the scheme, and we provide implementation details. The key element of the attack is to remove the fingerprint and integrity-check generating code from the program after disassociating the target address from the fingerprint and integrity values. Using debugging tools that give the attacker extensive control to track stack pointer operations, we perform both subtractive and watermark replacement attacks. The major steps in the attack are automated, resulting in a fast and low-cost attack.
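A toy model of the scheme under attack may help. The sketch below imitates a fingerprint branch function: the jump target is recovered from run-time fingerprint and integrity values, so tampering breaks the target computation. The names, the checksum and the combining function are all invented for illustration; the original scheme operates on compiled binaries, not Python.

```python
# Toy imitation of a fingerprint branch function (FBF).
def integrity_check(code_bytes):
    """Stand-in integrity check: a toy checksum over the 'code'."""
    return sum(code_bytes) & 0xFFFF

CODE = bytes([0x90, 0x48, 0x89, 0xE5])   # stand-in program bytes
FINGERPRINT = 0x5EED                     # per-copy watermark value

def original_target():
    print("control flow reaches the intended target")

# Table keyed by the one expected (fingerprint XOR checksum) value.
_DISPATCH = {FINGERPRINT ^ integrity_check(CODE): original_target}

def fbf(code_bytes):
    """Replaces a direct jump: tampering changes the checksum, the
    lookup key misses, and the correct target is never computed."""
    key = FINGERPRINT ^ integrity_check(code_bytes)
    _DISPATCH.get(key, lambda: print("tamper detected"))()

fbf(CODE)                 # intact program: the 'jump' succeeds
fbf(bytes([0x90] * 4))    # tampered code: wrong target
```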
Abstract:
This project analyses and evaluates the integrity assurance mechanisms used in four Authenticated Encryption schemes based on symmetric block ciphers. These schemes are all cross-chaining block cipher modes that claim to provide both confidentiality and integrity assurance simultaneously, in one pass over the data. The investigations include assessing the validity of an existing forgery attack on certain schemes, applying the attack approach to other schemes, and implementing the attacks to verify the claimed probabilities of successful forgery. For these schemes, the theoretical basis of the attack was developed, the attack algorithm was implemented, and computer simulations were performed for experimental verification.
Abstract:
Pricing is an effective tool for controlling congestion and achieving quality of service (QoS) provisioning across multiple differentiated levels of service. In this paper, we consider the problem of pricing for congestion control in a network of nodes with multiple queues and multiple grades of service. We present a closed-loop multi-layered pricing scheme and propose an algorithm for finding the optimal state-dependent price levels for the individual queues at each node. This differs from most adaptive pricing schemes in the literature, which do not obtain a closed-loop state-dependent pricing policy. Our method finds optimal price levels that are functions of the queue lengths at the individual queues. We also propose a variant of the above scheme that assigns prices to incoming packets at each node according to a weighted average queue length at that node; this is done to reduce frequent price variations and is in the spirit of the random early detection (RED) mechanism used in TCP/IP networks. Our numerical results show a considerable performance improvement for both of our schemes over a recently proposed related scheme, in terms of both throughput and delay. In particular, our first scheme exhibits a throughput improvement in the range of 67-82% across all routes over that scheme.
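As a hedged sketch of the RED-inspired variant only (not the optimal closed-loop algorithm itself), the snippet below assigns a price to arriving packets from an exponentially weighted average of the queue length, damping frequent price changes. The weight, thresholds and price levels are invented for illustration.

```python
# Price packets from a smoothed (EWMA) queue length, RED-style.
PRICE_LEVELS = [0.0, 0.5, 1.0, 2.0]   # increasing congestion prices
THRESHOLDS = [2, 5, 10]               # average-queue-length breakpoints
W = 0.1                               # EWMA weight (RED-like)

def price_for(avg_q):
    """Map the smoothed queue length to a price level."""
    for level, threshold in zip(PRICE_LEVELS, THRESHOLDS):
        if avg_q < threshold:
            return level
    return PRICE_LEVELS[-1]

avg_q = 0.0
for q in [0, 1, 3, 6, 9, 12, 8, 4]:   # sampled queue lengths at a node
    avg_q = (1 - W) * avg_q + W * q   # weighted average queue length
    print(f"q={q:2d}  avg={avg_q:5.2f}  price={price_for(avg_q)}")
```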
Abstract:
The introduction of processor-based instruments in power systems is resulting in rapid growth in the volume of measured data. The present practice in most utilities is to store only some of the important data in a retrievable fashion, and only for a limited period; subsequently, even this data is either deleted or moved to backup devices. The investigations presented here explore the application of lossless data compression techniques for archiving all of the operational data, so that it can be put to more effective use. Four arithmetic coding methods, suitably modified for handling power system steady-state operational data, are proposed. The performance of the proposed methods is evaluated using actual data pertaining to the Southern Regional Grid of India.
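As background only (not the four modified methods themselves), a minimal floating-point arithmetic coder shows the interval-narrowing idea behind arithmetic coding; archival-grade coders use integer renormalisation instead, since this toy version loses precision on long inputs.

```python
# Minimal floating-point arithmetic coder for short symbol sequences.
from collections import Counter

def build_model(symbols):
    """Assign each symbol a sub-interval of [0, 1) by frequency."""
    freq, total = Counter(symbols), len(symbols)
    intervals, low = {}, 0.0
    for s in sorted(freq):
        p = freq[s] / total
        intervals[s] = (low, low + p)
        low += p
    return intervals

def encode(symbols, intervals):
    low, high = 0.0, 1.0
    for s in symbols:
        s_lo, s_hi = intervals[s]
        span = high - low
        low, high = low + span * s_lo, low + span * s_hi
    return (low + high) / 2               # any value in [low, high)

def decode(code, intervals, n):
    out = []
    for _ in range(n):
        for s, (s_lo, s_hi) in intervals.items():
            if s_lo <= code < s_hi:
                out.append(s)
                code = (code - s_lo) / (s_hi - s_lo)
                break
    return out

msg = list("110 112 110")                 # toy steady-state readings
model = build_model(msg)
code = encode(msg, model)
assert decode(code, model, len(msg)) == msg
```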