5 results for Cryptography.
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The basic goal of this study is to extend old and propose new ways to generate knapsack sets suitable for use in public key cryptography. The knapsack problem and its cryptographic use are reviewed in the introductory chapter. Terminology is based on common cryptographic vocabulary; for example, solving the knapsack problem (which is here a subset sum problem) is termed decipherment. Chapter 1 also reviews the most famous knapsack cryptosystem, the Merkle-Hellman system. It is based on a superincreasing knapsack and uses modular multiplication as a trapdoor transformation. The insecurity caused by these two properties exemplifies the two general categories of attacks against knapsack systems, and these categories provide the motivation for Chapters 2 and 4. Chapter 2 discusses the density of a knapsack and the dangers of having a low density. Chapter 3 briefly interrupts the more abstract treatment by showing examples of small injective knapsacks and extrapolating conjectures on some characteristics of larger knapsacks, especially their density and number. The most common trapdoor technique, modular multiplication, is likely to cause insecurity, but as argued in Chapter 4, it is difficult to find any other simple trapdoor techniques. This discussion also provides a basis for the introduction of various categories of non-injectivity in Chapter 5. Besides general ideas on the non-injectivity of knapsack systems, Chapter 5 introduces and evaluates several ways to construct such systems, most notably the "exceptional blocks" in superincreasing knapsacks and the use of a "too small" modulus in the modular multiplication used as a trapdoor technique. The author believes that non-injectivity is the most promising direction for the development of knapsack cryptosystems. Chapter 6 modifies two well-known knapsack schemes, the Merkle-Hellman multiplicative trapdoor knapsack and the Graham-Shamir knapsack. The main interest is in aspects other than non-injectivity, although that is also exploited. At the end of the chapter, constructions proposed by Desmedt et al. are presented to serve as a comparison for the developments of the subsequent three chapters. Chapter 7 provides a general framework for the iterative construction of injective knapsacks from smaller knapsacks, together with a simple example, the "three elements" system. In Chapters 8 and 9 the general framework is put into practice in two different ways. Modularly injective small knapsacks are used in Chapter 8 to construct a large knapsack, called the congruential knapsack. The addends of a subset sum can be found by decrementing the sum iteratively, using each of the small knapsacks and their moduli in turn. The construction is also generalized to the non-injective case, which can lead to especially good results in density without complicating the deciphering process too much. Chapter 9 presents three related ways to realize the general framework of Chapter 7. The main idea is to iteratively join small knapsacks, each element of which satisfies the superincreasing condition. As a whole, none of these systems needs to be superincreasing, although the density achieved is no better than in the superincreasing case. The new knapsack systems are injective, but they can be deciphered with the same searching method as the non-injective knapsacks with the "exceptional blocks" of Chapter 5. The final Chapter 10 first reviews the Chor-Rivest knapsack system, which has withstood all cryptanalytic attacks.
A couple of modifications to the use of this system are presented in order to further increase security or to make the construction easier. The latter goal is pursued by reducing the size of the Chor-Rivest knapsack embedded in the modified system.
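To make the Merkle-Hellman construction reviewed in this abstract concrete, the following Python sketch builds a superincreasing private knapsack, disguises it by modular multiplication, and deciphers a subset sum greedily. It is purely illustrative (toy parameters, and the scheme is insecure, as the abstract explains); it is not code from the thesis.

import random
from math import gcd

def keygen(n=8):
    # Private key: a superincreasing knapsack, i.e. each element
    # exceeds the sum of all previous ones.
    b, total = [], 0
    for _ in range(n):
        e = total + random.randint(1, 10)
        b.append(e)
        total += e
    m = total + random.randint(1, 10)          # modulus larger than the total sum
    w = random.randrange(2, m)
    while gcd(w, m) != 1:                      # multiplier must be invertible mod m
        w = random.randrange(2, m)
    a = [(w * e) % m for e in b]               # public, seemingly hard knapsack
    return a, (b, w, m)

def encipher(bits, a):
    # The ciphertext is the subset sum selected by the plaintext bits.
    return sum(ai for ai, bit in zip(a, bits) if bit)

def decipher(c, priv):
    # Trapdoor: undo the modular multiplication, then solve the easy
    # superincreasing subset sum greedily from the largest element down.
    b, w, m = priv
    s = (c * pow(w, -1, m)) % m
    bits = [0] * len(b)
    for i in reversed(range(len(b))):
        if b[i] <= s:
            bits[i], s = 1, s - b[i]
    return bits

a, priv = keygen()
msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decipher(encipher(msg, a), priv) == msg

Note the mapping to the abstract's terminology: solving the subset sum is "decipherment", and an injective knapsack is exactly one in which every subset sum has a unique preimage, so decipherment is unambiguous.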
Abstract:
Offering services that handle valuable or confidential information, such as banking and commerce services, on the public Internet has created a need for strong authentication, that is, for reliably verifying the identity of users. Strong authentication uses the means offered by cryptographic methods to improve the security of the authentication event compared with weak authentication methods; authentication with a username-password combination can be considered a weak method. Certificates of a public key infrastructure can be used in services operating in the WWW environment to authenticate the parties of a connection. In this work, strong user authentication based on a public key infrastructure was designed for a service offered in the WWW environment, and a simple certificate authority, suitable as a component of the application providing the service, was implemented with the OpenSSL cryptographic toolkit. The work also reviews the basics of cryptography and the public key infrastructure, and presents existing certificate authority implementations and possible security threats. Strong authentication must be designed so that the user of the service understands the purpose of their actions and how those actions improve security. Strong user authentication has not become common in Internet services because of poor usability.
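Since the abstract describes implementing a simple certificate authority, a minimal sketch of what such a component does is given below: generate a CA key, self-sign a root certificate, and use it to sign a user certificate. The thesis used the OpenSSL toolkit directly; this illustration uses Python's cryptography package instead, and the names ("Demo CA", "alice") are hypothetical.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_cert(subject_cn, subject_key, issuer_cn, issuer_key, ca=False):
    # The issuer's signature over the subject's name and public key
    # is what turns a key pair into a verifiable certificate.
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject_cn)]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, issuer_cn)]))
        .public_key(subject_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=ca, path_length=None), critical=True)
        .sign(issuer_key, hashes.SHA256())
    )

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
root_cert = make_cert("Demo CA", ca_key, "Demo CA", ca_key, ca=True)   # self-signed root
user_cert = make_cert("alice", user_key, "Demo CA", ca_key)            # user cert signed by the CA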
Abstract:
Exploiting short-range radio technologies enables the use of new kinds of local services and the development of existing ones. Access control, as an everyday service, has been chosen as the example application of this work. Several identification and authorization methods are examined, and public key infrastructure is presented in more detail. The wireless technologies Bluetooth, Zigbee, RFID and IrDA are introduced at a general level in the chapter on wireless technologies. The structure of Bluetooth, including its security architecture, is studied more closely. Bluetooth is used for data transfer in the wireless access control system designed in this work. A portable terminal acts as the user's personal trusted device, which can be used as a key. User identification and authorization are based on a public key infrastructure: certificates signed by the administration contain, in addition to the user's public key, information about the user and their access rights. User identification at the access control points is performed with a challenge-response method based on the use of public and private keys. In short, the system uses Bluetooth terminals as wireless keys.
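The challenge-response step described above can be sketched in a few lines: the access control point sends a fresh random nonce, the user's trusted device signs it with its private key, and the verifier checks the signature with the public key carried in the user's certificate. A minimal illustration using Python's cryptography package (Bluetooth transport and certificate validation omitted; not code from the thesis):

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

device_key = ec.generate_private_key(ec.SECP256R1())   # private key held on the trusted device
public_key = device_key.public_key()                   # distributed inside the user's certificate

challenge = os.urandom(32)                             # fresh nonce from the access control point
response = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(response, challenge, ec.ECDSA(hashes.SHA256()))
    print("access granted")
except InvalidSignature:
    print("access denied")

The random nonce is essential: it prevents an eavesdropper from replaying an earlier response, since each authentication run signs a different challenge.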
Abstract:
After introducing the no-cloning theorem and the most common forms of approximate quantum cloning, universal quantum cloning is considered in detail. Its connections with the universal NOT gate, quantum cryptography and state estimation are presented and briefly discussed. The state estimation connection is used to show that the amount of extractable classical information and the total Bloch vector length are conserved in universal quantum cloning. The 1 → 2 qubit cloner is also shown to obey a complementarity relation between local and nonlocal information. These are interpreted as consequences of the conservation of total information in cloning. Finally, the performance of the 1 → M cloning network discovered by Bužek, Hillery and Knight is studied in the presence of decoherence, using the approach of Barenco et al. in which random phase fluctuations are attached to 2-qubit gates. The expression for the average fidelity is calculated for three cases, and it is found to depend on the optimal fidelity and the average of the phase fluctuations in a specific way; this is conjectured to be the form of the average fidelity in the general case. While the cloning network is found to be rather robust, it is nevertheless argued that the scalability of the quantum network implementation is poor, by studying the effect of decoherence during the preparation of the initial state of the cloning machine in the 1 → 2 case and observing that the loss in average fidelity can be large. This affirms the result of Maruyama and Knight, who reached the same conclusion in a slightly different manner.
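For reference alongside the fidelities discussed above, a standard result from the cloning literature (not stated in the abstract) gives the optimal fidelity of the universal 1 → M qubit cloner as

F_{\text{opt}}(1 \to M) = \frac{2M + 1}{3M}

so the 1 → 2 cloner attains F_opt = 5/6, and as M → ∞ the fidelity tends to 2/3, the optimal single-qubit state estimation fidelity, which reflects the state estimation connection mentioned in the abstract.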
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies are defined which are able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
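To make the execution model described in the abstract concrete, here is a minimal Python sketch (not from the thesis, and far simpler than RVC-CAL, which adds actor state, guards and priorities): actors communicate only through FIFO queues and may fire whenever their firing rule, here simply a token count per input, is satisfied. The driver loop at the end is the fully dynamic scheduler the thesis seeks to minimize, evaluating every firing rule on every round; a quasi-static scheduler would replace most of these checks with pre-computed static sequences.

from collections import deque

class Actor:
    def __init__(self, name, inputs, outputs, consume, action):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.consume = consume      # tokens required per input queue (the firing rule)
        self.action = action        # maps consumed tokens to output tokens

    def can_fire(self):
        return all(len(q) >= n for q, n in zip(self.inputs, self.consume))

    def fire(self):
        args = [[q.popleft() for _ in range(n)] for q, n in zip(self.inputs, self.consume)]
        for q, token in zip(self.outputs, self.action(*args)):
            q.append(token)

# A two-actor pipeline: double each value, then sum consecutive pairs.
src, mid, out = deque([1, 2, 3, 4]), deque(), deque()
double = Actor("double", [src], [mid], [1], lambda xs: [2 * xs[0]])
pairsum = Actor("pairsum", [mid], [out], [2], lambda xs: [xs[0] + xs[1]])

actors = [double, pairsum]
while any(a.can_fire() for a in actors):    # fully dynamic scheduling
    for a in actors:
        if a.can_fire():
            a.fire()
print(list(out))    # [6, 14]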