46 results for subset sum problems

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

80.00%

Publisher:

Abstract:

The basic goal of this study is to extend old and propose new ways to generate knapsack sets suitable for use in public key cryptography. The knapsack problem and its cryptographic use are reviewed in the introductory chapter. Terminology is based on common cryptographic vocabulary. For example, solving the knapsack problem (which is here a subset sum problem) is termed decipherment. Chapter 1 also reviews the most famous knapsack cryptosystem, the Merkle-Hellman system. It is based on a superincreasing knapsack and uses modular multiplication as a trapdoor transformation. The insecurity caused by these two properties exemplifies the two general categories of attacks against knapsack systems. These categories provide the motivation for Chapters 2 and 4. Chapter 2 discusses the density of a knapsack and the dangers of having a low density. Chapter 3 interrupts the more abstract treatment for a while by showing examples of small injective knapsacks and extrapolating conjectures on some characteristics of knapsacks of larger size, especially their density and number. The most common trapdoor technique, modular multiplication, is likely to cause insecurity, but as argued in Chapter 4, it is difficult to find any other simple trapdoor techniques. This discussion also provides a basis for the introduction of various categories of non-injectivity in Chapter 5. Besides general ideas on the non-injectivity of knapsack systems, Chapter 5 introduces and evaluates several ways to construct such systems, most notably the "exceptional blocks" in superincreasing knapsacks and the usage of "too small" a modulus in the modular multiplication used as a trapdoor technique. The author believes that non-injectivity is the most promising direction for the development of knapsack cryptosystems. Chapter 6 modifies two well-known knapsack schemes, the Merkle-Hellman multiplicative trapdoor knapsack and the Graham-Shamir knapsack. The main interest is in aspects other than non-injectivity, although that is also exploited. At the end of the chapter, constructions proposed by Desmedt et al. are presented to serve as a comparison for the developments of the subsequent three chapters. Chapter 7 provides a general framework for the iterative construction of injective knapsacks from smaller knapsacks, together with a simple example, the "three elements" system. In Chapters 8 and 9 the general framework is put into practice in two different ways. Modularly injective small knapsacks are used in Chapter 8 to construct a large knapsack, which is called the congruential knapsack. The addends of a subset sum can be found by decrementing the sum iteratively, using each of the small knapsacks and their moduli in turn. The construction is also generalized to the non-injective case, which can lead to especially good results in density without complicating the deciphering process too much. Chapter 9 presents three related ways to realize the general framework of Chapter 7. The main idea is to join iteratively small knapsacks, each element of which satisfies the superincreasing condition. As a whole, none of these systems needs to be superincreasing, although the density achieved is no better than in the superincreasing case. The new knapsack systems are injective, but they can be deciphered with the same searching method as the non-injective knapsacks with the "exceptional blocks" of Chapter 5. The final Chapter 10 first reviews the Chor-Rivest knapsack system, which has withstood all cryptanalytic attacks. A couple of modifications to the use of this system are presented in order to further increase its security or to make the construction easier. The latter goal is pursued by reducing the size of the Chor-Rivest knapsack embedded in the modified system.
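As a concrete illustration of the Merkle-Hellman construction reviewed in Chapter 1 (a superincreasing private knapsack, modular multiplication as the trapdoor, and greedy decipherment of the resulting subset sum), here is a minimal toy sketch; the parameters are deliberately tiny and insecure, and the code is illustrative rather than taken from the thesis:

```python
from math import gcd

def superincreasing_decrypt(total, weights):
    """Solve a superincreasing subset sum greedily, largest weight first."""
    bits = []
    for w in reversed(weights):
        if w <= total:
            bits.append(1)
            total -= w
        else:
            bits.append(0)
    assert total == 0, "not a valid ciphertext"
    return bits[::-1]

# Toy parameters (illustrative only, far too small to be secure).
private = [2, 3, 7, 14, 30]      # superincreasing: each element > sum of the previous
m, w = 61, 17                    # modulus > sum(private), multiplier coprime to it
assert m > sum(private) and gcd(w, m) == 1
public = [(w * a) % m for a in private]   # trapdoor: modular multiplication

message = [1, 0, 1, 1, 0]
cipher = sum(b * p for b, p in zip(message, public))   # public subset sum

w_inv = pow(w, -1, m)            # undo the trapdoor with w^-1 mod m
assert superincreasing_decrypt((w_inv * cipher) % m, private) == message
```

In the usual definition, the density of an n-element knapsack with largest element a_max is n / log2(a_max); public knapsacks produced by trapdoor transformations of this kind tend to have low density, which is what the attacks relevant to Chapter 2 exploit.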

Relevance:

30.00%

Publisher:

Abstract:

The increasing performance of computers has made it possible to solve algorithmically problems for which manual and possibly inaccurate methods were previously used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure at different areas. In the second problem the input consists of a set of sites where water quality observations have been made and of the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted at serial operation or a small number of CPU cores, whereas one, together with its further developments, is also suitable for parallel machines such as GPUs.
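The geometric core of the fetch length problem, the distance from a study point to the nearest shoreline in one direction, amounts to ray-segment intersection. The following brute-force sketch shows only that core step; the thesis develops far more efficient algorithms, and all names here are illustrative:

```python
import math

def fetch_length(study_point, direction_deg, shoreline_segments, max_fetch=1e9):
    """Distance from study_point to the nearest shoreline in one direction.

    study_point: (x, y); direction_deg: bearing in degrees;
    shoreline_segments: iterable of ((x1, y1), (x2, y2)) polygon edges.
    Returns max_fetch if no shoreline is hit in that direction.
    """
    px, py = study_point
    theta = math.radians(direction_deg)
    dx, dy = math.cos(theta), math.sin(theta)   # unit ray direction
    best = max_fetch
    for (x1, y1), (x2, y2) in shoreline_segments:
        ex, ey = x2 - x1, y2 - y1               # edge vector
        denom = dx * ey - dy * ex               # 2D cross product
        if abs(denom) < 1e-12:                  # ray parallel to edge
            continue
        # Solve study_point + t*(dx, dy) == (x1, y1) + s*(ex, ey).
        t = ((x1 - px) * ey - (y1 - py) * ex) / denom
        s = ((x1 - px) * dy - (y1 - py) * dx) / denom
        if t > 0 and 0.0 <= s <= 1.0:           # hit in front of the point, on the edge
            best = min(best, t)                 # t is the distance (unit direction)
    return best
```

With millions of segments, study points, and directions, this linear scan is exactly what a practical implementation must avoid, for example with a spatial index or a sweep, which is where the serial, multi-core, and GPU variants mentioned above differ.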

Relevance:

20.00%

Publisher:

Abstract:

Lecture given in Helsinki at the invitation of the Finnish Mathematical Society.

Relevance:

20.00%

Publisher:

Abstract:

This Master's thesis addresses the problem field of the saw line optimization system at the commissioning company, the Stora Enso Timber Oy Ltd Kotka sawmill. At the start of the study there was no certainty about the functionality of log rotation, lateral cant positioning, or side board optimization on the profiling saw line. The focus of the work is side board optimization, whose functionality in production runs was highly questionable at the start of the study. The objectives of the work come down to achieving a better raw material recovery ratio, which makes long-term profitability more realistic to attain. As a whole, the optimization system of the Kotka saw line has reached a level acceptable for production use. The target set for log rotation accuracy was met: 90% of rotation results fall within the ±10° error window, and the sum of the absolute value of the mean error and the deviation around it is at most 10° (a sketch of this check follows below). Lateral cant positioning was found in the study to be a secondary optimization system. Its control is based on data measured by the log scanner, so the spread in log rotation causes inaccuracy in cant alignment. Using lateral cant positioning requires additional measurements so that the functionality of cant alignment optimization can be verified. In developing the functionality of side board optimization, the level was reached at which real development work can be done. Test runs and inspection of the optimization program revealed fundamental errors, which were corrected. With working side board optimization it is possible to manage production control better, so that production, especially of side boards, can be targeted more closely to demand and to a good recovery ratio for each sawing pattern batch. The raw material recovery ratio has improved. Based on individual sawing pattern comparisons and example calculations, the profit potential of log rotation and side board optimization is 0.6...1.5 MEUR. The indirect profit potential is larger, because the sawing part of the production process is very flexible to changes in market demand. The saw line can easily produce a wide product matrix from a wide mix of logs, and side board optimization plays a key role in this. The production planning function should be developed to match the possibilities that side board optimization offers. The product matrix and the corresponding sawing patterns for the incoming log mix should be rebuilt in those respects where side board optimization allows variation.
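The quoted rotation accuracy target combines two numeric criteria. A minimal sketch of the check, assuming "deviation" means the sample standard deviation of the rotation errors (an assumption; the thesis may define it differently), with illustrative names:

```python
import statistics

def rotation_ok(errors_deg, window=10.0, share=0.90):
    """Check the two log-rotation accuracy criteria quoted above:
    (1) at least `share` of rotation errors fall within the +/- `window`
        degree error window, and
    (2) |mean error| + standard deviation of the errors is at most `window`.
    errors_deg is a list of measured rotation errors in degrees.
    """
    within = sum(1 for e in errors_deg if abs(e) <= window) / len(errors_deg)
    spread = abs(statistics.mean(errors_deg)) + statistics.stdev(errors_deg)
    return within >= share and spread <= window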

Relevance:

20.00%

Publisher:

Abstract:

Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) of the solution, where the generation of spurious oscillations or smearing should be precluded. This work is devoted to the development of an efficient numerical technique to deal with pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method, developed in this study, is based on using meshless numerical particles which carry the solution along the characteristics defining the convective transport. The resolution of steep fronts of the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. In the case of convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has spatial accuracy of second order and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in regions of steep fronts of the solution. Moreover, the particle transport method can be successfully applied to the numerical simulation of real-life problems in, for example, chemical engineering.
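As a rough illustration of the semi-Lagrangian particle idea described above (a toy 1D sketch, not the thesis implementation; all names are illustrative), the following step transports particle values exactly along the characteristics of a linear convection equation and then projects them back onto a fixed Eulerian grid:

```python
import numpy as np

def particle_advection_step(x_p, u_p, velocity, dt, grid):
    """One particle-transport step for u_t + velocity * u_x = 0 on a uniform grid.

    Particles at positions x_p carry values u_p; the values are constant
    along the characteristics dx/dt = velocity, so transport is exact.
    The particle values are then projected onto the grid by linear
    (hat-function) deposition followed by normalization.
    """
    x_p = x_p + velocity * dt                  # exact transport along characteristics
    dx = grid[1] - grid[0]
    u_grid = np.zeros_like(grid)
    weight = np.zeros_like(grid)
    for x, u in zip(x_p, u_p):                 # deposit each particle on two nodes
        i = int((x - grid[0]) // dx)
        if 0 <= i < len(grid) - 1:
            w = (x - grid[i]) / dx             # fraction toward the right node
            u_grid[i] += (1 - w) * u
            weight[i] += 1 - w
            u_grid[i + 1] += w * u
            weight[i + 1] += w
    mask = weight > 0
    u_grid[mask] /= weight[mask]               # weighted average of particle values
    return x_p, u_grid
```

Each grid value is a convex combination of nearby particle values, so the projection cannot create new extrema; this is in the same spirit as, though much cruder than, the fast monotone projection mentioned in the abstract.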

Relevance:

20.00%

Publisher:

Abstract:

Alkyl ketene dimers (AKD) are effective and highly hydrophobic sizing agents for the internal sizing of alkaline papers, but in some cases they may form deposits on paper machines and copiers. In addition, alkenyl succinic anhydride (ASA) based sizing agents are highly reactive, producing on-machine sizing, but under uncontrolled wet end conditions the hydrolysis of ASA may cause problems. This thesis aims at developing an improved ketene dimer based sizing agent that would have a lower tendency to form deposits on paper machines and copiers than the traditional type of AKD. The aim is also to improve the ink jet printability of AKD-sized paper. The sizing characteristics of ketene dimers are compared to those of ASA. A lower tendency of ketene dimer deposit formation was shown in paper machine trials and in printability tests when branched fatty acids were used in the manufacture of a ketene dimer based sizing agent. Matching the melting and solidification temperature of a ketene dimer size to the process temperature of a paper machine or a copier contributes to machine cleanliness. A lower hydrophobicity was found for paper sized with the branched ketene dimer compared to paper sized with traditional AKD. However, the ink jet print quality could be improved by the use of a branched ketene dimer, which helps in balancing the paper hydrophobicity for both black and color printing. The use of a high amount of protective colloid in the emulsification was found to be useful for the sizing performance of the liquid type of sizing agents. Similar findings were made for both the branched ketene dimer and ASA.